Sep 12 23:48:20.838559 kernel: Booting Linux on physical CPU 0x0000000000 [0x413fd0c1]
Sep 12 23:48:20.838580 kernel: Linux version 6.12.47-flatcar (build@pony-truck.infra.kinvolk.io) (aarch64-cros-linux-gnu-gcc (Gentoo Hardened 14.2.1_p20241221 p7) 14.2.1 20241221, GNU ld (Gentoo 2.44 p1) 2.44.0) #1 SMP PREEMPT Fri Sep 12 22:15:14 -00 2025
Sep 12 23:48:20.838590 kernel: KASLR enabled
Sep 12 23:48:20.838596 kernel: efi: EFI v2.7 by EDK II
Sep 12 23:48:20.838601 kernel: efi: SMBIOS 3.0=0xdced0000 MEMATTR=0xdb832018 ACPI 2.0=0xdbfd0018 RNG=0xdbfd0a18 MEMRESERVE=0xdb838218
Sep 12 23:48:20.838606 kernel: random: crng init done
Sep 12 23:48:20.838613 kernel: secureboot: Secure boot disabled
Sep 12 23:48:20.838619 kernel: ACPI: Early table checksum verification disabled
Sep 12 23:48:20.838625 kernel: ACPI: RSDP 0x00000000DBFD0018 000024 (v02 BOCHS )
Sep 12 23:48:20.838632 kernel: ACPI: XSDT 0x00000000DBFD0F18 000064 (v01 BOCHS BXPC 00000001 01000013)
Sep 12 23:48:20.838638 kernel: ACPI: FACP 0x00000000DBFD0B18 000114 (v06 BOCHS BXPC 00000001 BXPC 00000001)
Sep 12 23:48:20.838643 kernel: ACPI: DSDT 0x00000000DBF0E018 0014A2 (v02 BOCHS BXPC 00000001 BXPC 00000001)
Sep 12 23:48:20.838649 kernel: ACPI: APIC 0x00000000DBFD0C98 0001A8 (v04 BOCHS BXPC 00000001 BXPC 00000001)
Sep 12 23:48:20.838655 kernel: ACPI: PPTT 0x00000000DBFD0098 00009C (v02 BOCHS BXPC 00000001 BXPC 00000001)
Sep 12 23:48:20.838662 kernel: ACPI: GTDT 0x00000000DBFD0818 000060 (v02 BOCHS BXPC 00000001 BXPC 00000001)
Sep 12 23:48:20.838669 kernel: ACPI: MCFG 0x00000000DBFD0A98 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001)
Sep 12 23:48:20.838675 kernel: ACPI: SPCR 0x00000000DBFD0918 000050 (v02 BOCHS BXPC 00000001 BXPC 00000001)
Sep 12 23:48:20.838681 kernel: ACPI: DBG2 0x00000000DBFD0998 000057 (v00 BOCHS BXPC 00000001 BXPC 00000001)
Sep 12 23:48:20.838687 kernel: ACPI: IORT 0x00000000DBFD0198 000080 (v03 BOCHS BXPC 00000001 BXPC 00000001)
Sep 12 23:48:20.838693 kernel: ACPI: SPCR: console: pl011,mmio,0x9000000,9600
Sep 12 23:48:20.838699 kernel: ACPI: Use ACPI SPCR as default console: No
Sep 12 23:48:20.838705 kernel: NUMA: Faking a node at [mem 0x0000000040000000-0x00000000dcffffff]
Sep 12 23:48:20.838711 kernel: NODE_DATA(0) allocated [mem 0xdc965a00-0xdc96cfff]
Sep 12 23:48:20.838717 kernel: Zone ranges:
Sep 12 23:48:20.838723 kernel: DMA [mem 0x0000000040000000-0x00000000dcffffff]
Sep 12 23:48:20.838730 kernel: DMA32 empty
Sep 12 23:48:20.838736 kernel: Normal empty
Sep 12 23:48:20.838742 kernel: Device empty
Sep 12 23:48:20.838748 kernel: Movable zone start for each node
Sep 12 23:48:20.838754 kernel: Early memory node ranges
Sep 12 23:48:20.838760 kernel: node 0: [mem 0x0000000040000000-0x00000000db81ffff]
Sep 12 23:48:20.838765 kernel: node 0: [mem 0x00000000db820000-0x00000000db82ffff]
Sep 12 23:48:20.838772 kernel: node 0: [mem 0x00000000db830000-0x00000000dc09ffff]
Sep 12 23:48:20.838778 kernel: node 0: [mem 0x00000000dc0a0000-0x00000000dc2dffff]
Sep 12 23:48:20.838784 kernel: node 0: [mem 0x00000000dc2e0000-0x00000000dc36ffff]
Sep 12 23:48:20.838790 kernel: node 0: [mem 0x00000000dc370000-0x00000000dc45ffff]
Sep 12 23:48:20.838796 kernel: node 0: [mem 0x00000000dc460000-0x00000000dc52ffff]
Sep 12 23:48:20.838803 kernel: node 0: [mem 0x00000000dc530000-0x00000000dc5cffff]
Sep 12 23:48:20.838809 kernel: node 0: [mem 0x00000000dc5d0000-0x00000000dce1ffff]
Sep 12 23:48:20.838815 kernel: node 0: [mem 0x00000000dce20000-0x00000000dceaffff]
Sep 12 23:48:20.838823 kernel: node 0: [mem 0x00000000dceb0000-0x00000000dcebffff]
Sep 12 23:48:20.838830 kernel: node 0: [mem 0x00000000dcec0000-0x00000000dcfdffff]
Sep 12 23:48:20.838836 kernel: node 0: [mem 0x00000000dcfe0000-0x00000000dcffffff]
Sep 12 23:48:20.838844 kernel: Initmem setup node 0 [mem 0x0000000040000000-0x00000000dcffffff]
Sep 12 23:48:20.838850 kernel: On node 0, zone DMA: 12288 pages in unavailable ranges
Sep 12 23:48:20.838911 kernel: cma: Reserved 16 MiB at 0x00000000d8000000 on node -1
Sep 12 23:48:20.838921 kernel: psci: probing for conduit method from ACPI.
Sep 12 23:48:20.838928 kernel: psci: PSCIv1.1 detected in firmware.
Sep 12 23:48:20.838934 kernel: psci: Using standard PSCI v0.2 function IDs
Sep 12 23:48:20.838941 kernel: psci: Trusted OS migration not required
Sep 12 23:48:20.838947 kernel: psci: SMC Calling Convention v1.1
Sep 12 23:48:20.838954 kernel: smccc: KVM: hypervisor services detected (0x00000000 0x00000000 0x00000000 0x00000003)
Sep 12 23:48:20.838960 kernel: percpu: Embedded 33 pages/cpu s98200 r8192 d28776 u135168
Sep 12 23:48:20.838970 kernel: pcpu-alloc: s98200 r8192 d28776 u135168 alloc=33*4096
Sep 12 23:48:20.838977 kernel: pcpu-alloc: [0] 0 [0] 1 [0] 2 [0] 3
Sep 12 23:48:20.838983 kernel: Detected PIPT I-cache on CPU0
Sep 12 23:48:20.838989 kernel: CPU features: detected: GIC system register CPU interface
Sep 12 23:48:20.838996 kernel: CPU features: detected: Spectre-v4
Sep 12 23:48:20.839002 kernel: CPU features: detected: Spectre-BHB
Sep 12 23:48:20.839008 kernel: CPU features: kernel page table isolation forced ON by KASLR
Sep 12 23:48:20.839015 kernel: CPU features: detected: Kernel page table isolation (KPTI)
Sep 12 23:48:20.839021 kernel: CPU features: detected: ARM erratum 1418040
Sep 12 23:48:20.839027 kernel: CPU features: detected: SSBS not fully self-synchronizing
Sep 12 23:48:20.839034 kernel: alternatives: applying boot alternatives
Sep 12 23:48:20.839041 kernel: Kernel command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected acpi=force verity.usrhash=24c67f2f39578656f2256031b807ae9c943b42e628f6df7d0e56546910a5aaaa
Sep 12 23:48:20.839049 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Sep 12 23:48:20.839056 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Sep 12 23:48:20.839062 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Sep 12 23:48:20.839069 kernel: Fallback order for Node 0: 0
Sep 12 23:48:20.839075 kernel: Built 1 zonelists, mobility grouping on. Total pages: 643072
Sep 12 23:48:20.839081 kernel: Policy zone: DMA
Sep 12 23:48:20.839088 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Sep 12 23:48:20.839094 kernel: software IO TLB: SWIOTLB bounce buffer size adjusted to 2MB
Sep 12 23:48:20.839100 kernel: software IO TLB: area num 4.
Sep 12 23:48:20.839107 kernel: software IO TLB: SWIOTLB bounce buffer size roundup to 4MB
Sep 12 23:48:20.839113 kernel: software IO TLB: mapped [mem 0x00000000d7c00000-0x00000000d8000000] (4MB)
Sep 12 23:48:20.839121 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1
Sep 12 23:48:20.839127 kernel: rcu: Preemptible hierarchical RCU implementation.
Sep 12 23:48:20.839134 kernel: rcu: RCU event tracing is enabled.
Sep 12 23:48:20.839141 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=4.
Sep 12 23:48:20.839153 kernel: Trampoline variant of Tasks RCU enabled.
Sep 12 23:48:20.839171 kernel: Tracing variant of Tasks RCU enabled.
Sep 12 23:48:20.839178 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Sep 12 23:48:20.839184 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4
Sep 12 23:48:20.839191 kernel: RCU Tasks: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Sep 12 23:48:20.839197 kernel: RCU Tasks Trace: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Sep 12 23:48:20.839203 kernel: NR_IRQS: 64, nr_irqs: 64, preallocated irqs: 0
Sep 12 23:48:20.839212 kernel: GICv3: 256 SPIs implemented
Sep 12 23:48:20.839219 kernel: GICv3: 0 Extended SPIs implemented
Sep 12 23:48:20.839225 kernel: Root IRQ handler: gic_handle_irq
Sep 12 23:48:20.839231 kernel: GICv3: GICv3 features: 16 PPIs, DirectLPI
Sep 12 23:48:20.839238 kernel: GICv3: GICD_CTRL.DS=1, SCR_EL3.FIQ=0
Sep 12 23:48:20.839244 kernel: GICv3: CPU0: found redistributor 0 region 0:0x00000000080a0000
Sep 12 23:48:20.839250 kernel: ITS [mem 0x08080000-0x0809ffff]
Sep 12 23:48:20.839257 kernel: ITS@0x0000000008080000: allocated 8192 Devices @40110000 (indirect, esz 8, psz 64K, shr 1)
Sep 12 23:48:20.839263 kernel: ITS@0x0000000008080000: allocated 8192 Interrupt Collections @40120000 (flat, esz 8, psz 64K, shr 1)
Sep 12 23:48:20.839270 kernel: GICv3: using LPI property table @0x0000000040130000
Sep 12 23:48:20.839276 kernel: GICv3: CPU0: using allocated LPI pending table @0x0000000040140000
Sep 12 23:48:20.839283 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Sep 12 23:48:20.839291 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Sep 12 23:48:20.839297 kernel: arch_timer: cp15 timer(s) running at 25.00MHz (virt).
Sep 12 23:48:20.839304 kernel: clocksource: arch_sys_counter: mask: 0xffffffffffffff max_cycles: 0x5c40939b5, max_idle_ns: 440795202646 ns
Sep 12 23:48:20.839310 kernel: sched_clock: 56 bits at 25MHz, resolution 40ns, wraps every 4398046511100ns
Sep 12 23:48:20.839317 kernel: arm-pv: using stolen time PV
Sep 12 23:48:20.839323 kernel: Console: colour dummy device 80x25
Sep 12 23:48:20.839330 kernel: ACPI: Core revision 20240827
Sep 12 23:48:20.839337 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 50.00 BogoMIPS (lpj=25000)
Sep 12 23:48:20.839343 kernel: pid_max: default: 32768 minimum: 301
Sep 12 23:48:20.839350 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,ima
Sep 12 23:48:20.839358 kernel: landlock: Up and running.
Sep 12 23:48:20.839364 kernel: SELinux: Initializing.
Sep 12 23:48:20.839371 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Sep 12 23:48:20.839378 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Sep 12 23:48:20.839384 kernel: rcu: Hierarchical SRCU implementation.
Sep 12 23:48:20.839391 kernel: rcu: Max phase no-delay instances is 400.
Sep 12 23:48:20.839398 kernel: Timer migration: 1 hierarchy levels; 8 children per group; 1 crossnode level
Sep 12 23:48:20.839405 kernel: Remapping and enabling EFI services.
Sep 12 23:48:20.839411 kernel: smp: Bringing up secondary CPUs ...
Sep 12 23:48:20.839423 kernel: Detected PIPT I-cache on CPU1
Sep 12 23:48:20.839430 kernel: GICv3: CPU1: found redistributor 1 region 0:0x00000000080c0000
Sep 12 23:48:20.839437 kernel: GICv3: CPU1: using allocated LPI pending table @0x0000000040150000
Sep 12 23:48:20.839445 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Sep 12 23:48:20.839452 kernel: CPU1: Booted secondary processor 0x0000000001 [0x413fd0c1]
Sep 12 23:48:20.839459 kernel: Detected PIPT I-cache on CPU2
Sep 12 23:48:20.839466 kernel: GICv3: CPU2: found redistributor 2 region 0:0x00000000080e0000
Sep 12 23:48:20.839473 kernel: GICv3: CPU2: using allocated LPI pending table @0x0000000040160000
Sep 12 23:48:20.839481 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Sep 12 23:48:20.839488 kernel: CPU2: Booted secondary processor 0x0000000002 [0x413fd0c1]
Sep 12 23:48:20.839495 kernel: Detected PIPT I-cache on CPU3
Sep 12 23:48:20.839502 kernel: GICv3: CPU3: found redistributor 3 region 0:0x0000000008100000
Sep 12 23:48:20.839509 kernel: GICv3: CPU3: using allocated LPI pending table @0x0000000040170000
Sep 12 23:48:20.839515 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Sep 12 23:48:20.839522 kernel: CPU3: Booted secondary processor 0x0000000003 [0x413fd0c1]
Sep 12 23:48:20.839529 kernel: smp: Brought up 1 node, 4 CPUs
Sep 12 23:48:20.839536 kernel: SMP: Total of 4 processors activated.
Sep 12 23:48:20.839544 kernel: CPU: All CPU(s) started at EL1
Sep 12 23:48:20.839552 kernel: CPU features: detected: 32-bit EL0 Support
Sep 12 23:48:20.839558 kernel: CPU features: detected: Data cache clean to the PoU not required for I/D coherence
Sep 12 23:48:20.839565 kernel: CPU features: detected: Common not Private translations
Sep 12 23:48:20.839572 kernel: CPU features: detected: CRC32 instructions
Sep 12 23:48:20.839579 kernel: CPU features: detected: Enhanced Virtualization Traps
Sep 12 23:48:20.839586 kernel: CPU features: detected: RCpc load-acquire (LDAPR)
Sep 12 23:48:20.839593 kernel: CPU features: detected: LSE atomic instructions
Sep 12 23:48:20.839600 kernel: CPU features: detected: Privileged Access Never
Sep 12 23:48:20.839608 kernel: CPU features: detected: RAS Extension Support
Sep 12 23:48:20.839615 kernel: CPU features: detected: Speculative Store Bypassing Safe (SSBS)
Sep 12 23:48:20.839622 kernel: alternatives: applying system-wide alternatives
Sep 12 23:48:20.839629 kernel: CPU features: detected: Hardware dirty bit management on CPU0-3
Sep 12 23:48:20.839636 kernel: Memory: 2424480K/2572288K available (11136K kernel code, 2440K rwdata, 9084K rodata, 38976K init, 1038K bss, 125472K reserved, 16384K cma-reserved)
Sep 12 23:48:20.839643 kernel: devtmpfs: initialized
Sep 12 23:48:20.839650 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Sep 12 23:48:20.839657 kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear)
Sep 12 23:48:20.839664 kernel: 2G module region forced by RANDOMIZE_MODULE_REGION_FULL
Sep 12 23:48:20.839673 kernel: 0 pages in range for non-PLT usage
Sep 12 23:48:20.839680 kernel: 508560 pages in range for PLT usage
Sep 12 23:48:20.839686 kernel: pinctrl core: initialized pinctrl subsystem
Sep 12 23:48:20.839693 kernel: SMBIOS 3.0.0 present.
Sep 12 23:48:20.839700 kernel: DMI: QEMU KVM Virtual Machine, BIOS unknown 02/02/2022
Sep 12 23:48:20.839707 kernel: DMI: Memory slots populated: 1/1
Sep 12 23:48:20.839714 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Sep 12 23:48:20.839721 kernel: DMA: preallocated 512 KiB GFP_KERNEL pool for atomic allocations
Sep 12 23:48:20.839728 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations
Sep 12 23:48:20.839736 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations
Sep 12 23:48:20.839743 kernel: audit: initializing netlink subsys (disabled)
Sep 12 23:48:20.839750 kernel: audit: type=2000 audit(0.028:1): state=initialized audit_enabled=0 res=1
Sep 12 23:48:20.839757 kernel: thermal_sys: Registered thermal governor 'step_wise'
Sep 12 23:48:20.839764 kernel: cpuidle: using governor menu
Sep 12 23:48:20.839770 kernel: hw-breakpoint: found 6 breakpoint and 4 watchpoint registers.
Sep 12 23:48:20.839777 kernel: ASID allocator initialised with 32768 entries
Sep 12 23:48:20.839784 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Sep 12 23:48:20.839791 kernel: Serial: AMBA PL011 UART driver
Sep 12 23:48:20.839799 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Sep 12 23:48:20.839806 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 1.00 GiB page
Sep 12 23:48:20.839813 kernel: HugeTLB: registered 32.0 MiB page size, pre-allocated 0 pages
Sep 12 23:48:20.839820 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 32.0 MiB page
Sep 12 23:48:20.839827 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Sep 12 23:48:20.839834 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 2.00 MiB page
Sep 12 23:48:20.839841 kernel: HugeTLB: registered 64.0 KiB page size, pre-allocated 0 pages
Sep 12 23:48:20.839848 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 64.0 KiB page
Sep 12 23:48:20.839854 kernel: ACPI: Added _OSI(Module Device)
Sep 12 23:48:20.839863 kernel: ACPI: Added _OSI(Processor Device)
Sep 12 23:48:20.839870 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Sep 12 23:48:20.839877 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Sep 12 23:48:20.839884 kernel: ACPI: Interpreter enabled
Sep 12 23:48:20.839890 kernel: ACPI: Using GIC for interrupt routing
Sep 12 23:48:20.839897 kernel: ACPI: MCFG table detected, 1 entries
Sep 12 23:48:20.839904 kernel: ACPI: CPU0 has been hot-added
Sep 12 23:48:20.839911 kernel: ACPI: CPU1 has been hot-added
Sep 12 23:48:20.839918 kernel: ACPI: CPU2 has been hot-added
Sep 12 23:48:20.839925 kernel: ACPI: CPU3 has been hot-added
Sep 12 23:48:20.839933 kernel: ARMH0011:00: ttyAMA0 at MMIO 0x9000000 (irq = 12, base_baud = 0) is a SBSA
Sep 12 23:48:20.839940 kernel: printk: legacy console [ttyAMA0] enabled
Sep 12 23:48:20.839947 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Sep 12 23:48:20.840096 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
Sep 12 23:48:20.840217 kernel: acpi PNP0A08:00: _OSC: platform does not support [LTR]
Sep 12 23:48:20.840286 kernel: acpi PNP0A08:00: _OSC: OS now controls [PCIeHotplug PME AER PCIeCapability]
Sep 12 23:48:20.840345 kernel: acpi PNP0A08:00: ECAM area [mem 0x4010000000-0x401fffffff] reserved by PNP0C02:00
Sep 12 23:48:20.840408 kernel: acpi PNP0A08:00: ECAM at [mem 0x4010000000-0x401fffffff] for [bus 00-ff]
Sep 12 23:48:20.840417 kernel: ACPI: Remapped I/O 0x000000003eff0000 to [io 0x0000-0xffff window]
Sep 12 23:48:20.840424 kernel: PCI host bridge to bus 0000:00
Sep 12 23:48:20.840490 kernel: pci_bus 0000:00: root bus resource [mem 0x10000000-0x3efeffff window]
Sep 12 23:48:20.840545 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0xffff window]
Sep 12 23:48:20.840598 kernel: pci_bus 0000:00: root bus resource [mem 0x8000000000-0xffffffffff window]
Sep 12 23:48:20.840650 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Sep 12 23:48:20.840728 kernel: pci 0000:00:00.0: [1b36:0008] type 00 class 0x060000 conventional PCI endpoint
Sep 12 23:48:20.840800 kernel: pci 0000:00:01.0: [1af4:1005] type 00 class 0x00ff00 conventional PCI endpoint
Sep 12 23:48:20.840863 kernel: pci 0000:00:01.0: BAR 0 [io 0x0000-0x001f]
Sep 12 23:48:20.840924 kernel: pci 0000:00:01.0: BAR 1 [mem 0x10000000-0x10000fff]
Sep 12 23:48:20.840984 kernel: pci 0000:00:01.0: BAR 4 [mem 0x8000000000-0x8000003fff 64bit pref]
Sep 12 23:48:20.841044 kernel: pci 0000:00:01.0: BAR 4 [mem 0x8000000000-0x8000003fff 64bit pref]: assigned
Sep 12 23:48:20.841104 kernel: pci 0000:00:01.0: BAR 1 [mem 0x10000000-0x10000fff]: assigned
Sep 12 23:48:20.841193 kernel: pci 0000:00:01.0: BAR 0 [io 0x1000-0x101f]: assigned
Sep 12 23:48:20.841254 kernel: pci_bus 0000:00: resource 4 [mem 0x10000000-0x3efeffff window]
Sep 12 23:48:20.841308 kernel: pci_bus 0000:00: resource 5 [io 0x0000-0xffff window]
Sep 12 23:48:20.841362 kernel: pci_bus 0000:00: resource 6 [mem 0x8000000000-0xffffffffff window]
Sep 12 23:48:20.841371 kernel: ACPI: PCI: Interrupt link GSI0 configured for IRQ 35
Sep 12 23:48:20.841378 kernel: ACPI: PCI: Interrupt link GSI1 configured for IRQ 36
Sep 12 23:48:20.841385 kernel: ACPI: PCI: Interrupt link GSI2 configured for IRQ 37
Sep 12 23:48:20.841394 kernel: ACPI: PCI: Interrupt link GSI3 configured for IRQ 38
Sep 12 23:48:20.841401 kernel: iommu: Default domain type: Translated
Sep 12 23:48:20.841408 kernel: iommu: DMA domain TLB invalidation policy: strict mode
Sep 12 23:48:20.841415 kernel: efivars: Registered efivars operations
Sep 12 23:48:20.841422 kernel: vgaarb: loaded
Sep 12 23:48:20.841428 kernel: clocksource: Switched to clocksource arch_sys_counter
Sep 12 23:48:20.841435 kernel: VFS: Disk quotas dquot_6.6.0
Sep 12 23:48:20.841442 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Sep 12 23:48:20.841449 kernel: pnp: PnP ACPI init
Sep 12 23:48:20.841516 kernel: system 00:00: [mem 0x4010000000-0x401fffffff window] could not be reserved
Sep 12 23:48:20.841527 kernel: pnp: PnP ACPI: found 1 devices
Sep 12 23:48:20.841534 kernel: NET: Registered PF_INET protocol family
Sep 12 23:48:20.841541 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
Sep 12 23:48:20.841548 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
Sep 12 23:48:20.841555 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Sep 12 23:48:20.841562 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
Sep 12 23:48:20.841569 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear)
Sep 12 23:48:20.841577 kernel: TCP: Hash tables configured (established 32768 bind 32768)
Sep 12 23:48:20.841585 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
Sep 12 23:48:20.841592 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
Sep 12 23:48:20.841598 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Sep 12 23:48:20.841605 kernel: PCI: CLS 0 bytes, default 64
Sep 12 23:48:20.841612 kernel: kvm [1]: HYP mode not available
Sep 12 23:48:20.841619 kernel: Initialise system trusted keyrings
Sep 12 23:48:20.841626 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
Sep 12 23:48:20.841633 kernel: Key type asymmetric registered
Sep 12 23:48:20.841641 kernel: Asymmetric key parser 'x509' registered
Sep 12 23:48:20.841648 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 249)
Sep 12 23:48:20.841655 kernel: io scheduler mq-deadline registered
Sep 12 23:48:20.841662 kernel: io scheduler kyber registered
Sep 12 23:48:20.841669 kernel: io scheduler bfq registered
Sep 12 23:48:20.841676 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXSYBUS:00/PNP0C0C:00/input/input0
Sep 12 23:48:20.841682 kernel: ACPI: button: Power Button [PWRB]
Sep 12 23:48:20.841690 kernel: ACPI: \_SB_.PCI0.GSI1: Enabled at IRQ 36
Sep 12 23:48:20.841750 kernel: virtio-pci 0000:00:01.0: enabling device (0005 -> 0007)
Sep 12 23:48:20.841762 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Sep 12 23:48:20.841769 kernel: thunder_xcv, ver 1.0
Sep 12 23:48:20.841776 kernel: thunder_bgx, ver 1.0
Sep 12 23:48:20.841783 kernel: nicpf, ver 1.0
Sep 12 23:48:20.841790 kernel: nicvf, ver 1.0
Sep 12 23:48:20.841866 kernel: rtc-efi rtc-efi.0: registered as rtc0
Sep 12 23:48:20.841924 kernel: rtc-efi rtc-efi.0: setting system clock to 2025-09-12T23:48:20 UTC (1757720900)
Sep 12 23:48:20.841934 kernel: hid: raw HID events driver (C) Jiri Kosina
Sep 12 23:48:20.841943 kernel: hw perfevents: enabled with armv8_pmuv3_0 PMU driver, 7 (0,8000003f) counters available
Sep 12 23:48:20.841950 kernel: watchdog: NMI not fully supported
Sep 12 23:48:20.841957 kernel: watchdog: Hard watchdog permanently disabled
Sep 12 23:48:20.841963 kernel: NET: Registered PF_INET6 protocol family
Sep 12 23:48:20.841970 kernel: Segment Routing with IPv6
Sep 12 23:48:20.841977 kernel: In-situ OAM (IOAM) with IPv6
Sep 12 23:48:20.841984 kernel: NET: Registered PF_PACKET protocol family
Sep 12 23:48:20.841991 kernel: Key type dns_resolver registered
Sep 12 23:48:20.841998 kernel: registered taskstats version 1
Sep 12 23:48:20.842005 kernel: Loading compiled-in X.509 certificates
Sep 12 23:48:20.842013 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.12.47-flatcar: 4d2b25dbd7cb4cb70d9284570c2ea7dd89d62e99'
Sep 12 23:48:20.842020 kernel: Demotion targets for Node 0: null
Sep 12 23:48:20.842027 kernel: Key type .fscrypt registered
Sep 12 23:48:20.842033 kernel: Key type fscrypt-provisioning registered
Sep 12 23:48:20.842040 kernel: ima: No TPM chip found, activating TPM-bypass!
Sep 12 23:48:20.842047 kernel: ima: Allocated hash algorithm: sha1
Sep 12 23:48:20.842054 kernel: ima: No architecture policies found
Sep 12 23:48:20.842061 kernel: alg: No test for fips(ansi_cprng) (fips_ansi_cprng)
Sep 12 23:48:20.842069 kernel: clk: Disabling unused clocks
Sep 12 23:48:20.842076 kernel: PM: genpd: Disabling unused power domains
Sep 12 23:48:20.842083 kernel: Warning: unable to open an initial console.
Sep 12 23:48:20.842090 kernel: Freeing unused kernel memory: 38976K
Sep 12 23:48:20.842097 kernel: Run /init as init process
Sep 12 23:48:20.842104 kernel: with arguments:
Sep 12 23:48:20.842111 kernel: /init
Sep 12 23:48:20.842117 kernel: with environment:
Sep 12 23:48:20.842124 kernel: HOME=/
Sep 12 23:48:20.842132 kernel: TERM=linux
Sep 12 23:48:20.842139 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a
Sep 12 23:48:20.842154 systemd[1]: Successfully made /usr/ read-only.
Sep 12 23:48:20.842183 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE)
Sep 12 23:48:20.842194 systemd[1]: Detected virtualization kvm.
Sep 12 23:48:20.842202 systemd[1]: Detected architecture arm64.
Sep 12 23:48:20.842209 systemd[1]: Running in initrd.
Sep 12 23:48:20.842216 systemd[1]: No hostname configured, using default hostname.
Sep 12 23:48:20.842227 systemd[1]: Hostname set to .
Sep 12 23:48:20.842234 systemd[1]: Initializing machine ID from VM UUID.
Sep 12 23:48:20.842242 systemd[1]: Queued start job for default target initrd.target.
Sep 12 23:48:20.842249 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Sep 12 23:48:20.842257 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Sep 12 23:48:20.842265 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Sep 12 23:48:20.842272 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Sep 12 23:48:20.842280 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Sep 12 23:48:20.842289 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Sep 12 23:48:20.842298 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Sep 12 23:48:20.842306 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Sep 12 23:48:20.842313 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Sep 12 23:48:20.842321 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Sep 12 23:48:20.842328 systemd[1]: Reached target paths.target - Path Units.
Sep 12 23:48:20.842336 systemd[1]: Reached target slices.target - Slice Units.
Sep 12 23:48:20.842344 systemd[1]: Reached target swap.target - Swaps.
Sep 12 23:48:20.842352 systemd[1]: Reached target timers.target - Timer Units.
Sep 12 23:48:20.842359 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Sep 12 23:48:20.842367 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Sep 12 23:48:20.842374 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Sep 12 23:48:20.842382 systemd[1]: Listening on systemd-journald.socket - Journal Sockets.
Sep 12 23:48:20.842389 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Sep 12 23:48:20.842397 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Sep 12 23:48:20.842406 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Sep 12 23:48:20.842413 systemd[1]: Reached target sockets.target - Socket Units.
Sep 12 23:48:20.842421 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Sep 12 23:48:20.842429 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Sep 12 23:48:20.842436 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Sep 12 23:48:20.842444 systemd[1]: systemd-battery-check.service - Check battery level during early boot was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/class/power_supply).
Sep 12 23:48:20.842452 systemd[1]: Starting systemd-fsck-usr.service...
Sep 12 23:48:20.842459 systemd[1]: Starting systemd-journald.service - Journal Service...
Sep 12 23:48:20.842467 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Sep 12 23:48:20.842475 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Sep 12 23:48:20.842483 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Sep 12 23:48:20.842491 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Sep 12 23:48:20.842498 systemd[1]: Finished systemd-fsck-usr.service.
Sep 12 23:48:20.842507 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Sep 12 23:48:20.842515 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Sep 12 23:48:20.842522 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Sep 12 23:48:20.842549 systemd-journald[244]: Collecting audit messages is disabled.
Sep 12 23:48:20.842571 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Sep 12 23:48:20.842580 systemd-journald[244]: Journal started
Sep 12 23:48:20.842598 systemd-journald[244]: Runtime Journal (/run/log/journal/494fc4284d2c43c9b35135aab457221a) is 6M, max 48.5M, 42.4M free.
Sep 12 23:48:20.831374 systemd-modules-load[246]: Inserted module 'overlay'
Sep 12 23:48:20.844643 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Sep 12 23:48:20.847195 systemd[1]: Started systemd-journald.service - Journal Service.
Sep 12 23:48:20.847227 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Sep 12 23:48:20.850197 kernel: Bridge firewalling registered
Sep 12 23:48:20.850236 systemd-modules-load[246]: Inserted module 'br_netfilter'
Sep 12 23:48:20.851546 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Sep 12 23:48:20.853341 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Sep 12 23:48:20.855401 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Sep 12 23:48:20.858394 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Sep 12 23:48:20.861173 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Sep 12 23:48:20.874284 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Sep 12 23:48:20.878135 systemd-tmpfiles[282]: /usr/lib/tmpfiles.d/var.conf:14: Duplicate line for path "/var/log", ignoring.
Sep 12 23:48:20.880829 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Sep 12 23:48:20.884510 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Sep 12 23:48:20.887277 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Sep 12 23:48:20.892824 dracut-cmdline[287]: Using kernel command line parameters: rd.driver.pre=btrfs SYSTEMD_SULOGIN_FORCE=1 BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected acpi=force verity.usrhash=24c67f2f39578656f2256031b807ae9c943b42e628f6df7d0e56546910a5aaaa
Sep 12 23:48:20.925218 systemd-resolved[298]: Positive Trust Anchors:
Sep 12 23:48:20.925234 systemd-resolved[298]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Sep 12 23:48:20.925266 systemd-resolved[298]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Sep 12 23:48:20.929912 systemd-resolved[298]: Defaulting to hostname 'linux'.
Sep 12 23:48:20.930991 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Sep 12 23:48:20.933020 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Sep 12 23:48:20.964197 kernel: SCSI subsystem initialized
Sep 12 23:48:20.968175 kernel: Loading iSCSI transport class v2.0-870.
Sep 12 23:48:20.976208 kernel: iscsi: registered transport (tcp)
Sep 12 23:48:20.988353 kernel: iscsi: registered transport (qla4xxx)
Sep 12 23:48:20.988380 kernel: QLogic iSCSI HBA Driver
Sep 12 23:48:21.007962 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Sep 12 23:48:21.029092 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Sep 12 23:48:21.031564 systemd[1]: Reached target network-pre.target - Preparation for Network.
Sep 12 23:48:21.082226 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
Sep 12 23:48:21.085260 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
Sep 12 23:48:21.149199 kernel: raid6: neonx8 gen() 15700 MB/s
Sep 12 23:48:21.166186 kernel: raid6: neonx4 gen() 15703 MB/s
Sep 12 23:48:21.183181 kernel: raid6: neonx2 gen() 13035 MB/s
Sep 12 23:48:21.200215 kernel: raid6: neonx1 gen() 10321 MB/s
Sep 12 23:48:21.217177 kernel: raid6: int64x8 gen() 6588 MB/s
Sep 12 23:48:21.234181 kernel: raid6: int64x4 gen() 7127 MB/s
Sep 12 23:48:21.251183 kernel: raid6: int64x2 gen() 5816 MB/s
Sep 12 23:48:21.268405 kernel: raid6: int64x1 gen() 4949 MB/s
Sep 12 23:48:21.268432 kernel: raid6: using algorithm neonx4 gen() 15703 MB/s
Sep 12 23:48:21.286317 kernel: raid6: .... xor() 12356 MB/s, rmw enabled
Sep 12 23:48:21.286343 kernel: raid6: using neon recovery algorithm
Sep 12 23:48:21.291181 kernel: xor: measuring software checksum speed
Sep 12 23:48:21.292321 kernel: 8regs : 19097 MB/sec
Sep 12 23:48:21.292336 kernel: 32regs : 21704 MB/sec
Sep 12 23:48:21.293743 kernel: arm64_neon : 28080 MB/sec
Sep 12 23:48:21.293761 kernel: xor: using function: arm64_neon (28080 MB/sec)
Sep 12 23:48:21.346195 kernel: Btrfs loaded, zoned=no, fsverity=no
Sep 12 23:48:21.352486 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
Sep 12 23:48:21.354755 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Sep 12 23:48:21.388528 systemd-udevd[501]: Using default interface naming scheme 'v255'.
Sep 12 23:48:21.392551 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Sep 12 23:48:21.394225 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
Sep 12 23:48:21.414100 dracut-pre-trigger[508]: rd.md=0: removing MD RAID activation
Sep 12 23:48:21.434710 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
Sep 12 23:48:21.436753 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Sep 12 23:48:21.485056 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Sep 12 23:48:21.488776 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
Sep 12 23:48:21.537243 kernel: virtio_blk virtio1: 1/0/0 default/read/poll queues
Sep 12 23:48:21.544213 kernel: virtio_blk virtio1: [vda] 19775488 512-byte logical blocks (10.1 GB/9.43 GiB)
Sep 12 23:48:21.551302 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
Sep 12 23:48:21.551338 kernel: GPT:9289727 != 19775487
Sep 12 23:48:21.551349 kernel: GPT:Alternate GPT header not at the end of the disk.
Sep 12 23:48:21.552398 kernel: GPT:9289727 != 19775487
Sep 12 23:48:21.552413 kernel: GPT: Use GNU Parted to correct GPT errors.
Sep 12 23:48:21.554191 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Sep 12 23:48:21.554017 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Sep 12 23:48:21.554230 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Sep 12 23:48:21.555438 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Sep 12 23:48:21.566443 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Sep 12 23:48:21.589104 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM.
Sep 12 23:48:21.595447 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Sep 12 23:48:21.604251 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT.
Sep 12 23:48:21.605375 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
Sep 12 23:48:21.618054 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132.
Sep 12 23:48:21.619104 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A.
Sep 12 23:48:21.627601 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
Sep 12 23:48:21.628584 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
Sep 12 23:48:21.630081 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Sep 12 23:48:21.631904 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Sep 12 23:48:21.634180 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
Sep 12 23:48:21.635630 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
Sep 12 23:48:21.652802 disk-uuid[592]: Primary Header is updated.
Sep 12 23:48:21.652802 disk-uuid[592]: Secondary Entries is updated.
Sep 12 23:48:21.652802 disk-uuid[592]: Secondary Header is updated.
Sep 12 23:48:21.656294 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
Sep 12 23:48:21.657967 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Sep 12 23:48:21.662179 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Sep 12 23:48:22.663121 disk-uuid[596]: The operation has completed successfully.
Sep 12 23:48:22.664251 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Sep 12 23:48:22.684990 systemd[1]: disk-uuid.service: Deactivated successfully.
Sep 12 23:48:22.685086 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
Sep 12 23:48:22.712898 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr...
Sep 12 23:48:22.738971 sh[611]: Success
Sep 12 23:48:22.752090 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Sep 12 23:48:22.752128 kernel: device-mapper: uevent: version 1.0.3
Sep 12 23:48:22.752139 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@lists.linux.dev
Sep 12 23:48:22.759189 kernel: device-mapper: verity: sha256 using shash "sha256-ce"
Sep 12 23:48:22.782873 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr.
Sep 12 23:48:22.785361 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr...
Sep 12 23:48:22.801063 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr.
Sep 12 23:48:22.809083 kernel: BTRFS: device fsid 103b8b46-5d84-49b9-83b1-52780b53e7b3 devid 1 transid 40 /dev/mapper/usr (253:0) scanned by mount (623)
Sep 12 23:48:22.809114 kernel: BTRFS info (device dm-0): first mount of filesystem 103b8b46-5d84-49b9-83b1-52780b53e7b3
Sep 12 23:48:22.809125 kernel: BTRFS info (device dm-0): using crc32c (crc32c-generic) checksum algorithm
Sep 12 23:48:22.813782 kernel: BTRFS info (device dm-0): disabling log replay at mount time
Sep 12 23:48:22.813809 kernel: BTRFS info (device dm-0): enabling free space tree
Sep 12 23:48:22.814806 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr.
Sep 12 23:48:22.815841 systemd[1]: Reached target initrd-usr-fs.target - Initrd /usr File System.
Sep 12 23:48:22.816981 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
Sep 12 23:48:22.817654 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
Sep 12 23:48:22.819007 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
Sep 12 23:48:22.842841 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 (254:6) scanned by mount (654)
Sep 12 23:48:22.842881 kernel: BTRFS info (device vda6): first mount of filesystem d14b678d-b2cf-466a-9c6e-b6d9277deb1d
Sep 12 23:48:22.844153 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm
Sep 12 23:48:22.846665 kernel: BTRFS info (device vda6): turning on async discard
Sep 12 23:48:22.846695 kernel: BTRFS info (device vda6): enabling free space tree
Sep 12 23:48:22.852218 kernel: BTRFS info (device vda6): last unmount of filesystem d14b678d-b2cf-466a-9c6e-b6d9277deb1d
Sep 12 23:48:22.851635 systemd[1]: Finished ignition-setup.service - Ignition (setup).
Sep 12 23:48:22.853254 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
Sep 12 23:48:22.916203 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Sep 12 23:48:22.918887 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Sep 12 23:48:22.952085 systemd-networkd[796]: lo: Link UP
Sep 12 23:48:22.952100 systemd-networkd[796]: lo: Gained carrier
Sep 12 23:48:22.952842 systemd-networkd[796]: Enumeration completed
Sep 12 23:48:22.952997 systemd[1]: Started systemd-networkd.service - Network Configuration.
Sep 12 23:48:22.953301 systemd-networkd[796]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Sep 12 23:48:22.953305 systemd-networkd[796]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Sep 12 23:48:22.954242 systemd-networkd[796]: eth0: Link UP
Sep 12 23:48:22.954329 systemd-networkd[796]: eth0: Gained carrier
Sep 12 23:48:22.959463 ignition[698]: Ignition 2.21.0
Sep 12 23:48:22.954337 systemd-networkd[796]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Sep 12 23:48:22.959469 ignition[698]: Stage: fetch-offline
Sep 12 23:48:22.954835 systemd[1]: Reached target network.target - Network.
Sep 12 23:48:22.959500 ignition[698]: no configs at "/usr/lib/ignition/base.d"
Sep 12 23:48:22.959507 ignition[698]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Sep 12 23:48:22.959659 ignition[698]: parsed url from cmdline: ""
Sep 12 23:48:22.959662 ignition[698]: no config URL provided
Sep 12 23:48:22.959666 ignition[698]: reading system config file "/usr/lib/ignition/user.ign"
Sep 12 23:48:22.959672 ignition[698]: no config at "/usr/lib/ignition/user.ign"
Sep 12 23:48:22.959690 ignition[698]: op(1): [started] loading QEMU firmware config module
Sep 12 23:48:22.959694 ignition[698]: op(1): executing: "modprobe" "qemu_fw_cfg"
Sep 12 23:48:22.966448 ignition[698]: op(1): [finished] loading QEMU firmware config module
Sep 12 23:48:22.980218 systemd-networkd[796]: eth0: DHCPv4 address 10.0.0.100/16, gateway 10.0.0.1 acquired from 10.0.0.1
Sep 12 23:48:23.009982 ignition[698]: parsing config with SHA512: 43ecfb36a9908413e5ced85dc7580be036a229915f4534c8a91ad61e47f97b9d388be3b931c1cc6e396aca544d3131baef39954c2cbf7db2001b327ac7f2d403
Sep 12 23:48:23.016493 unknown[698]: fetched base config from "system"
Sep 12 23:48:23.016504 unknown[698]: fetched user config from "qemu"
Sep 12 23:48:23.016856 ignition[698]: fetch-offline: fetch-offline passed
Sep 12 23:48:23.016914 ignition[698]: Ignition finished successfully
Sep 12 23:48:23.018681 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
Sep 12 23:48:23.020041 systemd[1]: ignition-fetch.service - Ignition (fetch) was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json).
Sep 12 23:48:23.020799 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
Sep 12 23:48:23.049879 ignition[810]: Ignition 2.21.0
Sep 12 23:48:23.049895 ignition[810]: Stage: kargs
Sep 12 23:48:23.050034 ignition[810]: no configs at "/usr/lib/ignition/base.d"
Sep 12 23:48:23.050043 ignition[810]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Sep 12 23:48:23.053109 ignition[810]: kargs: kargs passed
Sep 12 23:48:23.053194 ignition[810]: Ignition finished successfully
Sep 12 23:48:23.055525 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
Sep 12 23:48:23.057203 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
Sep 12 23:48:23.085379 ignition[818]: Ignition 2.21.0
Sep 12 23:48:23.085395 ignition[818]: Stage: disks
Sep 12 23:48:23.085519 ignition[818]: no configs at "/usr/lib/ignition/base.d"
Sep 12 23:48:23.085527 ignition[818]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Sep 12 23:48:23.086696 ignition[818]: disks: disks passed
Sep 12 23:48:23.088424 systemd[1]: Finished ignition-disks.service - Ignition (disks).
Sep 12 23:48:23.086753 ignition[818]: Ignition finished successfully
Sep 12 23:48:23.089734 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
Sep 12 23:48:23.091254 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Sep 12 23:48:23.092751 systemd[1]: Reached target local-fs.target - Local File Systems.
Sep 12 23:48:23.094166 systemd[1]: Reached target sysinit.target - System Initialization.
Sep 12 23:48:23.095797 systemd[1]: Reached target basic.target - Basic System.
Sep 12 23:48:23.097948 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
Sep 12 23:48:23.117497 systemd-fsck[828]: ROOT: clean, 15/553520 files, 52789/553472 blocks
Sep 12 23:48:23.121390 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
Sep 12 23:48:23.123260 systemd[1]: Mounting sysroot.mount - /sysroot...
Sep 12 23:48:23.178200 kernel: EXT4-fs (vda9): mounted filesystem 01c463ed-b282-4a97-bc2e-d1c81f25bb05 r/w with ordered data mode. Quota mode: none.
Sep 12 23:48:23.178435 systemd[1]: Mounted sysroot.mount - /sysroot.
Sep 12 23:48:23.179480 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
Sep 12 23:48:23.181548 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Sep 12 23:48:23.182940 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
Sep 12 23:48:23.183793 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met.
Sep 12 23:48:23.183838 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Sep 12 23:48:23.183860 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
Sep 12 23:48:23.194519 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
Sep 12 23:48:23.196823 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
Sep 12 23:48:23.201834 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 (254:6) scanned by mount (836)
Sep 12 23:48:23.201874 kernel: BTRFS info (device vda6): first mount of filesystem d14b678d-b2cf-466a-9c6e-b6d9277deb1d
Sep 12 23:48:23.201887 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm
Sep 12 23:48:23.203734 kernel: BTRFS info (device vda6): turning on async discard
Sep 12 23:48:23.203768 kernel: BTRFS info (device vda6): enabling free space tree
Sep 12 23:48:23.204776 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Sep 12 23:48:23.232614 initrd-setup-root[860]: cut: /sysroot/etc/passwd: No such file or directory
Sep 12 23:48:23.236395 initrd-setup-root[867]: cut: /sysroot/etc/group: No such file or directory
Sep 12 23:48:23.239424 initrd-setup-root[874]: cut: /sysroot/etc/shadow: No such file or directory
Sep 12 23:48:23.243010 initrd-setup-root[881]: cut: /sysroot/etc/gshadow: No such file or directory
Sep 12 23:48:23.307604 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
Sep 12 23:48:23.309435 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
Sep 12 23:48:23.310807 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
Sep 12 23:48:23.324181 kernel: BTRFS info (device vda6): last unmount of filesystem d14b678d-b2cf-466a-9c6e-b6d9277deb1d
Sep 12 23:48:23.338305 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
Sep 12 23:48:23.349865 ignition[951]: INFO : Ignition 2.21.0
Sep 12 23:48:23.349865 ignition[951]: INFO : Stage: mount
Sep 12 23:48:23.352038 ignition[951]: INFO : no configs at "/usr/lib/ignition/base.d"
Sep 12 23:48:23.352038 ignition[951]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Sep 12 23:48:23.352038 ignition[951]: INFO : mount: mount passed
Sep 12 23:48:23.352038 ignition[951]: INFO : Ignition finished successfully
Sep 12 23:48:23.353717 systemd[1]: Finished ignition-mount.service - Ignition (mount).
Sep 12 23:48:23.355835 systemd[1]: Starting ignition-files.service - Ignition (files)...
Sep 12 23:48:23.815664 systemd[1]: sysroot-oem.mount: Deactivated successfully.
Sep 12 23:48:23.817107 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Sep 12 23:48:23.842812 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 (254:6) scanned by mount (962)
Sep 12 23:48:23.842847 kernel: BTRFS info (device vda6): first mount of filesystem d14b678d-b2cf-466a-9c6e-b6d9277deb1d
Sep 12 23:48:23.842858 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm
Sep 12 23:48:23.846311 kernel: BTRFS info (device vda6): turning on async discard
Sep 12 23:48:23.846333 kernel: BTRFS info (device vda6): enabling free space tree
Sep 12 23:48:23.847595 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Sep 12 23:48:23.874394 ignition[979]: INFO : Ignition 2.21.0
Sep 12 23:48:23.874394 ignition[979]: INFO : Stage: files
Sep 12 23:48:23.876504 ignition[979]: INFO : no configs at "/usr/lib/ignition/base.d"
Sep 12 23:48:23.876504 ignition[979]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Sep 12 23:48:23.878045 ignition[979]: DEBUG : files: compiled without relabeling support, skipping
Sep 12 23:48:23.878909 ignition[979]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Sep 12 23:48:23.878909 ignition[979]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Sep 12 23:48:23.881594 ignition[979]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Sep 12 23:48:23.882592 ignition[979]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Sep 12 23:48:23.882592 ignition[979]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Sep 12 23:48:23.882052 unknown[979]: wrote ssh authorized keys file for user: core
Sep 12 23:48:23.885480 ignition[979]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-arm64.tar.gz"
Sep 12 23:48:23.885480 ignition[979]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.13.2-linux-arm64.tar.gz: attempt #1
Sep 12 23:48:23.928600 ignition[979]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK
Sep 12 23:48:24.269450 ignition[979]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-arm64.tar.gz"
Sep 12 23:48:24.269450 ignition[979]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh"
Sep 12 23:48:24.272571 ignition[979]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh"
Sep 12 23:48:24.272571 ignition[979]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml"
Sep 12 23:48:24.272571 ignition[979]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml"
Sep 12 23:48:24.272571 ignition[979]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
Sep 12 23:48:24.272571 ignition[979]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
Sep 12 23:48:24.272571 ignition[979]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
Sep 12 23:48:24.272571 ignition[979]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
Sep 12 23:48:24.281717 ignition[979]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf"
Sep 12 23:48:24.281717 ignition[979]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Sep 12 23:48:24.281717 ignition[979]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.31.8-arm64.raw"
Sep 12 23:48:24.281717 ignition[979]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.31.8-arm64.raw"
Sep 12 23:48:24.281717 ignition[979]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.31.8-arm64.raw"
Sep 12 23:48:24.281717 ignition[979]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://extensions.flatcar.org/extensions/kubernetes-v1.31.8-arm64.raw: attempt #1
Sep 12 23:48:24.464286 systemd-networkd[796]: eth0: Gained IPv6LL
Sep 12 23:48:24.618507 ignition[979]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK
Sep 12 23:48:25.127968 ignition[979]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.31.8-arm64.raw"
Sep 12 23:48:25.127968 ignition[979]: INFO : files: op(b): [started] processing unit "prepare-helm.service"
Sep 12 23:48:25.131327 ignition[979]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Sep 12 23:48:25.131327 ignition[979]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Sep 12 23:48:25.131327 ignition[979]: INFO : files: op(b): [finished] processing unit "prepare-helm.service"
Sep 12 23:48:25.131327 ignition[979]: INFO : files: op(d): [started] processing unit "coreos-metadata.service"
Sep 12 23:48:25.131327 ignition[979]: INFO : files: op(d): op(e): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
Sep 12 23:48:25.131327 ignition[979]: INFO : files: op(d): op(e): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
Sep 12 23:48:25.131327 ignition[979]: INFO : files: op(d): [finished] processing unit "coreos-metadata.service"
Sep 12 23:48:25.131327 ignition[979]: INFO : files: op(f): [started] setting preset to disabled for "coreos-metadata.service"
Sep 12 23:48:25.144951 ignition[979]: INFO : files: op(f): op(10): [started] removing enablement symlink(s) for "coreos-metadata.service"
Sep 12 23:48:25.147973 ignition[979]: INFO : files: op(f): op(10): [finished] removing enablement symlink(s) for "coreos-metadata.service"
Sep 12 23:48:25.150222 ignition[979]: INFO : files: op(f): [finished] setting preset to disabled for "coreos-metadata.service"
Sep 12 23:48:25.150222 ignition[979]: INFO : files: op(11): [started] setting preset to enabled for "prepare-helm.service"
Sep 12 23:48:25.150222 ignition[979]: INFO : files: op(11): [finished] setting preset to enabled for "prepare-helm.service"
Sep 12 23:48:25.150222 ignition[979]: INFO : files: createResultFile: createFiles: op(12): [started] writing file "/sysroot/etc/.ignition-result.json"
Sep 12 23:48:25.150222 ignition[979]: INFO : files: createResultFile: createFiles: op(12): [finished] writing file "/sysroot/etc/.ignition-result.json"
Sep 12 23:48:25.150222 ignition[979]: INFO : files: files passed
Sep 12 23:48:25.150222 ignition[979]: INFO : Ignition finished successfully
Sep 12 23:48:25.150865 systemd[1]: Finished ignition-files.service - Ignition (files).
Sep 12 23:48:25.153194 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
Sep 12 23:48:25.154635 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
Sep 12 23:48:25.168354 systemd[1]: ignition-quench.service: Deactivated successfully.
Sep 12 23:48:25.169356 initrd-setup-root-after-ignition[1008]: grep: /sysroot/oem/oem-release: No such file or directory
Sep 12 23:48:25.170365 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
Sep 12 23:48:25.172701 initrd-setup-root-after-ignition[1010]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Sep 12 23:48:25.173979 initrd-setup-root-after-ignition[1010]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
Sep 12 23:48:25.175442 initrd-setup-root-after-ignition[1014]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Sep 12 23:48:25.176378 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
Sep 12 23:48:25.177574 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
Sep 12 23:48:25.179871 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
Sep 12 23:48:25.210214 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Sep 12 23:48:25.210323 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
Sep 12 23:48:25.212099 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
Sep 12 23:48:25.213574 systemd[1]: Reached target initrd.target - Initrd Default Target.
Sep 12 23:48:25.214963 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
Sep 12 23:48:25.215632 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
Sep 12 23:48:25.252010 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Sep 12 23:48:25.254070 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
Sep 12 23:48:25.272366 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
Sep 12 23:48:25.273299 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
Sep 12 23:48:25.274863 systemd[1]: Stopped target timers.target - Timer Units.
Sep 12 23:48:25.276228 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Sep 12 23:48:25.276336 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Sep 12 23:48:25.278295 systemd[1]: Stopped target initrd.target - Initrd Default Target.
Sep 12 23:48:25.279878 systemd[1]: Stopped target basic.target - Basic System.
Sep 12 23:48:25.281114 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
Sep 12 23:48:25.282445 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup.
Sep 12 23:48:25.283942 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
Sep 12 23:48:25.285588 systemd[1]: Stopped target initrd-usr-fs.target - Initrd /usr File System.
Sep 12 23:48:25.286978 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
Sep 12 23:48:25.288418 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems.
Sep 12 23:48:25.289914 systemd[1]: Stopped target sysinit.target - System Initialization.
Sep 12 23:48:25.291525 systemd[1]: Stopped target local-fs.target - Local File Systems.
Sep 12 23:48:25.292905 systemd[1]: Stopped target swap.target - Swaps.
Sep 12 23:48:25.294054 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Sep 12 23:48:25.294179 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook.
Sep 12 23:48:25.295925 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes.
Sep 12 23:48:25.297451 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Sep 12 23:48:25.299034 systemd[1]: clevis-luks-askpass.path: Deactivated successfully.
Sep 12 23:48:25.299140 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Sep 12 23:48:25.300771 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Sep 12 23:48:25.300870 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook.
Sep 12 23:48:25.303067 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
Sep 12 23:48:25.303193 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline).
Sep 12 23:48:25.304696 systemd[1]: Stopped target paths.target - Path Units.
Sep 12 23:48:25.305986 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Sep 12 23:48:25.309247 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Sep 12 23:48:25.310216 systemd[1]: Stopped target slices.target - Slice Units.
Sep 12 23:48:25.312006 systemd[1]: Stopped target sockets.target - Socket Units.
Sep 12 23:48:25.313219 systemd[1]: iscsid.socket: Deactivated successfully.
Sep 12 23:48:25.313296 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket.
Sep 12 23:48:25.314539 systemd[1]: iscsiuio.socket: Deactivated successfully.
Sep 12 23:48:25.314610 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Sep 12 23:48:25.315834 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
Sep 12 23:48:25.315936 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion.
Sep 12 23:48:25.317246 systemd[1]: ignition-files.service: Deactivated successfully.
Sep 12 23:48:25.317338 systemd[1]: Stopped ignition-files.service - Ignition (files).
Sep 12 23:48:25.321446 systemd[1]: Stopping ignition-mount.service - Ignition (mount)...
Sep 12 23:48:25.322248 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Sep 12 23:48:25.322378 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes.
Sep 12 23:48:25.324496 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot...
Sep 12 23:48:25.325707 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Sep 12 23:48:25.325820 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices.
Sep 12 23:48:25.327323 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Sep 12 23:48:25.327414 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook.
Sep 12 23:48:25.331824 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Sep 12 23:48:25.346301 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons.
Sep 12 23:48:25.358143 ignition[1035]: INFO : Ignition 2.21.0
Sep 12 23:48:25.358143 ignition[1035]: INFO : Stage: umount
Sep 12 23:48:25.359747 ignition[1035]: INFO : no configs at "/usr/lib/ignition/base.d"
Sep 12 23:48:25.359747 ignition[1035]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Sep 12 23:48:25.362305 ignition[1035]: INFO : umount: umount passed
Sep 12 23:48:25.362305 ignition[1035]: INFO : Ignition finished successfully
Sep 12 23:48:25.362730 systemd[1]: ignition-mount.service: Deactivated successfully.
Sep 12 23:48:25.362818 systemd[1]: Stopped ignition-mount.service - Ignition (mount).
Sep 12 23:48:25.364041 systemd[1]: Stopped target network.target - Network.
Sep 12 23:48:25.364861 systemd[1]: ignition-disks.service: Deactivated successfully.
Sep 12 23:48:25.364925 systemd[1]: Stopped ignition-disks.service - Ignition (disks).
Sep 12 23:48:25.366329 systemd[1]: ignition-kargs.service: Deactivated successfully.
Sep 12 23:48:25.366370 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs).
Sep 12 23:48:25.367668 systemd[1]: ignition-setup.service: Deactivated successfully.
Sep 12 23:48:25.367711 systemd[1]: Stopped ignition-setup.service - Ignition (setup).
Sep 12 23:48:25.368927 systemd[1]: ignition-setup-pre.service: Deactivated successfully.
Sep 12 23:48:25.368961 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup.
Sep 12 23:48:25.370617 systemd[1]: Stopping systemd-networkd.service - Network Configuration...
Sep 12 23:48:25.371949 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution...
Sep 12 23:48:25.374248 systemd[1]: sysroot-boot.mount: Deactivated successfully.
Sep 12 23:48:25.377379 systemd[1]: sysroot-boot.service: Deactivated successfully.
Sep 12 23:48:25.377500 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot.
Sep 12 23:48:25.378877 systemd[1]: initrd-setup-root.service: Deactivated successfully.
Sep 12 23:48:25.378924 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup.
Sep 12 23:48:25.381426 systemd[1]: systemd-resolved.service: Deactivated successfully.
Sep 12 23:48:25.381527 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution.
Sep 12 23:48:25.384888 systemd[1]: run-credentials-systemd\x2dresolved.service.mount: Deactivated successfully.
Sep 12 23:48:25.385148 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
Sep 12 23:48:25.385205 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories.
Sep 12 23:48:25.388621 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup.service.mount: Deactivated successfully.
Sep 12 23:48:25.388846 systemd[1]: systemd-networkd.service: Deactivated successfully.
Sep 12 23:48:25.388975 systemd[1]: Stopped systemd-networkd.service - Network Configuration.
Sep 12 23:48:25.391822 systemd[1]: run-credentials-systemd\x2dnetworkd.service.mount: Deactivated successfully.
Sep 12 23:48:25.392200 systemd[1]: Stopped target network-pre.target - Preparation for Network.
Sep 12 23:48:25.393422 systemd[1]: systemd-networkd.socket: Deactivated successfully.
Sep 12 23:48:25.393460 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket.
Sep 12 23:48:25.395904 systemd[1]: Stopping network-cleanup.service - Network Cleanup...
Sep 12 23:48:25.397236 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
Sep 12 23:48:25.397290 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Sep 12 23:48:25.398955 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Sep 12 23:48:25.398994 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Sep 12 23:48:25.401018 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Sep 12 23:48:25.401057 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules.
Sep 12 23:48:25.402806 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files...
Sep 12 23:48:25.405906 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully.
Sep 12 23:48:25.419775 systemd[1]: systemd-udevd.service: Deactivated successfully.
Sep 12 23:48:25.419948 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files.
Sep 12 23:48:25.421793 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Sep 12 23:48:25.421830 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket.
Sep 12 23:48:25.423631 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Sep 12 23:48:25.423660 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket.
Sep 12 23:48:25.425078 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Sep 12 23:48:25.425123 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook.
Sep 12 23:48:25.427485 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Sep 12 23:48:25.427529 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook.
Sep 12 23:48:25.429566 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Sep 12 23:48:25.429612 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Sep 12 23:48:25.434189 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database...
Sep 12 23:48:25.434950 systemd[1]: systemd-network-generator.service: Deactivated successfully.
Sep 12 23:48:25.435009 systemd[1]: Stopped systemd-network-generator.service - Generate network units from Kernel command line.
Sep 12 23:48:25.438256 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Sep 12 23:48:25.438298 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Sep 12 23:48:25.440396 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Sep 12 23:48:25.440436 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Sep 12 23:48:25.443334 systemd[1]: network-cleanup.service: Deactivated successfully.
Sep 12 23:48:25.448511 systemd[1]: Stopped network-cleanup.service - Network Cleanup.
Sep 12 23:48:25.452791 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Sep 12 23:48:25.452889 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database.
Sep 12 23:48:25.454589 systemd[1]: Reached target initrd-switch-root.target - Switch Root.
Sep 12 23:48:25.456505 systemd[1]: Starting initrd-switch-root.service - Switch Root...
Sep 12 23:48:25.475388 systemd[1]: Switching root.
Sep 12 23:48:25.505271 systemd-journald[244]: Journal stopped
Sep 12 23:48:26.262428 systemd-journald[244]: Received SIGTERM from PID 1 (systemd).
Sep 12 23:48:26.262486 kernel: SELinux: policy capability network_peer_controls=1
Sep 12 23:48:26.262506 kernel: SELinux: policy capability open_perms=1
Sep 12 23:48:26.262516 kernel: SELinux: policy capability extended_socket_class=1
Sep 12 23:48:26.262525 kernel: SELinux: policy capability always_check_network=0
Sep 12 23:48:26.262534 kernel: SELinux: policy capability cgroup_seclabel=1
Sep 12 23:48:26.262558 kernel: SELinux: policy capability nnp_nosuid_transition=1
Sep 12 23:48:26.262567 kernel: SELinux: policy capability genfs_seclabel_symlinks=0
Sep 12 23:48:26.262578 kernel: SELinux: policy capability ioctl_skip_cloexec=0
Sep 12 23:48:26.262590 kernel: SELinux: policy capability userspace_initial_context=0
Sep 12 23:48:26.262600 kernel: audit: type=1403 audit(1757720905.665:2): auid=4294967295 ses=4294967295 lsm=selinux res=1
Sep 12 23:48:26.262627 systemd[1]: Successfully loaded SELinux policy in 49.075ms.
Sep 12 23:48:26.262646 systemd[1]: Relabeled /dev/, /dev/shm/, /run/ in 10.297ms.
Sep 12 23:48:26.262658 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE)
Sep 12 23:48:26.262670 systemd[1]: Detected virtualization kvm.
Sep 12 23:48:26.262680 systemd[1]: Detected architecture arm64.
Sep 12 23:48:26.262691 systemd[1]: Detected first boot.
Sep 12 23:48:26.262705 systemd[1]: Initializing machine ID from VM UUID.
Sep 12 23:48:26.262716 zram_generator::config[1082]: No configuration found.
Sep 12 23:48:26.262727 kernel: NET: Registered PF_VSOCK protocol family
Sep 12 23:48:26.262736 systemd[1]: Populated /etc with preset unit settings.
Sep 12 23:48:26.262748 systemd[1]: run-credentials-systemd\x2djournald.service.mount: Deactivated successfully.
Sep 12 23:48:26.262759 systemd[1]: initrd-switch-root.service: Deactivated successfully.
Sep 12 23:48:26.262770 systemd[1]: Stopped initrd-switch-root.service - Switch Root.
Sep 12 23:48:26.262780 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1.
Sep 12 23:48:26.262790 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config.
Sep 12 23:48:26.262801 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run.
Sep 12 23:48:26.262811 systemd[1]: Created slice system-getty.slice - Slice /system/getty.
Sep 12 23:48:26.262821 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe.
Sep 12 23:48:26.262831 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty.
Sep 12 23:48:26.262842 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit.
Sep 12 23:48:26.262852 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck.
Sep 12 23:48:26.262862 systemd[1]: Created slice user.slice - User and Session Slice.
Sep 12 23:48:26.262872 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Sep 12 23:48:26.262882 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Sep 12 23:48:26.262897 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch.
Sep 12 23:48:26.262927 systemd[1]: Set up automount boot.automount - Boot partition Automount Point.
Sep 12 23:48:26.262937 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point.
Sep 12 23:48:26.262947 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Sep 12 23:48:26.262958 systemd[1]: Expecting device dev-ttyAMA0.device - /dev/ttyAMA0...
Sep 12 23:48:26.262968 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Sep 12 23:48:26.262978 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Sep 12 23:48:26.262989 systemd[1]: Stopped target initrd-switch-root.target - Switch Root.
Sep 12 23:48:26.263000 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems.
Sep 12 23:48:26.263010 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System.
Sep 12 23:48:26.263021 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes.
Sep 12 23:48:26.263031 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Sep 12 23:48:26.263040 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Sep 12 23:48:26.263050 systemd[1]: Reached target slices.target - Slice Units.
Sep 12 23:48:26.263060 systemd[1]: Reached target swap.target - Swaps.
Sep 12 23:48:26.263071 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes.
Sep 12 23:48:26.263081 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket.
Sep 12 23:48:26.263093 systemd[1]: Listening on systemd-creds.socket - Credential Encryption/Decryption.
Sep 12 23:48:26.263103 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Sep 12 23:48:26.263113 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Sep 12 23:48:26.263124 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Sep 12 23:48:26.263141 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket.
Sep 12 23:48:26.263152 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System...
Sep 12 23:48:26.263226 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System...
Sep 12 23:48:26.263251 systemd[1]: Mounting media.mount - External Media Directory...
Sep 12 23:48:26.263262 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System...
Sep 12 23:48:26.263279 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System...
Sep 12 23:48:26.263289 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp...
Sep 12 23:48:26.263300 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
Sep 12 23:48:26.263311 systemd[1]: Reached target machines.target - Containers.
Sep 12 23:48:26.263321 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files...
Sep 12 23:48:26.263331 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Sep 12 23:48:26.263341 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Sep 12 23:48:26.263351 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs...
Sep 12 23:48:26.263362 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Sep 12 23:48:26.263372 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Sep 12 23:48:26.263383 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Sep 12 23:48:26.263393 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse...
Sep 12 23:48:26.263404 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Sep 12 23:48:26.263421 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
Sep 12 23:48:26.263431 systemd[1]: systemd-fsck-root.service: Deactivated successfully.
Sep 12 23:48:26.263443 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device.
Sep 12 23:48:26.263454 systemd[1]: systemd-fsck-usr.service: Deactivated successfully.
Sep 12 23:48:26.263466 systemd[1]: Stopped systemd-fsck-usr.service.
Sep 12 23:48:26.263477 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Sep 12 23:48:26.263487 kernel: fuse: init (API version 7.41)
Sep 12 23:48:26.263496 systemd[1]: Starting systemd-journald.service - Journal Service...
Sep 12 23:48:26.263506 kernel: loop: module loaded
Sep 12 23:48:26.263515 kernel: ACPI: bus type drm_connector registered
Sep 12 23:48:26.263525 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Sep 12 23:48:26.263535 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Sep 12 23:48:26.263547 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems...
Sep 12 23:48:26.263557 systemd[1]: Starting systemd-udev-load-credentials.service - Load udev Rules from Credentials...
Sep 12 23:48:26.263566 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Sep 12 23:48:26.263577 systemd[1]: verity-setup.service: Deactivated successfully.
Sep 12 23:48:26.263587 systemd[1]: Stopped verity-setup.service.
Sep 12 23:48:26.263620 systemd-journald[1154]: Collecting audit messages is disabled.
Sep 12 23:48:26.263644 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System.
Sep 12 23:48:26.263654 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System.
Sep 12 23:48:26.263665 systemd-journald[1154]: Journal started
Sep 12 23:48:26.263684 systemd-journald[1154]: Runtime Journal (/run/log/journal/494fc4284d2c43c9b35135aab457221a) is 6M, max 48.5M, 42.4M free.
Sep 12 23:48:26.030537 systemd[1]: Queued start job for default target multi-user.target.
Sep 12 23:48:26.054188 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6.
Sep 12 23:48:26.054602 systemd[1]: systemd-journald.service: Deactivated successfully.
Sep 12 23:48:26.267645 systemd[1]: Started systemd-journald.service - Journal Service.
Sep 12 23:48:26.268985 systemd[1]: Mounted media.mount - External Media Directory.
Sep 12 23:48:26.269998 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System.
Sep 12 23:48:26.271071 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System.
Sep 12 23:48:26.272235 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp.
Sep 12 23:48:26.273382 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files.
Sep 12 23:48:26.274596 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Sep 12 23:48:26.275941 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Sep 12 23:48:26.276103 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs.
Sep 12 23:48:26.277432 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Sep 12 23:48:26.277590 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Sep 12 23:48:26.278869 systemd[1]: modprobe@drm.service: Deactivated successfully.
Sep 12 23:48:26.279029 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Sep 12 23:48:26.280388 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Sep 12 23:48:26.280533 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Sep 12 23:48:26.281868 systemd[1]: modprobe@fuse.service: Deactivated successfully.
Sep 12 23:48:26.282023 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse.
Sep 12 23:48:26.283508 systemd[1]: modprobe@loop.service: Deactivated successfully.
Sep 12 23:48:26.283656 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Sep 12 23:48:26.284978 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Sep 12 23:48:26.286474 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Sep 12 23:48:26.287883 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems.
Sep 12 23:48:26.289335 systemd[1]: Finished systemd-udev-load-credentials.service - Load udev Rules from Credentials.
Sep 12 23:48:26.301953 systemd[1]: Reached target network-pre.target - Preparation for Network.
Sep 12 23:48:26.304538 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System...
Sep 12 23:48:26.306652 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System...
Sep 12 23:48:26.307709 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
Sep 12 23:48:26.307752 systemd[1]: Reached target local-fs.target - Local File Systems.
Sep 12 23:48:26.309646 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management.
Sep 12 23:48:26.313947 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache...
Sep 12 23:48:26.315058 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Sep 12 23:48:26.316106 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database...
Sep 12 23:48:26.318117 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage...
Sep 12 23:48:26.319219 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Sep 12 23:48:26.320342 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed...
Sep 12 23:48:26.321464 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Sep 12 23:48:26.323384 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Sep 12 23:48:26.326481 systemd-journald[1154]: Time spent on flushing to /var/log/journal/494fc4284d2c43c9b35135aab457221a is 19.509ms for 884 entries.
Sep 12 23:48:26.326481 systemd-journald[1154]: System Journal (/var/log/journal/494fc4284d2c43c9b35135aab457221a) is 8M, max 195.6M, 187.6M free.
Sep 12 23:48:26.362660 systemd-journald[1154]: Received client request to flush runtime journal.
Sep 12 23:48:26.363294 kernel: loop0: detected capacity change from 0 to 138376
Sep 12 23:48:26.363320 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher
Sep 12 23:48:26.326377 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/...
Sep 12 23:48:26.329493 systemd[1]: Starting systemd-sysusers.service - Create System Users...
Sep 12 23:48:26.333489 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Sep 12 23:48:26.335469 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System.
Sep 12 23:48:26.337522 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System.
Sep 12 23:48:26.341203 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed.
Sep 12 23:48:26.345493 systemd[1]: Reached target first-boot-complete.target - First Boot Complete.
Sep 12 23:48:26.352290 systemd[1]: Starting systemd-machine-id-commit.service - Save Transient machine-id to Disk...
Sep 12 23:48:26.367981 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Sep 12 23:48:26.372199 kernel: loop1: detected capacity change from 0 to 107312
Sep 12 23:48:26.373465 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage.
Sep 12 23:48:26.375363 systemd[1]: Finished systemd-sysusers.service - Create System Users.
Sep 12 23:48:26.380289 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Sep 12 23:48:26.394859 systemd[1]: Finished systemd-machine-id-commit.service - Save Transient machine-id to Disk.
Sep 12 23:48:26.402182 kernel: loop2: detected capacity change from 0 to 203944
Sep 12 23:48:26.403791 systemd-tmpfiles[1216]: ACLs are not supported, ignoring.
Sep 12 23:48:26.403810 systemd-tmpfiles[1216]: ACLs are not supported, ignoring.
Sep 12 23:48:26.409272 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Sep 12 23:48:26.427196 kernel: loop3: detected capacity change from 0 to 138376
Sep 12 23:48:26.436181 kernel: loop4: detected capacity change from 0 to 107312
Sep 12 23:48:26.441178 kernel: loop5: detected capacity change from 0 to 203944
Sep 12 23:48:26.444936 (sd-merge)[1221]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes'.
Sep 12 23:48:26.445349 (sd-merge)[1221]: Merged extensions into '/usr'.
Sep 12 23:48:26.450256 systemd[1]: Reload requested from client PID 1198 ('systemd-sysext') (unit systemd-sysext.service)...
Sep 12 23:48:26.450279 systemd[1]: Reloading...
Sep 12 23:48:26.497190 zram_generator::config[1246]: No configuration found.
Sep 12 23:48:26.581617 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Sep 12 23:48:26.598509 ldconfig[1193]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start.
Sep 12 23:48:26.646961 systemd[1]: etc-machine\x2did.mount: Deactivated successfully.
Sep 12 23:48:26.647301 systemd[1]: Reloading finished in 196 ms.
Sep 12 23:48:26.679722 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache.
Sep 12 23:48:26.680979 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/.
Sep 12 23:48:26.708549 systemd[1]: Starting ensure-sysext.service...
Sep 12 23:48:26.710173 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Sep 12 23:48:26.718328 systemd[1]: Reload requested from client PID 1281 ('systemctl') (unit ensure-sysext.service)...
Sep 12 23:48:26.718345 systemd[1]: Reloading...
Sep 12 23:48:26.725460 systemd-tmpfiles[1282]: /usr/lib/tmpfiles.d/nfs-utils.conf:6: Duplicate line for path "/var/lib/nfs/sm", ignoring.
Sep 12 23:48:26.725487 systemd-tmpfiles[1282]: /usr/lib/tmpfiles.d/nfs-utils.conf:7: Duplicate line for path "/var/lib/nfs/sm.bak", ignoring.
Sep 12 23:48:26.725688 systemd-tmpfiles[1282]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring.
Sep 12 23:48:26.725860 systemd-tmpfiles[1282]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring.
Sep 12 23:48:26.726464 systemd-tmpfiles[1282]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring.
Sep 12 23:48:26.726653 systemd-tmpfiles[1282]: ACLs are not supported, ignoring.
Sep 12 23:48:26.726692 systemd-tmpfiles[1282]: ACLs are not supported, ignoring.
Sep 12 23:48:26.729570 systemd-tmpfiles[1282]: Detected autofs mount point /boot during canonicalization of boot.
Sep 12 23:48:26.729665 systemd-tmpfiles[1282]: Skipping /boot
Sep 12 23:48:26.738068 systemd-tmpfiles[1282]: Detected autofs mount point /boot during canonicalization of boot.
Sep 12 23:48:26.738223 systemd-tmpfiles[1282]: Skipping /boot
Sep 12 23:48:26.769402 zram_generator::config[1310]: No configuration found.
Sep 12 23:48:26.834079 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Sep 12 23:48:26.895762 systemd[1]: Reloading finished in 177 ms.
Sep 12 23:48:26.915662 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database.
Sep 12 23:48:26.920888 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Sep 12 23:48:26.937249 systemd[1]: Starting audit-rules.service - Load Audit Rules...
Sep 12 23:48:26.939429 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs...
Sep 12 23:48:26.941392 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog...
Sep 12 23:48:26.945292 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Sep 12 23:48:26.949420 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Sep 12 23:48:26.953328 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP...
Sep 12 23:48:26.958623 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Sep 12 23:48:26.964606 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Sep 12 23:48:26.966824 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Sep 12 23:48:26.969127 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Sep 12 23:48:26.970128 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Sep 12 23:48:26.970264 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Sep 12 23:48:26.972120 systemd[1]: Starting systemd-userdbd.service - User Database Manager...
Sep 12 23:48:26.975554 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog.
Sep 12 23:48:26.977290 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Sep 12 23:48:26.977465 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Sep 12 23:48:26.978948 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Sep 12 23:48:26.979116 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Sep 12 23:48:26.985019 systemd[1]: modprobe@loop.service: Deactivated successfully.
Sep 12 23:48:26.985374 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Sep 12 23:48:26.990878 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs.
Sep 12 23:48:26.993721 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Sep 12 23:48:26.995120 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Sep 12 23:48:26.997294 systemd-udevd[1350]: Using default interface naming scheme 'v255'.
Sep 12 23:48:26.997589 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Sep 12 23:48:27.004931 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Sep 12 23:48:27.006062 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Sep 12 23:48:27.006195 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Sep 12 23:48:27.008102 systemd[1]: Starting systemd-update-done.service - Update is Completed...
Sep 12 23:48:27.009288 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
Sep 12 23:48:27.010492 augenrules[1382]: No rules
Sep 12 23:48:27.010716 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP.
Sep 12 23:48:27.012348 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Sep 12 23:48:27.012489 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Sep 12 23:48:27.014615 systemd[1]: Started systemd-userdbd.service - User Database Manager.
Sep 12 23:48:27.015910 systemd[1]: audit-rules.service: Deactivated successfully.
Sep 12 23:48:27.016082 systemd[1]: Finished audit-rules.service - Load Audit Rules.
Sep 12 23:48:27.017653 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Sep 12 23:48:27.017845 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Sep 12 23:48:27.019428 systemd[1]: modprobe@loop.service: Deactivated successfully.
Sep 12 23:48:27.019709 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Sep 12 23:48:27.021336 systemd[1]: Finished systemd-update-done.service - Update is Completed.
Sep 12 23:48:27.026770 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Sep 12 23:48:27.032935 systemd[1]: Starting audit-rules.service - Load Audit Rules...
Sep 12 23:48:27.034873 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Sep 12 23:48:27.037671 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Sep 12 23:48:27.040460 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Sep 12 23:48:27.046531 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Sep 12 23:48:27.050423 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Sep 12 23:48:27.052334 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Sep 12 23:48:27.052468 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Sep 12 23:48:27.054228 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Sep 12 23:48:27.055102 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
Sep 12 23:48:27.060070 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Sep 12 23:48:27.065527 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Sep 12 23:48:27.067660 systemd[1]: modprobe@drm.service: Deactivated successfully.
Sep 12 23:48:27.067824 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Sep 12 23:48:27.069744 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Sep 12 23:48:27.069947 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Sep 12 23:48:27.073210 augenrules[1404]: /sbin/augenrules: No change
Sep 12 23:48:27.073375 systemd[1]: Finished ensure-sysext.service.
Sep 12 23:48:27.075838 systemd[1]: modprobe@loop.service: Deactivated successfully.
Sep 12 23:48:27.076431 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Sep 12 23:48:27.081625 augenrules[1452]: No rules
Sep 12 23:48:27.084031 systemd[1]: audit-rules.service: Deactivated successfully.
Sep 12 23:48:27.084735 systemd[1]: Finished audit-rules.service - Load Audit Rules.
Sep 12 23:48:27.092665 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Sep 12 23:48:27.092724 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Sep 12 23:48:27.097142 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization...
Sep 12 23:48:27.123804 systemd[1]: Condition check resulted in dev-ttyAMA0.device - /dev/ttyAMA0 being skipped.
Sep 12 23:48:27.170223 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
Sep 12 23:48:27.172583 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM...
Sep 12 23:48:27.192966 systemd-resolved[1348]: Positive Trust Anchors:
Sep 12 23:48:27.192984 systemd-resolved[1348]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Sep 12 23:48:27.193402 systemd-resolved[1348]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Sep 12 23:48:27.199368 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM.
Sep 12 23:48:27.200310 systemd-resolved[1348]: Defaulting to hostname 'linux'.
Sep 12 23:48:27.203236 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Sep 12 23:48:27.204235 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Sep 12 23:48:27.214455 systemd-networkd[1431]: lo: Link UP
Sep 12 23:48:27.214729 systemd-networkd[1431]: lo: Gained carrier
Sep 12 23:48:27.215620 systemd-networkd[1431]: Enumeration completed
Sep 12 23:48:27.215703 systemd[1]: Started systemd-networkd.service - Network Configuration.
Sep 12 23:48:27.216248 systemd-networkd[1431]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Sep 12 23:48:27.216314 systemd-networkd[1431]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Sep 12 23:48:27.216961 systemd-networkd[1431]: eth0: Link UP
Sep 12 23:48:27.216976 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization.
Sep 12 23:48:27.217281 systemd-networkd[1431]: eth0: Gained carrier
Sep 12 23:48:27.217419 systemd-networkd[1431]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Sep 12 23:48:27.218306 systemd[1]: Reached target network.target - Network.
Sep 12 23:48:27.219051 systemd[1]: Reached target sysinit.target - System Initialization.
Sep 12 23:48:27.220227 systemd[1]: Started motdgen.path - Watch for update engine configuration changes.
Sep 12 23:48:27.221421 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data.
Sep 12 23:48:27.222368 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories.
Sep 12 23:48:27.223353 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate).
Sep 12 23:48:27.223384 systemd[1]: Reached target paths.target - Path Units.
Sep 12 23:48:27.224091 systemd[1]: Reached target time-set.target - System Time Set.
Sep 12 23:48:27.225105 systemd[1]: Started logrotate.timer - Daily rotation of log files.
Sep 12 23:48:27.226206 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information..
Sep 12 23:48:27.227470 systemd[1]: Reached target timers.target - Timer Units.
Sep 12 23:48:27.229221 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket.
Sep 12 23:48:27.231484 systemd[1]: Starting docker.socket - Docker Socket for the API...
Sep 12 23:48:27.234272 systemd-networkd[1431]: eth0: DHCPv4 address 10.0.0.100/16, gateway 10.0.0.1 acquired from 10.0.0.1
Sep 12 23:48:27.234483 systemd[1]: Listening on sshd-unix-local.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_UNIX Local).
Sep 12 23:48:27.235622 systemd[1]: Listening on sshd-vsock.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_VSOCK).
Sep 12 23:48:27.236667 systemd[1]: Reached target ssh-access.target - SSH Access Available.
Sep 12 23:48:27.236917 systemd-timesyncd[1460]: Network configuration changed, trying to establish connection.
Sep 12 23:48:27.238029 systemd-timesyncd[1460]: Contacted time server 10.0.0.1:123 (10.0.0.1).
Sep 12 23:48:27.238079 systemd-timesyncd[1460]: Initial clock synchronization to Fri 2025-09-12 23:48:27.312218 UTC.
Sep 12 23:48:27.240122 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket.
Sep 12 23:48:27.241404 systemd[1]: Listening on systemd-hostnamed.socket - Hostname Service Socket.
Sep 12 23:48:27.244388 systemd[1]: Starting systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd...
Sep 12 23:48:27.248364 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured...
Sep 12 23:48:27.249858 systemd[1]: Listening on docker.socket - Docker Socket for the API.
Sep 12 23:48:27.251009 systemd[1]: Reached target sockets.target - Socket Units.
Sep 12 23:48:27.252185 systemd[1]: Reached target basic.target - Basic System.
Sep 12 23:48:27.252923 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met.
Sep 12 23:48:27.252950 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met.
Sep 12 23:48:27.265385 systemd[1]: Starting containerd.service - containerd container runtime...
Sep 12 23:48:27.268190 systemd[1]: Starting dbus.service - D-Bus System Message Bus...
Sep 12 23:48:27.273470 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown...
Sep 12 23:48:27.275341 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit...
Sep 12 23:48:27.277017 systemd[1]: Starting extend-filesystems.service - Extend Filesystems...
Sep 12 23:48:27.277909 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment).
Sep 12 23:48:27.282318 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd...
Sep 12 23:48:27.284347 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin...
Sep 12 23:48:27.287233 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline...
Sep 12 23:48:27.290439 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys...
Sep 12 23:48:27.291519 jq[1493]: false
Sep 12 23:48:27.293376 systemd[1]: Starting systemd-logind.service - User Login Management...
Sep 12 23:48:27.295008 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0).
Sep 12 23:48:27.295435 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details.
Sep 12 23:48:27.296093 systemd[1]: Starting update-engine.service - Update Engine...
Sep 12 23:48:27.297631 extend-filesystems[1494]: Found /dev/vda6
Sep 12 23:48:27.302385 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition...
Sep 12 23:48:27.304316 systemd[1]: Finished systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd.
Sep 12 23:48:27.305739 extend-filesystems[1494]: Found /dev/vda9
Sep 12 23:48:27.306849 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown.
Sep 12 23:48:27.308236 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'.
Sep 12 23:48:27.308412 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped.
Sep 12 23:48:27.309407 extend-filesystems[1494]: Checking size of /dev/vda9
Sep 12 23:48:27.309568 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully.
Sep 12 23:48:27.309735 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline.
Sep 12 23:48:27.317552 jq[1506]: true
Sep 12 23:48:27.322270 extend-filesystems[1494]: Resized partition /dev/vda9
Sep 12 23:48:27.324359 extend-filesystems[1531]: resize2fs 1.47.2 (1-Jan-2025)
Sep 12 23:48:27.328211 kernel: EXT4-fs (vda9): resizing filesystem from 553472 to 1864699 blocks
Sep 12 23:48:27.332818 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Sep 12 23:48:27.336786 tar[1513]: linux-arm64/helm
Sep 12 23:48:27.347418 kernel: EXT4-fs (vda9): resized filesystem to 1864699
Sep 12 23:48:27.356817 dbus-daemon[1490]: [system] SELinux support is enabled
Sep 12 23:48:27.357439 systemd[1]: Started dbus.service - D-Bus System Message Bus.
Sep 12 23:48:27.359412 extend-filesystems[1531]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required
Sep 12 23:48:27.359412 extend-filesystems[1531]: old_desc_blocks = 1, new_desc_blocks = 1
Sep 12 23:48:27.359412 extend-filesystems[1531]: The filesystem on /dev/vda9 is now 1864699 (4k) blocks long.
Sep 12 23:48:27.366788 extend-filesystems[1494]: Resized filesystem in /dev/vda9
Sep 12 23:48:27.361342 systemd[1]: motdgen.service: Deactivated successfully.
Sep 12 23:48:27.361559 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd.
Sep 12 23:48:27.361562 (ntainerd)[1532]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR
Sep 12 23:48:27.362794 systemd[1]: extend-filesystems.service: Deactivated successfully.
Sep 12 23:48:27.363623 systemd[1]: Finished extend-filesystems.service - Extend Filesystems.
Sep 12 23:48:27.368852 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml).
Sep 12 23:48:27.368899 systemd[1]: Reached target system-config.target - Load system-provided cloud configs.
Sep 12 23:48:27.371351 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url).
Sep 12 23:48:27.371379 systemd[1]: Reached target user-config.target - Load user-provided cloud configs.
Sep 12 23:48:27.377247 jq[1530]: true
Sep 12 23:48:27.390735 update_engine[1504]: I20250912 23:48:27.390055 1504 main.cc:92] Flatcar Update Engine starting
Sep 12 23:48:27.393630 systemd[1]: Started update-engine.service - Update Engine.
Sep 12 23:48:27.395421 update_engine[1504]: I20250912 23:48:27.395375 1504 update_check_scheduler.cc:74] Next update check in 6m29s
Sep 12 23:48:27.405669 systemd[1]: Started locksmithd.service - Cluster reboot manager.
Sep 12 23:48:27.452233 bash[1559]: Updated "/home/core/.ssh/authorized_keys"
Sep 12 23:48:27.457215 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition.
Sep 12 23:48:27.458679 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met.
Sep 12 23:48:27.463568 systemd-logind[1503]: Watching system buttons on /dev/input/event0 (Power Button)
Sep 12 23:48:27.463825 systemd-logind[1503]: New seat seat0.
Sep 12 23:48:27.489619 systemd[1]: Started systemd-logind.service - User Login Management.
Sep 12 23:48:27.496649 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Sep 12 23:48:27.524459 locksmithd[1543]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot"
Sep 12 23:48:27.548677 containerd[1532]: time="2025-09-12T23:48:27Z" level=warning msg="Ignoring unknown key in TOML" column=1 error="strict mode: fields in the document are missing in the target struct" file=/usr/share/containerd/config.toml key=subreaper row=8
Sep 12 23:48:27.550192 containerd[1532]: time="2025-09-12T23:48:27.549449560Z" level=info msg="starting containerd" revision=06b99ca80cdbfbc6cc8bd567021738c9af2b36ce version=v2.0.4
Sep 12 23:48:27.557151 containerd[1532]: time="2025-09-12T23:48:27.557106200Z" level=warning msg="Configuration migrated from version 2, use `containerd config migrate` to avoid migration" t="8.84µs"
Sep 12 23:48:27.557151 containerd[1532]: time="2025-09-12T23:48:27.557143720Z" level=info msg="loading plugin" id=io.containerd.image-verifier.v1.bindir type=io.containerd.image-verifier.v1
Sep 12 23:48:27.557228 containerd[1532]: time="2025-09-12T23:48:27.557171520Z" level=info msg="loading plugin" id=io.containerd.internal.v1.opt type=io.containerd.internal.v1
Sep 12 23:48:27.557340 containerd[1532]: time="2025-09-12T23:48:27.557311800Z" level=info msg="loading plugin" id=io.containerd.warning.v1.deprecations type=io.containerd.warning.v1
Sep 12 23:48:27.557340 containerd[1532]: time="2025-09-12T23:48:27.557334200Z" level=info msg="loading plugin" id=io.containerd.content.v1.content type=io.containerd.content.v1
Sep 12 23:48:27.557397 containerd[1532]: time="2025-09-12T23:48:27.557357480Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1
Sep 12 23:48:27.557415 containerd[1532]: time="2025-09-12T23:48:27.557404920Z" level=info msg="skip loading plugin" error="no scratch file generator: skip plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1
Sep 12 23:48:27.557433 containerd[1532]: time="2025-09-12T23:48:27.557416960Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1
Sep 12 23:48:27.557733 containerd[1532]: time="2025-09-12T23:48:27.557704600Z" level=info msg="skip loading plugin" error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1
Sep 12 23:48:27.557733 containerd[1532]: time="2025-09-12T23:48:27.557727920Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1
Sep 12 23:48:27.557782 containerd[1532]: time="2025-09-12T23:48:27.557745440Z" level=info msg="skip loading plugin" error="devmapper not configured: skip plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1
Sep 12 23:48:27.557782 containerd[1532]: time="2025-09-12T23:48:27.557757840Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.native type=io.containerd.snapshotter.v1
Sep 12 23:48:27.558503 containerd[1532]: time="2025-09-12T23:48:27.558472360Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.overlayfs type=io.containerd.snapshotter.v1
Sep 12 23:48:27.558902 containerd[1532]: time="2025-09-12T23:48:27.558707960Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1
Sep 12 23:48:27.558902 containerd[1532]: time="2025-09-12T23:48:27.558754040Z" level=info msg="skip loading plugin" error="lstat /var/lib/containerd/io.containerd.snapshotter.v1.zfs: no such file or directory: skip plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1
Sep 12 23:48:27.558902 containerd[1532]: time="2025-09-12T23:48:27.558768880Z" level=info msg="loading plugin" id=io.containerd.event.v1.exchange type=io.containerd.event.v1
Sep 12 23:48:27.558902 containerd[1532]: time="2025-09-12T23:48:27.558800320Z" level=info msg="loading plugin" id=io.containerd.monitor.task.v1.cgroups type=io.containerd.monitor.task.v1
Sep 12 23:48:27.559237 containerd[1532]: time="2025-09-12T23:48:27.559206640Z" level=info msg="loading plugin" id=io.containerd.metadata.v1.bolt type=io.containerd.metadata.v1
Sep 12 23:48:27.559669 containerd[1532]: time="2025-09-12T23:48:27.559648240Z" level=info msg="metadata content store policy set" policy=shared
Sep 12 23:48:27.562916 containerd[1532]: time="2025-09-12T23:48:27.562891240Z" level=info msg="loading plugin" id=io.containerd.gc.v1.scheduler type=io.containerd.gc.v1
Sep 12 23:48:27.563024 containerd[1532]: time="2025-09-12T23:48:27.563008920Z" level=info msg="loading plugin" id=io.containerd.differ.v1.walking type=io.containerd.differ.v1
Sep 12 23:48:27.563177 containerd[1532]: time="2025-09-12T23:48:27.563109560Z" level=info msg="loading plugin" id=io.containerd.lease.v1.manager type=io.containerd.lease.v1
Sep 12 23:48:27.563177 containerd[1532]: time="2025-09-12T23:48:27.563137200Z" level=info msg="loading plugin" id=io.containerd.service.v1.containers-service type=io.containerd.service.v1
Sep 12 23:48:27.563177 containerd[1532]: time="2025-09-12T23:48:27.563152120Z" level=info msg="loading plugin" id=io.containerd.service.v1.content-service type=io.containerd.service.v1
Sep 12 23:48:27.563278 containerd[1532]: time="2025-09-12T23:48:27.563262120Z" level=info msg="loading plugin" id=io.containerd.service.v1.diff-service type=io.containerd.service.v1
Sep 12 23:48:27.563327 containerd[1532]: time="2025-09-12T23:48:27.563316280Z" level=info msg="loading plugin" id=io.containerd.service.v1.images-service type=io.containerd.service.v1
Sep 12 23:48:27.563375 containerd[1532]: time="2025-09-12T23:48:27.563364640Z" level=info msg="loading plugin" id=io.containerd.service.v1.introspection-service type=io.containerd.service.v1
Sep 12 23:48:27.563421 containerd[1532]: time="2025-09-12T23:48:27.563411120Z" level=info msg="loading plugin" id=io.containerd.service.v1.namespaces-service type=io.containerd.service.v1
Sep 12 23:48:27.563468 containerd[1532]: time="2025-09-12T23:48:27.563457480Z" level=info msg="loading plugin" id=io.containerd.service.v1.snapshots-service type=io.containerd.service.v1
Sep 12 23:48:27.563512 containerd[1532]: time="2025-09-12T23:48:27.563501840Z" level=info msg="loading plugin" id=io.containerd.shim.v1.manager type=io.containerd.shim.v1
Sep 12 23:48:27.563574 containerd[1532]: time="2025-09-12T23:48:27.563561600Z" level=info msg="loading plugin" id=io.containerd.runtime.v2.task type=io.containerd.runtime.v2
Sep 12 23:48:27.563739 containerd[1532]: time="2025-09-12T23:48:27.563719400Z" level=info msg="loading plugin" id=io.containerd.service.v1.tasks-service type=io.containerd.service.v1
Sep 12 23:48:27.563810 containerd[1532]: time="2025-09-12T23:48:27.563795720Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.containers type=io.containerd.grpc.v1
Sep 12 23:48:27.563863 containerd[1532]: time="2025-09-12T23:48:27.563851600Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.content type=io.containerd.grpc.v1
Sep 12 23:48:27.563910 containerd[1532]: time="2025-09-12T23:48:27.563898760Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.diff type=io.containerd.grpc.v1
Sep 12 23:48:27.563963 containerd[1532]: time="2025-09-12T23:48:27.563951320Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.events type=io.containerd.grpc.v1
Sep 12 23:48:27.564022 containerd[1532]: time="2025-09-12T23:48:27.564010080Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.images type=io.containerd.grpc.v1
Sep 12 23:48:27.564084 containerd[1532]: time="2025-09-12T23:48:27.564072080Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.introspection type=io.containerd.grpc.v1
Sep 12 23:48:27.564152 containerd[1532]: time="2025-09-12T23:48:27.564136920Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.leases type=io.containerd.grpc.v1
Sep 12 23:48:27.564218 containerd[1532]: time="2025-09-12T23:48:27.564205480Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.namespaces type=io.containerd.grpc.v1
Sep 12 23:48:27.564272 containerd[1532]: time="2025-09-12T23:48:27.564260000Z" level=info msg="loading plugin" id=io.containerd.sandbox.store.v1.local type=io.containerd.sandbox.store.v1
Sep 12 23:48:27.564338 containerd[1532]: time="2025-09-12T23:48:27.564324880Z" level=info msg="loading plugin" id=io.containerd.cri.v1.images type=io.containerd.cri.v1
Sep 12 23:48:27.564565 containerd[1532]: time="2025-09-12T23:48:27.564549640Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\" for snapshotter \"overlayfs\""
Sep 12 23:48:27.564621 containerd[1532]: time="2025-09-12T23:48:27.564608520Z" level=info msg="Start snapshots syncer"
Sep 12 23:48:27.564704 containerd[1532]: time="2025-09-12T23:48:27.564689080Z" level=info msg="loading plugin" id=io.containerd.cri.v1.runtime type=io.containerd.cri.v1
Sep 12 23:48:27.565181 containerd[1532]: time="2025-09-12T23:48:27.565023800Z" level=info msg="starting cri plugin" config="{\"containerd\":{\"defaultRuntimeName\":\"runc\",\"runtimes\":{\"runc\":{\"runtimeType\":\"io.containerd.runc.v2\",\"runtimePath\":\"\",\"PodAnnotations\":null,\"ContainerAnnotations\":null,\"options\":{\"BinaryName\":\"\",\"CriuImagePath\":\"\",\"CriuWorkPath\":\"\",\"IoGid\":0,\"IoUid\":0,\"NoNewKeyring\":false,\"Root\":\"\",\"ShimCgroup\":\"\",\"SystemdCgroup\":true},\"privileged_without_host_devices\":false,\"privileged_without_host_devices_all_devices_allowed\":false,\"baseRuntimeSpec\":\"\",\"cniConfDir\":\"\",\"cniMaxConfNum\":0,\"snapshotter\":\"\",\"sandboxer\":\"podsandbox\",\"io_type\":\"\"}},\"ignoreBlockIONotEnabledErrors\":false,\"ignoreRdtNotEnabledErrors\":false},\"cni\":{\"binDir\":\"/opt/cni/bin\",\"confDir\":\"/etc/cni/net.d\",\"maxConfNum\":1,\"setupSerially\":false,\"confTemplate\":\"\",\"ipPref\":\"\",\"useInternalLoopback\":false},\"enableSelinux\":true,\"selinuxCategoryRange\":1024,\"maxContainerLogSize\":16384,\"disableApparmor\":false,\"restrictOOMScoreAdj\":false,\"disableProcMount\":false,\"unsetSeccompProfile\":\"\",\"tolerateMissingHugetlbController\":true,\"disableHugetlbController\":true,\"device_ownership_from_security_context\":false,\"ignoreImageDefinedVolumes\":false,\"netnsMountsUnderStateDir\":false,\"enableUnprivilegedPorts\":true,\"enableUnprivilegedICMP\":true,\"enableCDI\":true,\"cdiSpecDirs\":[\"/etc/cdi\",\"/var/run/cdi\"],\"drainExecSyncIOTimeout\":\"0s\",\"ignoreDeprecationWarnings\":null,\"containerdRootDir\":\"/var/lib/containerd\",\"containerdEndpoint\":\"/run/containerd/containerd.sock\",\"rootDir\":\"/var/lib/containerd/io.containerd.grpc.v1.cri\",\"stateDir\":\"/run/containerd/io.containerd.grpc.v1.cri\"}"
Sep 12 23:48:27.565181 containerd[1532]: time="2025-09-12T23:48:27.565086000Z" level=info msg="loading plugin" id=io.containerd.podsandbox.controller.v1.podsandbox type=io.containerd.podsandbox.controller.v1
Sep 12 23:48:27.565389 containerd[1532]: time="2025-09-12T23:48:27.565369240Z" level=info msg="loading plugin" id=io.containerd.sandbox.controller.v1.shim type=io.containerd.sandbox.controller.v1
Sep 12 23:48:27.565558 containerd[1532]: time="2025-09-12T23:48:27.565539400Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandbox-controllers type=io.containerd.grpc.v1
Sep 12 23:48:27.565633 containerd[1532]: time="2025-09-12T23:48:27.565619440Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandboxes type=io.containerd.grpc.v1
Sep 12 23:48:27.565684 containerd[1532]: time="2025-09-12T23:48:27.565671480Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.snapshots type=io.containerd.grpc.v1
Sep 12 23:48:27.565732 containerd[1532]: time="2025-09-12T23:48:27.565721360Z" level=info msg="loading plugin" id=io.containerd.streaming.v1.manager type=io.containerd.streaming.v1
Sep 12 23:48:27.565781 containerd[1532]: time="2025-09-12T23:48:27.565769000Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.streaming type=io.containerd.grpc.v1
Sep 12 23:48:27.565845 containerd[1532]: time="2025-09-12T23:48:27.565832720Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.tasks type=io.containerd.grpc.v1
Sep 12 23:48:27.565898 containerd[1532]: time="2025-09-12T23:48:27.565886760Z" level=info msg="loading plugin" id=io.containerd.transfer.v1.local type=io.containerd.transfer.v1
Sep 12 23:48:27.565963 containerd[1532]: time="2025-09-12T23:48:27.565950360Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.transfer type=io.containerd.grpc.v1
Sep 12 23:48:27.566013 containerd[1532]: time="2025-09-12T23:48:27.566002440Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.version type=io.containerd.grpc.v1
Sep 12 23:48:27.566063 containerd[1532]: time="2025-09-12T23:48:27.566050760Z" level=info msg="loading plugin" id=io.containerd.monitor.container.v1.restart type=io.containerd.monitor.container.v1
Sep 12 23:48:27.566204 containerd[1532]: time="2025-09-12T23:48:27.566186360Z" level=info msg="loading plugin" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1
Sep 12 23:48:27.567213 containerd[1532]: time="2025-09-12T23:48:27.566257920Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1
Sep 12 23:48:27.567213 containerd[1532]: time="2025-09-12T23:48:27.566271680Z" level=info msg="loading plugin" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1
Sep 12 23:48:27.567213 containerd[1532]: time="2025-09-12T23:48:27.566280720Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1
Sep 12 23:48:27.567213 containerd[1532]: time="2025-09-12T23:48:27.566288760Z" level=info msg="loading plugin" id=io.containerd.ttrpc.v1.otelttrpc type=io.containerd.ttrpc.v1
Sep 12 23:48:27.567213 containerd[1532]: time="2025-09-12T23:48:27.566299160Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.healthcheck type=io.containerd.grpc.v1
Sep 12 23:48:27.567213 containerd[1532]: time="2025-09-12T23:48:27.566309600Z" level=info msg="loading plugin" id=io.containerd.nri.v1.nri type=io.containerd.nri.v1
Sep 12 23:48:27.567213 containerd[1532]: time="2025-09-12T23:48:27.566390680Z" level=info msg="runtime interface created"
Sep 12 23:48:27.567213 containerd[1532]: time="2025-09-12T23:48:27.566396920Z" level=info msg="created NRI interface"
Sep 12 23:48:27.567213 containerd[1532]: time="2025-09-12T23:48:27.566404440Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.cri type=io.containerd.grpc.v1
Sep 12 23:48:27.567213 containerd[1532]: time="2025-09-12T23:48:27.566415840Z" level=info msg="Connect containerd service"
Sep 12 23:48:27.567213 containerd[1532]: time="2025-09-12T23:48:27.566449040Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this"
Sep 12 23:48:27.567213 containerd[1532]: time="2025-09-12T23:48:27.567121600Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
Sep 12 23:48:27.645697 containerd[1532]: time="2025-09-12T23:48:27.645449320Z" level=info msg="Start subscribing containerd event"
Sep 12 23:48:27.645697 containerd[1532]: time="2025-09-12T23:48:27.645648440Z" level=info msg="Start recovering state"
Sep 12 23:48:27.645845 containerd[1532]: time="2025-09-12T23:48:27.645826920Z" level=info msg="Start event monitor"
Sep 12 23:48:27.645962 containerd[1532]: time="2025-09-12T23:48:27.645850320Z" level=info msg="Start cni network conf syncer for default"
Sep 12 23:48:27.646057 containerd[1532]: time="2025-09-12T23:48:27.646043160Z" level=info msg="Start streaming server"
Sep 12 23:48:27.646076 containerd[1532]: time="2025-09-12T23:48:27.646060720Z" level=info msg="Registered namespace \"k8s.io\" with NRI"
Sep 12 23:48:27.646076 containerd[1532]: time="2025-09-12T23:48:27.646068520Z" level=info msg="runtime interface starting up..."
Sep 12 23:48:27.646076 containerd[1532]: time="2025-09-12T23:48:27.646073920Z" level=info msg="starting plugins..."
Sep 12 23:48:27.646158 containerd[1532]: time="2025-09-12T23:48:27.646091120Z" level=info msg="Synchronizing NRI (plugin) with current runtime state"
Sep 12 23:48:27.646359 containerd[1532]: time="2025-09-12T23:48:27.646140720Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc
Sep 12 23:48:27.646384 containerd[1532]: time="2025-09-12T23:48:27.646378400Z" level=info msg=serving... address=/run/containerd/containerd.sock
Sep 12 23:48:27.646722 systemd[1]: Started containerd.service - containerd container runtime.
Sep 12 23:48:27.647711 containerd[1532]: time="2025-09-12T23:48:27.647682680Z" level=info msg="containerd successfully booted in 0.099637s"
Sep 12 23:48:27.745270 tar[1513]: linux-arm64/LICENSE
Sep 12 23:48:27.745422 tar[1513]: linux-arm64/README.md
Sep 12 23:48:27.763266 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin.
Sep 12 23:48:28.139572 sshd_keygen[1515]: ssh-keygen: generating new host keys: RSA ECDSA ED25519
Sep 12 23:48:28.158643 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys.
Sep 12 23:48:28.161150 systemd[1]: Starting issuegen.service - Generate /run/issue...
Sep 12 23:48:28.181935 systemd[1]: issuegen.service: Deactivated successfully.
Sep 12 23:48:28.182152 systemd[1]: Finished issuegen.service - Generate /run/issue.
Sep 12 23:48:28.184737 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions...
Sep 12 23:48:28.204382 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions.
Sep 12 23:48:28.206996 systemd[1]: Started getty@tty1.service - Getty on tty1.
Sep 12 23:48:28.209070 systemd[1]: Started serial-getty@ttyAMA0.service - Serial Getty on ttyAMA0.
Sep 12 23:48:28.210331 systemd[1]: Reached target getty.target - Login Prompts.
Sep 12 23:48:28.368552 systemd-networkd[1431]: eth0: Gained IPv6LL
Sep 12 23:48:28.374795 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured.
Sep 12 23:48:28.376337 systemd[1]: Reached target network-online.target - Network is Online.
Sep 12 23:48:28.378487 systemd[1]: Starting coreos-metadata.service - QEMU metadata agent...
Sep 12 23:48:28.380603 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Sep 12 23:48:28.382478 systemd[1]: Starting nvidia.service - NVIDIA Configure Service...
Sep 12 23:48:28.413417 systemd[1]: Finished nvidia.service - NVIDIA Configure Service.
Sep 12 23:48:28.415272 systemd[1]: coreos-metadata.service: Deactivated successfully.
Sep 12 23:48:28.417240 systemd[1]: Finished coreos-metadata.service - QEMU metadata agent.
Sep 12 23:48:28.419043 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met.
Sep 12 23:48:28.947719 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Sep 12 23:48:28.949189 systemd[1]: Reached target multi-user.target - Multi-User System.
Sep 12 23:48:28.951566 (kubelet)[1632]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Sep 12 23:48:28.953535 systemd[1]: Startup finished in 2.073s (kernel) + 5.062s (initrd) + 3.338s (userspace) = 10.473s.
Sep 12 23:48:29.298866 kubelet[1632]: E0912 23:48:29.298754 1632 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Sep 12 23:48:29.300980 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Sep 12 23:48:29.301112 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Sep 12 23:48:29.301449 systemd[1]: kubelet.service: Consumed 761ms CPU time, 256.3M memory peak.
Sep 12 23:48:33.108658 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd.
Sep 12 23:48:33.109876 systemd[1]: Started sshd@0-10.0.0.100:22-10.0.0.1:34298.service - OpenSSH per-connection server daemon (10.0.0.1:34298).
Sep 12 23:48:33.204230 sshd[1645]: Accepted publickey for core from 10.0.0.1 port 34298 ssh2: RSA SHA256:U495jLcrOdK3hoPgih3/zUS8L+hgQo+VhebSoZqpcKw
Sep 12 23:48:33.205679 sshd-session[1645]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 12 23:48:33.218070 systemd[1]: Created slice user-500.slice - User Slice of UID 500.
Sep 12 23:48:33.219933 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500...
Sep 12 23:48:33.223203 systemd-logind[1503]: New session 1 of user core.
Sep 12 23:48:33.240407 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500.
Sep 12 23:48:33.245139 systemd[1]: Starting user@500.service - User Manager for UID 500...
Sep 12 23:48:33.275211 (systemd)[1649]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0)
Sep 12 23:48:33.277535 systemd-logind[1503]: New session c1 of user core.
Sep 12 23:48:33.394780 systemd[1649]: Queued start job for default target default.target.
Sep 12 23:48:33.411104 systemd[1649]: Created slice app.slice - User Application Slice.
Sep 12 23:48:33.411134 systemd[1649]: Reached target paths.target - Paths.
Sep 12 23:48:33.411193 systemd[1649]: Reached target timers.target - Timers.
Sep 12 23:48:33.412372 systemd[1649]: Starting dbus.socket - D-Bus User Message Bus Socket...
Sep 12 23:48:33.421658 systemd[1649]: Listening on dbus.socket - D-Bus User Message Bus Socket.
Sep 12 23:48:33.421718 systemd[1649]: Reached target sockets.target - Sockets.
Sep 12 23:48:33.421752 systemd[1649]: Reached target basic.target - Basic System.
Sep 12 23:48:33.421783 systemd[1649]: Reached target default.target - Main User Target.
Sep 12 23:48:33.421816 systemd[1649]: Startup finished in 138ms.
Sep 12 23:48:33.421990 systemd[1]: Started user@500.service - User Manager for UID 500.
Sep 12 23:48:33.437397 systemd[1]: Started session-1.scope - Session 1 of User core.
Sep 12 23:48:33.509099 systemd[1]: Started sshd@1-10.0.0.100:22-10.0.0.1:34308.service - OpenSSH per-connection server daemon (10.0.0.1:34308).
Sep 12 23:48:33.566883 sshd[1660]: Accepted publickey for core from 10.0.0.1 port 34308 ssh2: RSA SHA256:U495jLcrOdK3hoPgih3/zUS8L+hgQo+VhebSoZqpcKw
Sep 12 23:48:33.568106 sshd-session[1660]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 12 23:48:33.572626 systemd-logind[1503]: New session 2 of user core.
Sep 12 23:48:33.583322 systemd[1]: Started session-2.scope - Session 2 of User core.
Sep 12 23:48:33.635886 sshd[1662]: Connection closed by 10.0.0.1 port 34308
Sep 12 23:48:33.636337 sshd-session[1660]: pam_unix(sshd:session): session closed for user core
Sep 12 23:48:33.650098 systemd[1]: sshd@1-10.0.0.100:22-10.0.0.1:34308.service: Deactivated successfully.
Sep 12 23:48:33.651852 systemd[1]: session-2.scope: Deactivated successfully.
Sep 12 23:48:33.653621 systemd-logind[1503]: Session 2 logged out. Waiting for processes to exit.
Sep 12 23:48:33.656498 systemd[1]: Started sshd@2-10.0.0.100:22-10.0.0.1:34312.service - OpenSSH per-connection server daemon (10.0.0.1:34312).
Sep 12 23:48:33.657090 systemd-logind[1503]: Removed session 2.
Sep 12 23:48:33.709716 sshd[1668]: Accepted publickey for core from 10.0.0.1 port 34312 ssh2: RSA SHA256:U495jLcrOdK3hoPgih3/zUS8L+hgQo+VhebSoZqpcKw
Sep 12 23:48:33.710966 sshd-session[1668]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 12 23:48:33.715316 systemd-logind[1503]: New session 3 of user core.
Sep 12 23:48:33.723364 systemd[1]: Started session-3.scope - Session 3 of User core.
Sep 12 23:48:33.770811 sshd[1670]: Connection closed by 10.0.0.1 port 34312
Sep 12 23:48:33.771113 sshd-session[1668]: pam_unix(sshd:session): session closed for user core
Sep 12 23:48:33.786110 systemd[1]: sshd@2-10.0.0.100:22-10.0.0.1:34312.service: Deactivated successfully.
Sep 12 23:48:33.788642 systemd[1]: session-3.scope: Deactivated successfully.
Sep 12 23:48:33.791230 systemd-logind[1503]: Session 3 logged out. Waiting for processes to exit.
Sep 12 23:48:33.792778 systemd[1]: Started sshd@3-10.0.0.100:22-10.0.0.1:34316.service - OpenSSH per-connection server daemon (10.0.0.1:34316).
Sep 12 23:48:33.793629 systemd-logind[1503]: Removed session 3.
Sep 12 23:48:33.842097 sshd[1676]: Accepted publickey for core from 10.0.0.1 port 34316 ssh2: RSA SHA256:U495jLcrOdK3hoPgih3/zUS8L+hgQo+VhebSoZqpcKw
Sep 12 23:48:33.843487 sshd-session[1676]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 12 23:48:33.848383 systemd-logind[1503]: New session 4 of user core.
Sep 12 23:48:33.862387 systemd[1]: Started session-4.scope - Session 4 of User core.
Sep 12 23:48:33.913598 sshd[1678]: Connection closed by 10.0.0.1 port 34316
Sep 12 23:48:33.913882 sshd-session[1676]: pam_unix(sshd:session): session closed for user core
Sep 12 23:48:33.934291 systemd[1]: sshd@3-10.0.0.100:22-10.0.0.1:34316.service: Deactivated successfully.
Sep 12 23:48:33.935643 systemd[1]: session-4.scope: Deactivated successfully.
Sep 12 23:48:33.936304 systemd-logind[1503]: Session 4 logged out. Waiting for processes to exit.
Sep 12 23:48:33.938182 systemd[1]: Started sshd@4-10.0.0.100:22-10.0.0.1:34328.service - OpenSSH per-connection server daemon (10.0.0.1:34328).
Sep 12 23:48:33.940783 systemd-logind[1503]: Removed session 4.
Sep 12 23:48:33.989420 sshd[1684]: Accepted publickey for core from 10.0.0.1 port 34328 ssh2: RSA SHA256:U495jLcrOdK3hoPgih3/zUS8L+hgQo+VhebSoZqpcKw
Sep 12 23:48:33.990625 sshd-session[1684]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 12 23:48:33.994960 systemd-logind[1503]: New session 5 of user core.
Sep 12 23:48:34.006336 systemd[1]: Started session-5.scope - Session 5 of User core.
Sep 12 23:48:34.064275 sudo[1687]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1
Sep 12 23:48:34.064527 sudo[1687]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Sep 12 23:48:34.078874 sudo[1687]: pam_unix(sudo:session): session closed for user root
Sep 12 23:48:34.080325 sshd[1686]: Connection closed by 10.0.0.1 port 34328
Sep 12 23:48:34.080856 sshd-session[1684]: pam_unix(sshd:session): session closed for user core
Sep 12 23:48:34.098354 systemd[1]: sshd@4-10.0.0.100:22-10.0.0.1:34328.service: Deactivated successfully.
Sep 12 23:48:34.099763 systemd[1]: session-5.scope: Deactivated successfully.
Sep 12 23:48:34.101735 systemd-logind[1503]: Session 5 logged out. Waiting for processes to exit.
Sep 12 23:48:34.103856 systemd[1]: Started sshd@5-10.0.0.100:22-10.0.0.1:34344.service - OpenSSH per-connection server daemon (10.0.0.1:34344).
Sep 12 23:48:34.104975 systemd-logind[1503]: Removed session 5.
Sep 12 23:48:34.159225 sshd[1693]: Accepted publickey for core from 10.0.0.1 port 34344 ssh2: RSA SHA256:U495jLcrOdK3hoPgih3/zUS8L+hgQo+VhebSoZqpcKw
Sep 12 23:48:34.160461 sshd-session[1693]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 12 23:48:34.164915 systemd-logind[1503]: New session 6 of user core.
Sep 12 23:48:34.172383 systemd[1]: Started session-6.scope - Session 6 of User core.
Sep 12 23:48:34.225222 sudo[1697]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules
Sep 12 23:48:34.225804 sudo[1697]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Sep 12 23:48:34.230585 sudo[1697]: pam_unix(sudo:session): session closed for user root
Sep 12 23:48:34.235413 sudo[1696]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules
Sep 12 23:48:34.235673 sudo[1696]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Sep 12 23:48:34.243799 systemd[1]: Starting audit-rules.service - Load Audit Rules...
Sep 12 23:48:34.281012 augenrules[1719]: No rules
Sep 12 23:48:34.282130 systemd[1]: audit-rules.service: Deactivated successfully.
Sep 12 23:48:34.283282 systemd[1]: Finished audit-rules.service - Load Audit Rules.
Sep 12 23:48:34.285393 sudo[1696]: pam_unix(sudo:session): session closed for user root
Sep 12 23:48:34.287208 sshd[1695]: Connection closed by 10.0.0.1 port 34344
Sep 12 23:48:34.287368 sshd-session[1693]: pam_unix(sshd:session): session closed for user core
Sep 12 23:48:34.298032 systemd[1]: sshd@5-10.0.0.100:22-10.0.0.1:34344.service: Deactivated successfully.
Sep 12 23:48:34.301371 systemd[1]: session-6.scope: Deactivated successfully.
Sep 12 23:48:34.301999 systemd-logind[1503]: Session 6 logged out. Waiting for processes to exit.
Sep 12 23:48:34.304006 systemd[1]: Started sshd@6-10.0.0.100:22-10.0.0.1:34346.service - OpenSSH per-connection server daemon (10.0.0.1:34346).
Sep 12 23:48:34.304485 systemd-logind[1503]: Removed session 6.
Sep 12 23:48:34.352625 sshd[1728]: Accepted publickey for core from 10.0.0.1 port 34346 ssh2: RSA SHA256:U495jLcrOdK3hoPgih3/zUS8L+hgQo+VhebSoZqpcKw
Sep 12 23:48:34.353822 sshd-session[1728]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 12 23:48:34.357749 systemd-logind[1503]: New session 7 of user core.
Sep 12 23:48:34.368408 systemd[1]: Started session-7.scope - Session 7 of User core.
Sep 12 23:48:34.417888 sudo[1731]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh
Sep 12 23:48:34.418620 sudo[1731]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Sep 12 23:48:34.717141 systemd[1]: Starting docker.service - Docker Application Container Engine...
Sep 12 23:48:34.733507 (dockerd)[1752]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU
Sep 12 23:48:34.947795 dockerd[1752]: time="2025-09-12T23:48:34.947723105Z" level=info msg="Starting up"
Sep 12 23:48:34.949106 dockerd[1752]: time="2025-09-12T23:48:34.949084191Z" level=info msg="OTEL tracing is not configured, using no-op tracer provider"
Sep 12 23:48:34.986475 dockerd[1752]: time="2025-09-12T23:48:34.986379067Z" level=info msg="Loading containers: start."
Sep 12 23:48:34.994192 kernel: Initializing XFRM netlink socket
Sep 12 23:48:35.184581 systemd-networkd[1431]: docker0: Link UP
Sep 12 23:48:35.187638 dockerd[1752]: time="2025-09-12T23:48:35.187594318Z" level=info msg="Loading containers: done."
Sep 12 23:48:35.200889 dockerd[1752]: time="2025-09-12T23:48:35.200832516Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2
Sep 12 23:48:35.201031 dockerd[1752]: time="2025-09-12T23:48:35.200959457Z" level=info msg="Docker daemon" commit=bbd0a17ccc67e48d4a69393287b7fcc4f0578683 containerd-snapshotter=false storage-driver=overlay2 version=28.0.1
Sep 12 23:48:35.201106 dockerd[1752]: time="2025-09-12T23:48:35.201073471Z" level=info msg="Initializing buildkit"
Sep 12 23:48:35.223703 dockerd[1752]: time="2025-09-12T23:48:35.223628626Z" level=info msg="Completed buildkit initialization"
Sep 12 23:48:35.228387 dockerd[1752]: time="2025-09-12T23:48:35.228347524Z" level=info msg="Daemon has completed initialization"
Sep 12 23:48:35.228525 dockerd[1752]: time="2025-09-12T23:48:35.228484582Z" level=info msg="API listen on /run/docker.sock"
Sep 12 23:48:35.228601 systemd[1]: Started docker.service - Docker Application Container Engine.
Sep 12 23:48:35.773368 containerd[1532]: time="2025-09-12T23:48:35.773327248Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.31.13\""
Sep 12 23:48:36.320833 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2185182384.mount: Deactivated successfully.
Sep 12 23:48:37.264557 containerd[1532]: time="2025-09-12T23:48:37.264497028Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.31.13\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 12 23:48:37.265902 containerd[1532]: time="2025-09-12T23:48:37.265617827Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.31.13: active requests=0, bytes read=25687327"
Sep 12 23:48:37.266710 containerd[1532]: time="2025-09-12T23:48:37.266678820Z" level=info msg="ImageCreate event name:\"sha256:0b1c07d8fd4a3526d5c44502e682df3627a3b01c1e608e5e24c3519c8fb337b6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 12 23:48:37.269326 containerd[1532]: time="2025-09-12T23:48:37.269293376Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:9abeb8a2d3e53e356d1f2e5d5dc2081cf28f23242651b0552c9e38f4a7ae960e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 12 23:48:37.270456 containerd[1532]: time="2025-09-12T23:48:37.270274146Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.31.13\" with image id \"sha256:0b1c07d8fd4a3526d5c44502e682df3627a3b01c1e608e5e24c3519c8fb337b6\", repo tag \"registry.k8s.io/kube-apiserver:v1.31.13\", repo digest \"registry.k8s.io/kube-apiserver@sha256:9abeb8a2d3e53e356d1f2e5d5dc2081cf28f23242651b0552c9e38f4a7ae960e\", size \"25683924\" in 1.496904591s"
Sep 12 23:48:37.270456 containerd[1532]: time="2025-09-12T23:48:37.270307358Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.31.13\" returns image reference \"sha256:0b1c07d8fd4a3526d5c44502e682df3627a3b01c1e608e5e24c3519c8fb337b6\""
Sep 12 23:48:37.271527 containerd[1532]: time="2025-09-12T23:48:37.271502765Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.31.13\""
Sep 12 23:48:38.452595 containerd[1532]: time="2025-09-12T23:48:38.451706608Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.31.13\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 12 23:48:38.452595 containerd[1532]: time="2025-09-12T23:48:38.452516140Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.31.13: active requests=0, bytes read=22459769"
Sep 12 23:48:38.453199 containerd[1532]: time="2025-09-12T23:48:38.453170654Z" level=info msg="ImageCreate event name:\"sha256:c359cb88f3d2147f2cb4c5ada4fbdeadc4b1c009d66c8f33f3856efaf04ee6ef\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 12 23:48:38.456003 containerd[1532]: time="2025-09-12T23:48:38.455971517Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:facc91288697a288a691520949fe4eec40059ef065c89da8e10481d14e131b09\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 12 23:48:38.457240 containerd[1532]: time="2025-09-12T23:48:38.457212500Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.31.13\" with image id \"sha256:c359cb88f3d2147f2cb4c5ada4fbdeadc4b1c009d66c8f33f3856efaf04ee6ef\", repo tag \"registry.k8s.io/kube-controller-manager:v1.31.13\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:facc91288697a288a691520949fe4eec40059ef065c89da8e10481d14e131b09\", size \"24028542\" in 1.185679493s"
Sep 12 23:48:38.457294 containerd[1532]: time="2025-09-12T23:48:38.457246863Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.31.13\" returns image reference \"sha256:c359cb88f3d2147f2cb4c5ada4fbdeadc4b1c009d66c8f33f3856efaf04ee6ef\""
Sep 12 23:48:38.457767 containerd[1532]: time="2025-09-12T23:48:38.457695957Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.31.13\""
Sep 12 23:48:39.488691 containerd[1532]: time="2025-09-12T23:48:39.488608439Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.31.13\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 12 23:48:39.490091 containerd[1532]: time="2025-09-12T23:48:39.489856540Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.31.13: active requests=0, bytes read=17127508"
Sep 12 23:48:39.490834 containerd[1532]: time="2025-09-12T23:48:39.490799310Z" level=info msg="ImageCreate event name:\"sha256:5e3cbe2ba7db787c6aebfcf4484156dd4ebd7ede811ef72e8929593e59a5fa27\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 12 23:48:39.493504 containerd[1532]: time="2025-09-12T23:48:39.493478021Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:c5ce150dcce2419fdef9f9875fef43014355ccebf937846ed3a2971953f9b241\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 12 23:48:39.494589 containerd[1532]: time="2025-09-12T23:48:39.494549225Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.31.13\" with image id \"sha256:5e3cbe2ba7db787c6aebfcf4484156dd4ebd7ede811ef72e8929593e59a5fa27\", repo tag \"registry.k8s.io/kube-scheduler:v1.31.13\", repo digest \"registry.k8s.io/kube-scheduler@sha256:c5ce150dcce2419fdef9f9875fef43014355ccebf937846ed3a2971953f9b241\", size \"18696299\" in 1.03679994s"
Sep 12 23:48:39.494589 containerd[1532]: time="2025-09-12T23:48:39.494588669Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.31.13\" returns image reference \"sha256:5e3cbe2ba7db787c6aebfcf4484156dd4ebd7ede811ef72e8929593e59a5fa27\""
Sep 12 23:48:39.495233 containerd[1532]: time="2025-09-12T23:48:39.495211557Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.31.13\""
Sep 12 23:48:39.551458 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1.
Sep 12 23:48:39.552850 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Sep 12 23:48:39.706610 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Sep 12 23:48:39.722489 (kubelet)[2038]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Sep 12 23:48:39.765102 kubelet[2038]: E0912 23:48:39.764985 2038 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Sep 12 23:48:39.768210 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Sep 12 23:48:39.768334 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Sep 12 23:48:39.768808 systemd[1]: kubelet.service: Consumed 147ms CPU time, 108.7M memory peak.
Sep 12 23:48:40.567146 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount423657595.mount: Deactivated successfully.
Sep 12 23:48:40.970468 containerd[1532]: time="2025-09-12T23:48:40.969764864Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.31.13\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 12 23:48:40.970468 containerd[1532]: time="2025-09-12T23:48:40.970240271Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.31.13: active requests=0, bytes read=26954909"
Sep 12 23:48:40.971388 containerd[1532]: time="2025-09-12T23:48:40.971354951Z" level=info msg="ImageCreate event name:\"sha256:c15699f0b7002450249485b10f20211982dfd2bec4d61c86c35acebc659e794e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 12 23:48:40.972951 containerd[1532]: time="2025-09-12T23:48:40.972902960Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:a39637326e88d128d38da6ff2b2ceb4e856475887bfcb5f7a55734d4f63d9fae\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 12 23:48:40.973553 containerd[1532]: time="2025-09-12T23:48:40.973446414Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.31.13\" with image id \"sha256:c15699f0b7002450249485b10f20211982dfd2bec4d61c86c35acebc659e794e\", repo tag \"registry.k8s.io/kube-proxy:v1.31.13\", repo digest \"registry.k8s.io/kube-proxy@sha256:a39637326e88d128d38da6ff2b2ceb4e856475887bfcb5f7a55734d4f63d9fae\", size \"26953926\" in 1.478200547s"
Sep 12 23:48:40.973553 containerd[1532]: time="2025-09-12T23:48:40.973483122Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.31.13\" returns image reference \"sha256:c15699f0b7002450249485b10f20211982dfd2bec4d61c86c35acebc659e794e\""
Sep 12 23:48:40.974013 containerd[1532]: time="2025-09-12T23:48:40.973949192Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\""
Sep 12 23:48:41.503922 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2425517315.mount: Deactivated successfully.
Sep 12 23:48:42.144754 containerd[1532]: time="2025-09-12T23:48:42.144699892Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 12 23:48:42.145542 containerd[1532]: time="2025-09-12T23:48:42.145505083Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.3: active requests=0, bytes read=16951624"
Sep 12 23:48:42.146839 containerd[1532]: time="2025-09-12T23:48:42.146411779Z" level=info msg="ImageCreate event name:\"sha256:2f6c962e7b8311337352d9fdea917da2184d9919f4da7695bc2a6517cf392fe4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 12 23:48:42.149452 containerd[1532]: time="2025-09-12T23:48:42.149417915Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 12 23:48:42.150612 containerd[1532]: time="2025-09-12T23:48:42.150579215Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.3\" with image id \"sha256:2f6c962e7b8311337352d9fdea917da2184d9919f4da7695bc2a6517cf392fe4\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.3\", repo digest \"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\", size \"16948420\" in 1.176565587s"
Sep 12 23:48:42.150726 containerd[1532]: time="2025-09-12T23:48:42.150709161Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\" returns image reference \"sha256:2f6c962e7b8311337352d9fdea917da2184d9919f4da7695bc2a6517cf392fe4\""
Sep 12 23:48:42.151268 containerd[1532]: time="2025-09-12T23:48:42.151247290Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\""
Sep 12 23:48:42.647685 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1223228524.mount: Deactivated successfully.
Sep 12 23:48:42.652835 containerd[1532]: time="2025-09-12T23:48:42.652795771Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Sep 12 23:48:42.653245 containerd[1532]: time="2025-09-12T23:48:42.653217654Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=268705"
Sep 12 23:48:42.654193 containerd[1532]: time="2025-09-12T23:48:42.654133403Z" level=info msg="ImageCreate event name:\"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Sep 12 23:48:42.656218 containerd[1532]: time="2025-09-12T23:48:42.656180689Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Sep 12 23:48:42.656830 containerd[1532]: time="2025-09-12T23:48:42.656796249Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"267933\" in 505.505818ms"
Sep 12 23:48:42.656863 containerd[1532]: time="2025-09-12T23:48:42.656830298Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\""
Sep 12 23:48:42.657331 containerd[1532]: time="2025-09-12T23:48:42.657288673Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.15-0\""
Sep 12 23:48:43.065294 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2674327790.mount: Deactivated successfully.
Sep 12 23:48:44.702415 containerd[1532]: time="2025-09-12T23:48:44.702248597Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.15-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 12 23:48:44.706208 containerd[1532]: time="2025-09-12T23:48:44.705572636Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.15-0: active requests=0, bytes read=66537163"
Sep 12 23:48:44.708384 containerd[1532]: time="2025-09-12T23:48:44.708333939Z" level=info msg="ImageCreate event name:\"sha256:27e3830e1402783674d8b594038967deea9d51f0d91b34c93c8f39d2f68af7da\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 12 23:48:44.712946 containerd[1532]: time="2025-09-12T23:48:44.712891809Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 12 23:48:44.713761 containerd[1532]: time="2025-09-12T23:48:44.713616202Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.15-0\" with image id \"sha256:27e3830e1402783674d8b594038967deea9d51f0d91b34c93c8f39d2f68af7da\", repo tag \"registry.k8s.io/etcd:3.5.15-0\", repo digest \"registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a\", size \"66535646\" in 2.056196185s"
Sep 12 23:48:44.713761 containerd[1532]: time="2025-09-12T23:48:44.713663854Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.15-0\" returns image reference \"sha256:27e3830e1402783674d8b594038967deea9d51f0d91b34c93c8f39d2f68af7da\""
Sep 12 23:48:48.982630 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Sep 12 23:48:48.982769 systemd[1]: kubelet.service: Consumed 147ms CPU time, 108.7M memory peak.
Sep 12 23:48:48.984569 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Sep 12 23:48:49.001989 systemd[1]: Reload requested from client PID 2192 ('systemctl') (unit session-7.scope)...
Sep 12 23:48:49.002002 systemd[1]: Reloading...
Sep 12 23:48:49.058189 zram_generator::config[2235]: No configuration found.
Sep 12 23:48:49.145850 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Sep 12 23:48:49.236518 systemd[1]: Reloading finished in 234 ms.
Sep 12 23:48:49.274608 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM
Sep 12 23:48:49.274670 systemd[1]: kubelet.service: Failed with result 'signal'.
Sep 12 23:48:49.274893 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Sep 12 23:48:49.277430 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Sep 12 23:48:49.414498 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Sep 12 23:48:49.417998 (kubelet)[2279]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS
Sep 12 23:48:49.450010 kubelet[2279]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Sep 12 23:48:49.450010 kubelet[2279]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI.
Sep 12 23:48:49.450010 kubelet[2279]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Sep 12 23:48:49.450375 kubelet[2279]: I0912 23:48:49.450050 2279 server.go:211] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
Sep 12 23:48:50.208670 kubelet[2279]: I0912 23:48:50.208624 2279 server.go:491] "Kubelet version" kubeletVersion="v1.31.8"
Sep 12 23:48:50.208670 kubelet[2279]: I0912 23:48:50.208658 2279 server.go:493] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Sep 12 23:48:50.208919 kubelet[2279]: I0912 23:48:50.208887 2279 server.go:934] "Client rotation is on, will bootstrap in background"
Sep 12 23:48:50.229311 kubelet[2279]: E0912 23:48:50.229272 2279 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://10.0.0.100:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.100:6443: connect: connection refused" logger="UnhandledError"
Sep 12 23:48:50.229623 kubelet[2279]: I0912 23:48:50.229602 2279 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Sep 12 23:48:50.237890 kubelet[2279]: I0912 23:48:50.237841 2279 server.go:1431] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd"
Sep 12 23:48:50.241370 kubelet[2279]: I0912 23:48:50.241347 2279 server.go:749] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /"
Sep 12 23:48:50.242180 kubelet[2279]: I0912 23:48:50.242147 2279 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority"
Sep 12 23:48:50.242338 kubelet[2279]: I0912 23:48:50.242309 2279 container_manager_linux.go:264] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Sep 12 23:48:50.242511 kubelet[2279]: I0912 23:48:50.242340 2279 container_manager_linux.go:269] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2}
Sep 12 23:48:50.242606 kubelet[2279]: I0912 23:48:50.242589 2279 topology_manager.go:138] "Creating topology manager with none policy"
Sep 12 23:48:50.242606 kubelet[2279]: I0912 23:48:50.242598 2279 container_manager_linux.go:300] "Creating device plugin manager"
Sep 12 23:48:50.242903 kubelet[2279]: I0912 23:48:50.242877 2279 state_mem.go:36] "Initialized new in-memory state store"
Sep 12 23:48:50.244845 kubelet[2279]: I0912 23:48:50.244817 2279 kubelet.go:408] "Attempting to sync node with API server"
Sep 12 23:48:50.244882 kubelet[2279]: I0912 23:48:50.244847 2279 kubelet.go:303] "Adding static pod path" path="/etc/kubernetes/manifests"
Sep 12 23:48:50.244882 kubelet[2279]: I0912 23:48:50.244867 2279 kubelet.go:314] "Adding apiserver pod source"
Sep 12 23:48:50.244951 kubelet[2279]: I0912 23:48:50.244941 2279 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Sep 12 23:48:50.249214 kubelet[2279]: W0912 23:48:50.248774 2279 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.0.0.100:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.0.0.100:6443: connect: connection refused
Sep 12 23:48:50.249214 kubelet[2279]: E0912 23:48:50.248844 2279 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.0.0.100:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.100:6443: connect: connection refused" logger="UnhandledError"
Sep 12 23:48:50.249214 kubelet[2279]: W0912 23:48:50.248838 2279 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.0.0.100:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.100:6443: connect: connection refused
Sep 12 23:48:50.249214 kubelet[2279]: E0912 23:48:50.248888 2279 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.0.0.100:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.100:6443: connect: connection refused" logger="UnhandledError"
Sep 12 23:48:50.250058 kubelet[2279]: I0912 23:48:50.250042 2279 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="containerd" version="v2.0.4" apiVersion="v1"
Sep 12 23:48:50.250889 kubelet[2279]: I0912 23:48:50.250859 2279 kubelet.go:837] "Not starting ClusterTrustBundle informer because we are in static kubelet mode"
Sep 12 23:48:50.251096 kubelet[2279]: W0912 23:48:50.251084 2279 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating.
Sep 12 23:48:50.252911 kubelet[2279]: I0912 23:48:50.252309 2279 server.go:1274] "Started kubelet" Sep 12 23:48:50.253229 kubelet[2279]: I0912 23:48:50.253196 2279 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Sep 12 23:48:50.253723 kubelet[2279]: I0912 23:48:50.253672 2279 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Sep 12 23:48:50.254075 kubelet[2279]: I0912 23:48:50.254055 2279 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Sep 12 23:48:50.254244 kubelet[2279]: I0912 23:48:50.254222 2279 server.go:449] "Adding debug handlers to kubelet server" Sep 12 23:48:50.254244 kubelet[2279]: I0912 23:48:50.254239 2279 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Sep 12 23:48:50.256689 kubelet[2279]: I0912 23:48:50.256649 2279 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Sep 12 23:48:50.258389 kubelet[2279]: E0912 23:48:50.257413 2279 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.100:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.100:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.1864addf8df4d2f2 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2025-09-12 23:48:50.252280562 +0000 UTC m=+0.831495304,LastTimestamp:2025-09-12 23:48:50.252280562 +0000 UTC m=+0.831495304,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Sep 12 23:48:50.258389 kubelet[2279]: I0912 23:48:50.258382 2279 
volume_manager.go:289] "Starting Kubelet Volume Manager" Sep 12 23:48:50.258572 kubelet[2279]: E0912 23:48:50.258553 2279 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 12 23:48:50.259378 kubelet[2279]: W0912 23:48:50.259285 2279 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.0.0.100:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.100:6443: connect: connection refused Sep 12 23:48:50.259378 kubelet[2279]: I0912 23:48:50.259338 2279 desired_state_of_world_populator.go:147] "Desired state populator starts to run" Sep 12 23:48:50.259476 kubelet[2279]: I0912 23:48:50.259393 2279 reconciler.go:26] "Reconciler: start to sync state" Sep 12 23:48:50.259476 kubelet[2279]: E0912 23:48:50.259336 2279 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.0.0.100:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.100:6443: connect: connection refused" logger="UnhandledError" Sep 12 23:48:50.259514 kubelet[2279]: E0912 23:48:50.259470 2279 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.100:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.100:6443: connect: connection refused" interval="200ms" Sep 12 23:48:50.260015 kubelet[2279]: I0912 23:48:50.259689 2279 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Sep 12 23:48:50.260870 kubelet[2279]: I0912 23:48:50.260834 2279 factory.go:221] Registration of the containerd container factory successfully Sep 12 23:48:50.260870 kubelet[2279]: I0912 23:48:50.260855 2279 factory.go:221] Registration of the 
systemd container factory successfully Sep 12 23:48:50.260958 kubelet[2279]: E0912 23:48:50.260878 2279 kubelet.go:1478] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Sep 12 23:48:50.270498 kubelet[2279]: I0912 23:48:50.270474 2279 cpu_manager.go:214] "Starting CPU manager" policy="none" Sep 12 23:48:50.270498 kubelet[2279]: I0912 23:48:50.270491 2279 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Sep 12 23:48:50.271309 kubelet[2279]: I0912 23:48:50.270520 2279 state_mem.go:36] "Initialized new in-memory state store" Sep 12 23:48:50.272127 kubelet[2279]: I0912 23:48:50.272085 2279 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Sep 12 23:48:50.273103 kubelet[2279]: I0912 23:48:50.273073 2279 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Sep 12 23:48:50.273103 kubelet[2279]: I0912 23:48:50.273098 2279 status_manager.go:217] "Starting to sync pod status with apiserver" Sep 12 23:48:50.273206 kubelet[2279]: I0912 23:48:50.273116 2279 kubelet.go:2321] "Starting kubelet main sync loop" Sep 12 23:48:50.273206 kubelet[2279]: E0912 23:48:50.273155 2279 kubelet.go:2345] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Sep 12 23:48:50.274111 kubelet[2279]: W0912 23:48:50.274003 2279 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.0.0.100:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.100:6443: connect: connection refused Sep 12 23:48:50.274111 kubelet[2279]: E0912 23:48:50.274065 2279 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.0.0.100:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": 
dial tcp 10.0.0.100:6443: connect: connection refused" logger="UnhandledError" Sep 12 23:48:50.339636 kubelet[2279]: I0912 23:48:50.339592 2279 policy_none.go:49] "None policy: Start" Sep 12 23:48:50.340407 kubelet[2279]: I0912 23:48:50.340389 2279 memory_manager.go:170] "Starting memorymanager" policy="None" Sep 12 23:48:50.340465 kubelet[2279]: I0912 23:48:50.340417 2279 state_mem.go:35] "Initializing new in-memory state store" Sep 12 23:48:50.352522 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Sep 12 23:48:50.359083 kubelet[2279]: E0912 23:48:50.359056 2279 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 12 23:48:50.365900 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. Sep 12 23:48:50.373729 kubelet[2279]: E0912 23:48:50.373686 2279 kubelet.go:2345] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Sep 12 23:48:50.375774 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. 
Sep 12 23:48:50.377294 kubelet[2279]: I0912 23:48:50.377217 2279 manager.go:513] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Sep 12 23:48:50.377431 kubelet[2279]: I0912 23:48:50.377420 2279 eviction_manager.go:189] "Eviction manager: starting control loop" Sep 12 23:48:50.377488 kubelet[2279]: I0912 23:48:50.377432 2279 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Sep 12 23:48:50.378079 kubelet[2279]: I0912 23:48:50.377668 2279 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Sep 12 23:48:50.378888 kubelet[2279]: E0912 23:48:50.378868 2279 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" Sep 12 23:48:50.461412 kubelet[2279]: E0912 23:48:50.460628 2279 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.100:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.100:6443: connect: connection refused" interval="400ms" Sep 12 23:48:50.479970 kubelet[2279]: I0912 23:48:50.479921 2279 kubelet_node_status.go:72] "Attempting to register node" node="localhost" Sep 12 23:48:50.480464 kubelet[2279]: E0912 23:48:50.480429 2279 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://10.0.0.100:6443/api/v1/nodes\": dial tcp 10.0.0.100:6443: connect: connection refused" node="localhost" Sep 12 23:48:50.581758 systemd[1]: Created slice kubepods-burstable-pod37a063ae90824551ef38d7f0ed67be14.slice - libcontainer container kubepods-burstable-pod37a063ae90824551ef38d7f0ed67be14.slice. Sep 12 23:48:50.609593 systemd[1]: Created slice kubepods-burstable-pod71d8bf7bd9b7c7432927bee9d50592b5.slice - libcontainer container kubepods-burstable-pod71d8bf7bd9b7c7432927bee9d50592b5.slice. 
Sep 12 23:48:50.630401 systemd[1]: Created slice kubepods-burstable-podfe5e332fba00ba0b5b33a25fe2e8fd7b.slice - libcontainer container kubepods-burstable-podfe5e332fba00ba0b5b33a25fe2e8fd7b.slice. Sep 12 23:48:50.662212 kubelet[2279]: I0912 23:48:50.662180 2279 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/fe5e332fba00ba0b5b33a25fe2e8fd7b-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"fe5e332fba00ba0b5b33a25fe2e8fd7b\") " pod="kube-system/kube-scheduler-localhost" Sep 12 23:48:50.662503 kubelet[2279]: I0912 23:48:50.662360 2279 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/37a063ae90824551ef38d7f0ed67be14-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"37a063ae90824551ef38d7f0ed67be14\") " pod="kube-system/kube-apiserver-localhost" Sep 12 23:48:50.662503 kubelet[2279]: I0912 23:48:50.662397 2279 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/37a063ae90824551ef38d7f0ed67be14-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"37a063ae90824551ef38d7f0ed67be14\") " pod="kube-system/kube-apiserver-localhost" Sep 12 23:48:50.662503 kubelet[2279]: I0912 23:48:50.662417 2279 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/71d8bf7bd9b7c7432927bee9d50592b5-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"71d8bf7bd9b7c7432927bee9d50592b5\") " pod="kube-system/kube-controller-manager-localhost" Sep 12 23:48:50.662503 kubelet[2279]: I0912 23:48:50.662431 2279 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: 
\"kubernetes.io/host-path/71d8bf7bd9b7c7432927bee9d50592b5-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"71d8bf7bd9b7c7432927bee9d50592b5\") " pod="kube-system/kube-controller-manager-localhost" Sep 12 23:48:50.662503 kubelet[2279]: I0912 23:48:50.662446 2279 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/37a063ae90824551ef38d7f0ed67be14-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"37a063ae90824551ef38d7f0ed67be14\") " pod="kube-system/kube-apiserver-localhost" Sep 12 23:48:50.662620 kubelet[2279]: I0912 23:48:50.662487 2279 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/71d8bf7bd9b7c7432927bee9d50592b5-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"71d8bf7bd9b7c7432927bee9d50592b5\") " pod="kube-system/kube-controller-manager-localhost" Sep 12 23:48:50.662727 kubelet[2279]: I0912 23:48:50.662687 2279 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/71d8bf7bd9b7c7432927bee9d50592b5-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"71d8bf7bd9b7c7432927bee9d50592b5\") " pod="kube-system/kube-controller-manager-localhost" Sep 12 23:48:50.662727 kubelet[2279]: I0912 23:48:50.662710 2279 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/71d8bf7bd9b7c7432927bee9d50592b5-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"71d8bf7bd9b7c7432927bee9d50592b5\") " pod="kube-system/kube-controller-manager-localhost" Sep 12 23:48:50.682619 kubelet[2279]: I0912 23:48:50.682194 2279 kubelet_node_status.go:72] "Attempting to register node" node="localhost" Sep 12 23:48:50.682619 
kubelet[2279]: E0912 23:48:50.682529 2279 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://10.0.0.100:6443/api/v1/nodes\": dial tcp 10.0.0.100:6443: connect: connection refused" node="localhost" Sep 12 23:48:50.861512 kubelet[2279]: E0912 23:48:50.861375 2279 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.100:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.100:6443: connect: connection refused" interval="800ms" Sep 12 23:48:50.909515 containerd[1532]: time="2025-09-12T23:48:50.909466511Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:37a063ae90824551ef38d7f0ed67be14,Namespace:kube-system,Attempt:0,}" Sep 12 23:48:50.928155 containerd[1532]: time="2025-09-12T23:48:50.928106514Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:71d8bf7bd9b7c7432927bee9d50592b5,Namespace:kube-system,Attempt:0,}" Sep 12 23:48:50.932760 containerd[1532]: time="2025-09-12T23:48:50.932730947Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:fe5e332fba00ba0b5b33a25fe2e8fd7b,Namespace:kube-system,Attempt:0,}" Sep 12 23:48:51.084048 kubelet[2279]: I0912 23:48:51.084018 2279 kubelet_node_status.go:72] "Attempting to register node" node="localhost" Sep 12 23:48:51.084368 kubelet[2279]: E0912 23:48:51.084344 2279 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://10.0.0.100:6443/api/v1/nodes\": dial tcp 10.0.0.100:6443: connect: connection refused" node="localhost" Sep 12 23:48:51.149436 containerd[1532]: time="2025-09-12T23:48:51.149385112Z" level=info msg="connecting to shim a929ce6b5a50376db047991dd9864914413d4b779d456f6a9000f44a8217ad4c" address="unix:///run/containerd/s/9ead323bf09395972c7ef3f12fae8868b2550843139c64ed31fa4732acbdc506" namespace=k8s.io protocol=ttrpc version=3 Sep 12 
23:48:51.171339 systemd[1]: Started cri-containerd-a929ce6b5a50376db047991dd9864914413d4b779d456f6a9000f44a8217ad4c.scope - libcontainer container a929ce6b5a50376db047991dd9864914413d4b779d456f6a9000f44a8217ad4c. Sep 12 23:48:51.216747 kubelet[2279]: W0912 23:48:51.216686 2279 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.0.0.100:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.100:6443: connect: connection refused Sep 12 23:48:51.216864 kubelet[2279]: E0912 23:48:51.216749 2279 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.0.0.100:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.100:6443: connect: connection refused" logger="UnhandledError" Sep 12 23:48:51.226786 containerd[1532]: time="2025-09-12T23:48:51.226739988Z" level=info msg="connecting to shim 692585a9749b8adb22d229ffb4fe518c77287fc9e87c8c692042f802b9e7d3e1" address="unix:///run/containerd/s/10645a262fd4f6ccbb34ed96b76a2b3a98a4f351757a8f8d8a2062466c2860c0" namespace=k8s.io protocol=ttrpc version=3 Sep 12 23:48:51.247855 containerd[1532]: time="2025-09-12T23:48:51.247816054Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:37a063ae90824551ef38d7f0ed67be14,Namespace:kube-system,Attempt:0,} returns sandbox id \"a929ce6b5a50376db047991dd9864914413d4b779d456f6a9000f44a8217ad4c\"" Sep 12 23:48:51.250471 containerd[1532]: time="2025-09-12T23:48:51.250282115Z" level=info msg="CreateContainer within sandbox \"a929ce6b5a50376db047991dd9864914413d4b779d456f6a9000f44a8217ad4c\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Sep 12 23:48:51.250324 systemd[1]: Started cri-containerd-692585a9749b8adb22d229ffb4fe518c77287fc9e87c8c692042f802b9e7d3e1.scope - libcontainer container 
692585a9749b8adb22d229ffb4fe518c77287fc9e87c8c692042f802b9e7d3e1. Sep 12 23:48:51.261306 containerd[1532]: time="2025-09-12T23:48:51.261253715Z" level=info msg="connecting to shim 691cc41090a5eed6b3cc24dffdc76bbd67d631f4ecc2ffe5bcae3ea3e6c6ae77" address="unix:///run/containerd/s/904408c3245cb133e8e12b7c554c363316a5bab3eaeda9c6f71194e763190cd9" namespace=k8s.io protocol=ttrpc version=3 Sep 12 23:48:51.270912 containerd[1532]: time="2025-09-12T23:48:51.270867290Z" level=info msg="Container 8e699355b8e96ead7568bbea2056990bc3037a7308a68a1317516565f642e5da: CDI devices from CRI Config.CDIDevices: []" Sep 12 23:48:51.287463 systemd[1]: Started cri-containerd-691cc41090a5eed6b3cc24dffdc76bbd67d631f4ecc2ffe5bcae3ea3e6c6ae77.scope - libcontainer container 691cc41090a5eed6b3cc24dffdc76bbd67d631f4ecc2ffe5bcae3ea3e6c6ae77. Sep 12 23:48:51.296188 containerd[1532]: time="2025-09-12T23:48:51.294593016Z" level=info msg="CreateContainer within sandbox \"a929ce6b5a50376db047991dd9864914413d4b779d456f6a9000f44a8217ad4c\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"8e699355b8e96ead7568bbea2056990bc3037a7308a68a1317516565f642e5da\"" Sep 12 23:48:51.297143 containerd[1532]: time="2025-09-12T23:48:51.297101495Z" level=info msg="StartContainer for \"8e699355b8e96ead7568bbea2056990bc3037a7308a68a1317516565f642e5da\"" Sep 12 23:48:51.298479 containerd[1532]: time="2025-09-12T23:48:51.298445794Z" level=info msg="connecting to shim 8e699355b8e96ead7568bbea2056990bc3037a7308a68a1317516565f642e5da" address="unix:///run/containerd/s/9ead323bf09395972c7ef3f12fae8868b2550843139c64ed31fa4732acbdc506" protocol=ttrpc version=3 Sep 12 23:48:51.299587 containerd[1532]: time="2025-09-12T23:48:51.299545787Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:71d8bf7bd9b7c7432927bee9d50592b5,Namespace:kube-system,Attempt:0,} returns sandbox id \"692585a9749b8adb22d229ffb4fe518c77287fc9e87c8c692042f802b9e7d3e1\"" Sep 12 
23:48:51.302465 containerd[1532]: time="2025-09-12T23:48:51.302412380Z" level=info msg="CreateContainer within sandbox \"692585a9749b8adb22d229ffb4fe518c77287fc9e87c8c692042f802b9e7d3e1\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Sep 12 23:48:51.310892 kubelet[2279]: W0912 23:48:51.310826 2279 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.0.0.100:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.0.0.100:6443: connect: connection refused Sep 12 23:48:51.311018 kubelet[2279]: E0912 23:48:51.310899 2279 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.0.0.100:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.100:6443: connect: connection refused" logger="UnhandledError" Sep 12 23:48:51.319173 containerd[1532]: time="2025-09-12T23:48:51.319123729Z" level=info msg="Container 442634a624ff6beb0116dc6d509e2c80e42f6247dd93f268962b53708713f58d: CDI devices from CRI Config.CDIDevices: []" Sep 12 23:48:51.321407 systemd[1]: Started cri-containerd-8e699355b8e96ead7568bbea2056990bc3037a7308a68a1317516565f642e5da.scope - libcontainer container 8e699355b8e96ead7568bbea2056990bc3037a7308a68a1317516565f642e5da. 
Sep 12 23:48:51.333580 kubelet[2279]: W0912 23:48:51.333462 2279 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.0.0.100:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.100:6443: connect: connection refused Sep 12 23:48:51.333580 kubelet[2279]: E0912 23:48:51.333541 2279 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.0.0.100:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.100:6443: connect: connection refused" logger="UnhandledError" Sep 12 23:48:51.334154 containerd[1532]: time="2025-09-12T23:48:51.334045948Z" level=info msg="CreateContainer within sandbox \"692585a9749b8adb22d229ffb4fe518c77287fc9e87c8c692042f802b9e7d3e1\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"442634a624ff6beb0116dc6d509e2c80e42f6247dd93f268962b53708713f58d\"" Sep 12 23:48:51.334626 containerd[1532]: time="2025-09-12T23:48:51.334579697Z" level=info msg="StartContainer for \"442634a624ff6beb0116dc6d509e2c80e42f6247dd93f268962b53708713f58d\"" Sep 12 23:48:51.335901 containerd[1532]: time="2025-09-12T23:48:51.335875895Z" level=info msg="connecting to shim 442634a624ff6beb0116dc6d509e2c80e42f6247dd93f268962b53708713f58d" address="unix:///run/containerd/s/10645a262fd4f6ccbb34ed96b76a2b3a98a4f351757a8f8d8a2062466c2860c0" protocol=ttrpc version=3 Sep 12 23:48:51.337830 containerd[1532]: time="2025-09-12T23:48:51.337706402Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:fe5e332fba00ba0b5b33a25fe2e8fd7b,Namespace:kube-system,Attempt:0,} returns sandbox id \"691cc41090a5eed6b3cc24dffdc76bbd67d631f4ecc2ffe5bcae3ea3e6c6ae77\"" Sep 12 23:48:51.340289 containerd[1532]: time="2025-09-12T23:48:51.340251177Z" level=info msg="CreateContainer within sandbox 
\"691cc41090a5eed6b3cc24dffdc76bbd67d631f4ecc2ffe5bcae3ea3e6c6ae77\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Sep 12 23:48:51.360456 systemd[1]: Started cri-containerd-442634a624ff6beb0116dc6d509e2c80e42f6247dd93f268962b53708713f58d.scope - libcontainer container 442634a624ff6beb0116dc6d509e2c80e42f6247dd93f268962b53708713f58d. Sep 12 23:48:51.361486 containerd[1532]: time="2025-09-12T23:48:51.361435930Z" level=info msg="Container df2a0263330a48f15cf74b84841a8c02c6a381f2073f1fde55498e9fd2abc94f: CDI devices from CRI Config.CDIDevices: []" Sep 12 23:48:51.377893 containerd[1532]: time="2025-09-12T23:48:51.377843948Z" level=info msg="StartContainer for \"8e699355b8e96ead7568bbea2056990bc3037a7308a68a1317516565f642e5da\" returns successfully" Sep 12 23:48:51.381220 containerd[1532]: time="2025-09-12T23:48:51.381016913Z" level=info msg="CreateContainer within sandbox \"691cc41090a5eed6b3cc24dffdc76bbd67d631f4ecc2ffe5bcae3ea3e6c6ae77\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"df2a0263330a48f15cf74b84841a8c02c6a381f2073f1fde55498e9fd2abc94f\"" Sep 12 23:48:51.381520 containerd[1532]: time="2025-09-12T23:48:51.381488876Z" level=info msg="StartContainer for \"df2a0263330a48f15cf74b84841a8c02c6a381f2073f1fde55498e9fd2abc94f\"" Sep 12 23:48:51.384329 containerd[1532]: time="2025-09-12T23:48:51.384293043Z" level=info msg="connecting to shim df2a0263330a48f15cf74b84841a8c02c6a381f2073f1fde55498e9fd2abc94f" address="unix:///run/containerd/s/904408c3245cb133e8e12b7c554c363316a5bab3eaeda9c6f71194e763190cd9" protocol=ttrpc version=3 Sep 12 23:48:51.405338 systemd[1]: Started cri-containerd-df2a0263330a48f15cf74b84841a8c02c6a381f2073f1fde55498e9fd2abc94f.scope - libcontainer container df2a0263330a48f15cf74b84841a8c02c6a381f2073f1fde55498e9fd2abc94f. 
Sep 12 23:48:51.418024 containerd[1532]: time="2025-09-12T23:48:51.417972210Z" level=info msg="StartContainer for \"442634a624ff6beb0116dc6d509e2c80e42f6247dd93f268962b53708713f58d\" returns successfully" Sep 12 23:48:51.458181 containerd[1532]: time="2025-09-12T23:48:51.457568644Z" level=info msg="StartContainer for \"df2a0263330a48f15cf74b84841a8c02c6a381f2073f1fde55498e9fd2abc94f\" returns successfully" Sep 12 23:48:51.886385 kubelet[2279]: I0912 23:48:51.886346 2279 kubelet_node_status.go:72] "Attempting to register node" node="localhost" Sep 12 23:48:53.006277 kubelet[2279]: E0912 23:48:53.006209 2279 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"localhost\" not found" node="localhost" Sep 12 23:48:53.105921 kubelet[2279]: I0912 23:48:53.105739 2279 kubelet_node_status.go:75] "Successfully registered node" node="localhost" Sep 12 23:48:53.105921 kubelet[2279]: E0912 23:48:53.105780 2279 kubelet_node_status.go:535] "Error updating node status, will retry" err="error getting node \"localhost\": node \"localhost\" not found" Sep 12 23:48:53.120624 kubelet[2279]: E0912 23:48:53.120578 2279 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 12 23:48:53.220781 kubelet[2279]: E0912 23:48:53.220720 2279 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 12 23:48:53.322155 kubelet[2279]: E0912 23:48:53.321770 2279 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 12 23:48:53.423062 kubelet[2279]: E0912 23:48:53.422896 2279 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 12 23:48:53.523962 kubelet[2279]: E0912 23:48:53.523835 2279 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 12 23:48:53.624181 kubelet[2279]: 
E0912 23:48:53.623929 2279 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 12 23:48:53.724786 kubelet[2279]: E0912 23:48:53.724626 2279 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 12 23:48:53.824955 kubelet[2279]: E0912 23:48:53.824886 2279 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 12 23:48:53.925380 kubelet[2279]: E0912 23:48:53.925266 2279 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 12 23:48:54.026019 kubelet[2279]: E0912 23:48:54.025980 2279 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 12 23:48:54.126641 kubelet[2279]: E0912 23:48:54.126608 2279 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 12 23:48:54.227559 kubelet[2279]: E0912 23:48:54.227318 2279 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 12 23:48:54.328404 kubelet[2279]: E0912 23:48:54.328359 2279 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 12 23:48:55.248938 kubelet[2279]: I0912 23:48:55.248899 2279 apiserver.go:52] "Watching apiserver" Sep 12 23:48:55.259967 kubelet[2279]: I0912 23:48:55.259914 2279 desired_state_of_world_populator.go:155] "Finished populating initial desired state of world" Sep 12 23:48:55.864954 systemd[1]: Reload requested from client PID 2552 ('systemctl') (unit session-7.scope)... Sep 12 23:48:55.864968 systemd[1]: Reloading... Sep 12 23:48:55.940884 zram_generator::config[2595]: No configuration found. 
Sep 12 23:48:56.018207 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Sep 12 23:48:56.119033 systemd[1]: Reloading finished in 253 ms. Sep 12 23:48:56.146718 kubelet[2279]: I0912 23:48:56.146610 2279 dynamic_cafile_content.go:174] "Shutting down controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Sep 12 23:48:56.146772 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Sep 12 23:48:56.160566 systemd[1]: kubelet.service: Deactivated successfully. Sep 12 23:48:56.160777 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Sep 12 23:48:56.160825 systemd[1]: kubelet.service: Consumed 1.184s CPU time, 128.1M memory peak. Sep 12 23:48:56.163346 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 12 23:48:56.342305 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Sep 12 23:48:56.345633 (kubelet)[2637]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Sep 12 23:48:56.392901 kubelet[2637]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Sep 12 23:48:56.392901 kubelet[2637]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Sep 12 23:48:56.392901 kubelet[2637]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Sep 12 23:48:56.392901 kubelet[2637]: I0912 23:48:56.392879 2637 server.go:211] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Sep 12 23:48:56.399459 kubelet[2637]: I0912 23:48:56.399422 2637 server.go:491] "Kubelet version" kubeletVersion="v1.31.8" Sep 12 23:48:56.399459 kubelet[2637]: I0912 23:48:56.399454 2637 server.go:493] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Sep 12 23:48:56.399741 kubelet[2637]: I0912 23:48:56.399728 2637 server.go:934] "Client rotation is on, will bootstrap in background" Sep 12 23:48:56.401688 kubelet[2637]: I0912 23:48:56.401580 2637 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". Sep 12 23:48:56.406496 kubelet[2637]: I0912 23:48:56.406208 2637 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Sep 12 23:48:56.414724 kubelet[2637]: I0912 23:48:56.414678 2637 server.go:1431] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Sep 12 23:48:56.418759 kubelet[2637]: I0912 23:48:56.418715 2637 server.go:749] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Sep 12 23:48:56.419051 kubelet[2637]: I0912 23:48:56.418832 2637 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority" Sep 12 23:48:56.419051 kubelet[2637]: I0912 23:48:56.418913 2637 container_manager_linux.go:264] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Sep 12 23:48:56.419356 kubelet[2637]: I0912 23:48:56.418938 2637 container_manager_linux.go:269] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyO
ptions":null,"CgroupVersion":2} Sep 12 23:48:56.419478 kubelet[2637]: I0912 23:48:56.419365 2637 topology_manager.go:138] "Creating topology manager with none policy" Sep 12 23:48:56.419478 kubelet[2637]: I0912 23:48:56.419474 2637 container_manager_linux.go:300] "Creating device plugin manager" Sep 12 23:48:56.419624 kubelet[2637]: I0912 23:48:56.419607 2637 state_mem.go:36] "Initialized new in-memory state store" Sep 12 23:48:56.420360 kubelet[2637]: I0912 23:48:56.419822 2637 kubelet.go:408] "Attempting to sync node with API server" Sep 12 23:48:56.420360 kubelet[2637]: I0912 23:48:56.419844 2637 kubelet.go:303] "Adding static pod path" path="/etc/kubernetes/manifests" Sep 12 23:48:56.420360 kubelet[2637]: I0912 23:48:56.419865 2637 kubelet.go:314] "Adding apiserver pod source" Sep 12 23:48:56.420360 kubelet[2637]: I0912 23:48:56.419891 2637 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Sep 12 23:48:56.421411 kubelet[2637]: I0912 23:48:56.421384 2637 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="containerd" version="v2.0.4" apiVersion="v1" Sep 12 23:48:56.421886 kubelet[2637]: I0912 23:48:56.421868 2637 kubelet.go:837] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Sep 12 23:48:56.423231 kubelet[2637]: I0912 23:48:56.422308 2637 server.go:1274] "Started kubelet" Sep 12 23:48:56.424554 kubelet[2637]: I0912 23:48:56.423838 2637 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Sep 12 23:48:56.424554 kubelet[2637]: I0912 23:48:56.424151 2637 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Sep 12 23:48:56.424554 kubelet[2637]: I0912 23:48:56.424239 2637 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Sep 12 23:48:56.424937 kubelet[2637]: I0912 23:48:56.424234 2637 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Sep 12 
23:48:56.429563 kubelet[2637]: I0912 23:48:56.429526 2637 volume_manager.go:289] "Starting Kubelet Volume Manager" Sep 12 23:48:56.429837 kubelet[2637]: E0912 23:48:56.429815 2637 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 12 23:48:56.431728 kubelet[2637]: I0912 23:48:56.430288 2637 desired_state_of_world_populator.go:147] "Desired state populator starts to run" Sep 12 23:48:56.431728 kubelet[2637]: I0912 23:48:56.430512 2637 reconciler.go:26] "Reconciler: start to sync state" Sep 12 23:48:56.440923 kubelet[2637]: I0912 23:48:56.440888 2637 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Sep 12 23:48:56.457963 kubelet[2637]: I0912 23:48:56.456402 2637 server.go:449] "Adding debug handlers to kubelet server" Sep 12 23:48:56.457963 kubelet[2637]: I0912 23:48:56.457059 2637 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Sep 12 23:48:56.461067 kubelet[2637]: E0912 23:48:56.460959 2637 kubelet.go:1478] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Sep 12 23:48:56.462951 kubelet[2637]: I0912 23:48:56.462281 2637 factory.go:221] Registration of the containerd container factory successfully Sep 12 23:48:56.463105 kubelet[2637]: I0912 23:48:56.463089 2637 factory.go:221] Registration of the systemd container factory successfully Sep 12 23:48:56.467019 kubelet[2637]: I0912 23:48:56.466854 2637 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Sep 12 23:48:56.468230 kubelet[2637]: I0912 23:48:56.468195 2637 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Sep 12 23:48:56.468230 kubelet[2637]: I0912 23:48:56.468224 2637 status_manager.go:217] "Starting to sync pod status with apiserver" Sep 12 23:48:56.468437 kubelet[2637]: I0912 23:48:56.468247 2637 kubelet.go:2321] "Starting kubelet main sync loop" Sep 12 23:48:56.468437 kubelet[2637]: E0912 23:48:56.468293 2637 kubelet.go:2345] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Sep 12 23:48:56.508268 kubelet[2637]: I0912 23:48:56.508239 2637 cpu_manager.go:214] "Starting CPU manager" policy="none" Sep 12 23:48:56.508572 kubelet[2637]: I0912 23:48:56.508537 2637 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Sep 12 23:48:56.508650 kubelet[2637]: I0912 23:48:56.508640 2637 state_mem.go:36] "Initialized new in-memory state store" Sep 12 23:48:56.509060 kubelet[2637]: I0912 23:48:56.509032 2637 state_mem.go:88] "Updated default CPUSet" cpuSet="" Sep 12 23:48:56.509182 kubelet[2637]: I0912 23:48:56.509132 2637 state_mem.go:96] "Updated CPUSet assignments" assignments={} Sep 12 23:48:56.509443 kubelet[2637]: I0912 23:48:56.509420 2637 policy_none.go:49] "None policy: Start" Sep 12 23:48:56.510316 kubelet[2637]: I0912 23:48:56.510297 2637 memory_manager.go:170] "Starting memorymanager" policy="None" Sep 12 23:48:56.510417 kubelet[2637]: I0912 23:48:56.510406 2637 state_mem.go:35] "Initializing new in-memory state store" Sep 12 23:48:56.510662 kubelet[2637]: I0912 23:48:56.510649 2637 state_mem.go:75] "Updated machine memory state" Sep 12 23:48:56.516876 kubelet[2637]: I0912 23:48:56.516846 2637 manager.go:513] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Sep 12 23:48:56.517060 kubelet[2637]: I0912 23:48:56.517038 2637 eviction_manager.go:189] "Eviction manager: starting control loop" Sep 12 23:48:56.517273 kubelet[2637]: I0912 23:48:56.517055 2637 container_log_manager.go:189] 
"Initializing container log rotate workers" workers=1 monitorPeriod="10s" Sep 12 23:48:56.517559 kubelet[2637]: I0912 23:48:56.517444 2637 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Sep 12 23:48:56.618654 kubelet[2637]: I0912 23:48:56.618627 2637 kubelet_node_status.go:72] "Attempting to register node" node="localhost" Sep 12 23:48:56.631300 kubelet[2637]: I0912 23:48:56.631257 2637 kubelet_node_status.go:111] "Node was previously registered" node="localhost" Sep 12 23:48:56.631440 kubelet[2637]: I0912 23:48:56.631332 2637 kubelet_node_status.go:75] "Successfully registered node" node="localhost" Sep 12 23:48:56.731556 kubelet[2637]: I0912 23:48:56.731421 2637 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/71d8bf7bd9b7c7432927bee9d50592b5-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"71d8bf7bd9b7c7432927bee9d50592b5\") " pod="kube-system/kube-controller-manager-localhost" Sep 12 23:48:56.731556 kubelet[2637]: I0912 23:48:56.731485 2637 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/37a063ae90824551ef38d7f0ed67be14-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"37a063ae90824551ef38d7f0ed67be14\") " pod="kube-system/kube-apiserver-localhost" Sep 12 23:48:56.731556 kubelet[2637]: I0912 23:48:56.731507 2637 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/71d8bf7bd9b7c7432927bee9d50592b5-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"71d8bf7bd9b7c7432927bee9d50592b5\") " pod="kube-system/kube-controller-manager-localhost" Sep 12 23:48:56.731556 kubelet[2637]: I0912 23:48:56.731524 2637 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/71d8bf7bd9b7c7432927bee9d50592b5-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"71d8bf7bd9b7c7432927bee9d50592b5\") " pod="kube-system/kube-controller-manager-localhost" Sep 12 23:48:56.731556 kubelet[2637]: I0912 23:48:56.731543 2637 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/71d8bf7bd9b7c7432927bee9d50592b5-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"71d8bf7bd9b7c7432927bee9d50592b5\") " pod="kube-system/kube-controller-manager-localhost" Sep 12 23:48:56.731753 kubelet[2637]: I0912 23:48:56.731559 2637 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/fe5e332fba00ba0b5b33a25fe2e8fd7b-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"fe5e332fba00ba0b5b33a25fe2e8fd7b\") " pod="kube-system/kube-scheduler-localhost" Sep 12 23:48:56.731753 kubelet[2637]: I0912 23:48:56.731574 2637 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/37a063ae90824551ef38d7f0ed67be14-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"37a063ae90824551ef38d7f0ed67be14\") " pod="kube-system/kube-apiserver-localhost" Sep 12 23:48:56.731753 kubelet[2637]: I0912 23:48:56.731589 2637 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/37a063ae90824551ef38d7f0ed67be14-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"37a063ae90824551ef38d7f0ed67be14\") " pod="kube-system/kube-apiserver-localhost" Sep 12 23:48:56.731753 kubelet[2637]: I0912 23:48:56.731610 2637 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" 
(UniqueName: \"kubernetes.io/host-path/71d8bf7bd9b7c7432927bee9d50592b5-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"71d8bf7bd9b7c7432927bee9d50592b5\") " pod="kube-system/kube-controller-manager-localhost" Sep 12 23:48:57.420591 kubelet[2637]: I0912 23:48:57.420535 2637 apiserver.go:52] "Watching apiserver" Sep 12 23:48:57.430726 kubelet[2637]: I0912 23:48:57.430675 2637 desired_state_of_world_populator.go:155] "Finished populating initial desired state of world" Sep 12 23:48:57.504547 kubelet[2637]: E0912 23:48:57.503683 2637 kubelet.go:1915] "Failed creating a mirror pod for" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost" Sep 12 23:48:57.504756 kubelet[2637]: E0912 23:48:57.504725 2637 kubelet.go:1915] "Failed creating a mirror pod for" err="pods \"kube-controller-manager-localhost\" already exists" pod="kube-system/kube-controller-manager-localhost" Sep 12 23:48:57.534549 kubelet[2637]: I0912 23:48:57.534461 2637 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-localhost" podStartSLOduration=1.534444933 podStartE2EDuration="1.534444933s" podCreationTimestamp="2025-09-12 23:48:56 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-12 23:48:57.521807253 +0000 UTC m=+1.173338540" watchObservedRunningTime="2025-09-12 23:48:57.534444933 +0000 UTC m=+1.185976220" Sep 12 23:48:57.544517 kubelet[2637]: I0912 23:48:57.544431 2637 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-localhost" podStartSLOduration=1.544387853 podStartE2EDuration="1.544387853s" podCreationTimestamp="2025-09-12 23:48:56 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-12 23:48:57.54400698 +0000 UTC m=+1.195538267" 
watchObservedRunningTime="2025-09-12 23:48:57.544387853 +0000 UTC m=+1.195919180" Sep 12 23:48:57.544683 kubelet[2637]: I0912 23:48:57.544542 2637 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-localhost" podStartSLOduration=1.544537002 podStartE2EDuration="1.544537002s" podCreationTimestamp="2025-09-12 23:48:56 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-12 23:48:57.534397684 +0000 UTC m=+1.185928971" watchObservedRunningTime="2025-09-12 23:48:57.544537002 +0000 UTC m=+1.196068289" Sep 12 23:49:01.270977 kubelet[2637]: I0912 23:49:01.270936 2637 kuberuntime_manager.go:1635] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Sep 12 23:49:01.271478 containerd[1532]: time="2025-09-12T23:49:01.271390294Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Sep 12 23:49:01.271727 kubelet[2637]: I0912 23:49:01.271706 2637 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Sep 12 23:49:02.225987 systemd[1]: Created slice kubepods-besteffort-podd82b6614_c273_4f09_b70e_44b2f9c84b3b.slice - libcontainer container kubepods-besteffort-podd82b6614_c273_4f09_b70e_44b2f9c84b3b.slice. 
Sep 12 23:49:02.264015 kubelet[2637]: I0912 23:49:02.263969 2637 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/d82b6614-c273-4f09-b70e-44b2f9c84b3b-lib-modules\") pod \"kube-proxy-rwh6g\" (UID: \"d82b6614-c273-4f09-b70e-44b2f9c84b3b\") " pod="kube-system/kube-proxy-rwh6g" Sep 12 23:49:02.264015 kubelet[2637]: I0912 23:49:02.264017 2637 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/d82b6614-c273-4f09-b70e-44b2f9c84b3b-kube-proxy\") pod \"kube-proxy-rwh6g\" (UID: \"d82b6614-c273-4f09-b70e-44b2f9c84b3b\") " pod="kube-system/kube-proxy-rwh6g" Sep 12 23:49:02.264185 kubelet[2637]: I0912 23:49:02.264035 2637 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/d82b6614-c273-4f09-b70e-44b2f9c84b3b-xtables-lock\") pod \"kube-proxy-rwh6g\" (UID: \"d82b6614-c273-4f09-b70e-44b2f9c84b3b\") " pod="kube-system/kube-proxy-rwh6g" Sep 12 23:49:02.264185 kubelet[2637]: I0912 23:49:02.264053 2637 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ml228\" (UniqueName: \"kubernetes.io/projected/d82b6614-c273-4f09-b70e-44b2f9c84b3b-kube-api-access-ml228\") pod \"kube-proxy-rwh6g\" (UID: \"d82b6614-c273-4f09-b70e-44b2f9c84b3b\") " pod="kube-system/kube-proxy-rwh6g" Sep 12 23:49:02.384714 kubelet[2637]: W0912 23:49:02.384675 2637 reflector.go:561] object-"tigera-operator"/"kubernetes-services-endpoint": failed to list *v1.ConfigMap: configmaps "kubernetes-services-endpoint" is forbidden: User "system:node:localhost" cannot list resource "configmaps" in API group "" in the namespace "tigera-operator": no relationship found between node 'localhost' and this object Sep 12 23:49:02.384714 kubelet[2637]: E0912 23:49:02.384714 2637 
reflector.go:158] "Unhandled Error" err="object-\"tigera-operator\"/\"kubernetes-services-endpoint\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"kubernetes-services-endpoint\" is forbidden: User \"system:node:localhost\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"tigera-operator\": no relationship found between node 'localhost' and this object" logger="UnhandledError" Sep 12 23:49:02.388053 kubelet[2637]: W0912 23:49:02.386311 2637 reflector.go:561] object-"tigera-operator"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:localhost" cannot list resource "configmaps" in API group "" in the namespace "tigera-operator": no relationship found between node 'localhost' and this object Sep 12 23:49:02.388053 kubelet[2637]: E0912 23:49:02.386333 2637 reflector.go:158] "Unhandled Error" err="object-\"tigera-operator\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"kube-root-ca.crt\" is forbidden: User \"system:node:localhost\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"tigera-operator\": no relationship found between node 'localhost' and this object" logger="UnhandledError" Sep 12 23:49:02.396974 systemd[1]: Created slice kubepods-besteffort-pod271eff3c_6908_43c6_a94c_728177c05ced.slice - libcontainer container kubepods-besteffort-pod271eff3c_6908_43c6_a94c_728177c05ced.slice. 
Sep 12 23:49:02.466117 kubelet[2637]: I0912 23:49:02.466072 2637 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/271eff3c-6908-43c6-a94c-728177c05ced-var-lib-calico\") pod \"tigera-operator-58fc44c59b-4z5pw\" (UID: \"271eff3c-6908-43c6-a94c-728177c05ced\") " pod="tigera-operator/tigera-operator-58fc44c59b-4z5pw" Sep 12 23:49:02.466117 kubelet[2637]: I0912 23:49:02.466113 2637 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8ffxz\" (UniqueName: \"kubernetes.io/projected/271eff3c-6908-43c6-a94c-728177c05ced-kube-api-access-8ffxz\") pod \"tigera-operator-58fc44c59b-4z5pw\" (UID: \"271eff3c-6908-43c6-a94c-728177c05ced\") " pod="tigera-operator/tigera-operator-58fc44c59b-4z5pw" Sep 12 23:49:02.535522 containerd[1532]: time="2025-09-12T23:49:02.535411807Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-rwh6g,Uid:d82b6614-c273-4f09-b70e-44b2f9c84b3b,Namespace:kube-system,Attempt:0,}" Sep 12 23:49:02.549805 containerd[1532]: time="2025-09-12T23:49:02.549745269Z" level=info msg="connecting to shim e3d560bf3c4cdb34342ca251b50a5bbbd166a2f6a9c50d95eca27201fdca07c0" address="unix:///run/containerd/s/e3b33cdb58eddc46b9c8aa01b99dec7c17c34f68e575feb7e982f598f5f453cb" namespace=k8s.io protocol=ttrpc version=3 Sep 12 23:49:02.573330 systemd[1]: Started cri-containerd-e3d560bf3c4cdb34342ca251b50a5bbbd166a2f6a9c50d95eca27201fdca07c0.scope - libcontainer container e3d560bf3c4cdb34342ca251b50a5bbbd166a2f6a9c50d95eca27201fdca07c0. 
Sep 12 23:49:02.594458 containerd[1532]: time="2025-09-12T23:49:02.594418846Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-rwh6g,Uid:d82b6614-c273-4f09-b70e-44b2f9c84b3b,Namespace:kube-system,Attempt:0,} returns sandbox id \"e3d560bf3c4cdb34342ca251b50a5bbbd166a2f6a9c50d95eca27201fdca07c0\"" Sep 12 23:49:02.597184 containerd[1532]: time="2025-09-12T23:49:02.596968558Z" level=info msg="CreateContainer within sandbox \"e3d560bf3c4cdb34342ca251b50a5bbbd166a2f6a9c50d95eca27201fdca07c0\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Sep 12 23:49:02.607670 containerd[1532]: time="2025-09-12T23:49:02.607637167Z" level=info msg="Container db71efcae454088a1119c2074d0dbaa51bb7b4971a5ef18dffb1ade35a96488e: CDI devices from CRI Config.CDIDevices: []" Sep 12 23:49:02.614405 containerd[1532]: time="2025-09-12T23:49:02.614370098Z" level=info msg="CreateContainer within sandbox \"e3d560bf3c4cdb34342ca251b50a5bbbd166a2f6a9c50d95eca27201fdca07c0\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"db71efcae454088a1119c2074d0dbaa51bb7b4971a5ef18dffb1ade35a96488e\"" Sep 12 23:49:02.614983 containerd[1532]: time="2025-09-12T23:49:02.614951631Z" level=info msg="StartContainer for \"db71efcae454088a1119c2074d0dbaa51bb7b4971a5ef18dffb1ade35a96488e\"" Sep 12 23:49:02.616367 containerd[1532]: time="2025-09-12T23:49:02.616329796Z" level=info msg="connecting to shim db71efcae454088a1119c2074d0dbaa51bb7b4971a5ef18dffb1ade35a96488e" address="unix:///run/containerd/s/e3b33cdb58eddc46b9c8aa01b99dec7c17c34f68e575feb7e982f598f5f453cb" protocol=ttrpc version=3 Sep 12 23:49:02.637425 systemd[1]: Started cri-containerd-db71efcae454088a1119c2074d0dbaa51bb7b4971a5ef18dffb1ade35a96488e.scope - libcontainer container db71efcae454088a1119c2074d0dbaa51bb7b4971a5ef18dffb1ade35a96488e. 
Sep 12 23:49:02.669585 containerd[1532]: time="2025-09-12T23:49:02.669328970Z" level=info msg="StartContainer for \"db71efcae454088a1119c2074d0dbaa51bb7b4971a5ef18dffb1ade35a96488e\" returns successfully" Sep 12 23:49:03.576549 kubelet[2637]: E0912 23:49:03.576468 2637 projected.go:288] Couldn't get configMap tigera-operator/kube-root-ca.crt: failed to sync configmap cache: timed out waiting for the condition Sep 12 23:49:03.576549 kubelet[2637]: E0912 23:49:03.576507 2637 projected.go:194] Error preparing data for projected volume kube-api-access-8ffxz for pod tigera-operator/tigera-operator-58fc44c59b-4z5pw: failed to sync configmap cache: timed out waiting for the condition Sep 12 23:49:03.577500 kubelet[2637]: E0912 23:49:03.577140 2637 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/271eff3c-6908-43c6-a94c-728177c05ced-kube-api-access-8ffxz podName:271eff3c-6908-43c6-a94c-728177c05ced nodeName:}" failed. No retries permitted until 2025-09-12 23:49:04.077110477 +0000 UTC m=+7.728641764 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "kube-api-access-8ffxz" (UniqueName: "kubernetes.io/projected/271eff3c-6908-43c6-a94c-728177c05ced-kube-api-access-8ffxz") pod "tigera-operator-58fc44c59b-4z5pw" (UID: "271eff3c-6908-43c6-a94c-728177c05ced") : failed to sync configmap cache: timed out waiting for the condition Sep 12 23:49:04.201043 containerd[1532]: time="2025-09-12T23:49:04.200996307Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-58fc44c59b-4z5pw,Uid:271eff3c-6908-43c6-a94c-728177c05ced,Namespace:tigera-operator,Attempt:0,}" Sep 12 23:49:04.219339 containerd[1532]: time="2025-09-12T23:49:04.219300153Z" level=info msg="connecting to shim 02894a834c2431366b4f363673af4b4fecf2a9b4319cd23bb85defcdac7a35e5" address="unix:///run/containerd/s/cec4c0931c3c73df6ad235f94993ca77fe35b342fd465f773f4dabdc3766d083" namespace=k8s.io protocol=ttrpc version=3 Sep 12 23:49:04.247418 systemd[1]: Started cri-containerd-02894a834c2431366b4f363673af4b4fecf2a9b4319cd23bb85defcdac7a35e5.scope - libcontainer container 02894a834c2431366b4f363673af4b4fecf2a9b4319cd23bb85defcdac7a35e5. Sep 12 23:49:04.280618 containerd[1532]: time="2025-09-12T23:49:04.280579209Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-58fc44c59b-4z5pw,Uid:271eff3c-6908-43c6-a94c-728177c05ced,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"02894a834c2431366b4f363673af4b4fecf2a9b4319cd23bb85defcdac7a35e5\"" Sep 12 23:49:04.283322 containerd[1532]: time="2025-09-12T23:49:04.283276388Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.6\"" Sep 12 23:49:05.669336 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2382406387.mount: Deactivated successfully. 
Sep 12 23:49:05.720947 kubelet[2637]: I0912 23:49:05.720720 2637 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-rwh6g" podStartSLOduration=3.7207032 podStartE2EDuration="3.7207032s" podCreationTimestamp="2025-09-12 23:49:02 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-12 23:49:03.510312662 +0000 UTC m=+7.161843949" watchObservedRunningTime="2025-09-12 23:49:05.7207032 +0000 UTC m=+9.372234487" Sep 12 23:49:06.062781 containerd[1532]: time="2025-09-12T23:49:06.062613377Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.38.6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 23:49:06.064452 containerd[1532]: time="2025-09-12T23:49:06.064416869Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.38.6: active requests=0, bytes read=22152365" Sep 12 23:49:06.065173 containerd[1532]: time="2025-09-12T23:49:06.065104559Z" level=info msg="ImageCreate event name:\"sha256:dd2e197838b00861b08ae5f480dfbfb9a519722e35ced99346315722309cbe9f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 23:49:06.067395 containerd[1532]: time="2025-09-12T23:49:06.067362243Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:00a7a9b62f9b9a4e0856128b078539783b8352b07f707bff595cb604cc580f6e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 23:49:06.068580 containerd[1532]: time="2025-09-12T23:49:06.068455842Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.38.6\" with image id \"sha256:dd2e197838b00861b08ae5f480dfbfb9a519722e35ced99346315722309cbe9f\", repo tag \"quay.io/tigera/operator:v1.38.6\", repo digest \"quay.io/tigera/operator@sha256:00a7a9b62f9b9a4e0856128b078539783b8352b07f707bff595cb604cc580f6e\", size \"22148360\" in 1.785129011s" Sep 12 23:49:06.068580 containerd[1532]: time="2025-09-12T23:49:06.068488045Z" level=info 
msg="PullImage \"quay.io/tigera/operator:v1.38.6\" returns image reference \"sha256:dd2e197838b00861b08ae5f480dfbfb9a519722e35ced99346315722309cbe9f\"" Sep 12 23:49:06.072791 containerd[1532]: time="2025-09-12T23:49:06.072766476Z" level=info msg="CreateContainer within sandbox \"02894a834c2431366b4f363673af4b4fecf2a9b4319cd23bb85defcdac7a35e5\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}" Sep 12 23:49:06.079906 containerd[1532]: time="2025-09-12T23:49:06.079381717Z" level=info msg="Container 04f6c3dab6d99cf779b40800c55cd3b48dafc8a6604a043c33cd581d33b18c81: CDI devices from CRI Config.CDIDevices: []" Sep 12 23:49:06.086098 containerd[1532]: time="2025-09-12T23:49:06.086062723Z" level=info msg="CreateContainer within sandbox \"02894a834c2431366b4f363673af4b4fecf2a9b4319cd23bb85defcdac7a35e5\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"04f6c3dab6d99cf779b40800c55cd3b48dafc8a6604a043c33cd581d33b18c81\"" Sep 12 23:49:06.086598 containerd[1532]: time="2025-09-12T23:49:06.086567840Z" level=info msg="StartContainer for \"04f6c3dab6d99cf779b40800c55cd3b48dafc8a6604a043c33cd581d33b18c81\"" Sep 12 23:49:06.088920 containerd[1532]: time="2025-09-12T23:49:06.088888129Z" level=info msg="connecting to shim 04f6c3dab6d99cf779b40800c55cd3b48dafc8a6604a043c33cd581d33b18c81" address="unix:///run/containerd/s/cec4c0931c3c73df6ad235f94993ca77fe35b342fd465f773f4dabdc3766d083" protocol=ttrpc version=3 Sep 12 23:49:06.107324 systemd[1]: Started cri-containerd-04f6c3dab6d99cf779b40800c55cd3b48dafc8a6604a043c33cd581d33b18c81.scope - libcontainer container 04f6c3dab6d99cf779b40800c55cd3b48dafc8a6604a043c33cd581d33b18c81. 
Sep 12 23:49:06.133276 containerd[1532]: time="2025-09-12T23:49:06.133209392Z" level=info msg="StartContainer for \"04f6c3dab6d99cf779b40800c55cd3b48dafc8a6604a043c33cd581d33b18c81\" returns successfully"
Sep 12 23:49:06.577370 kubelet[2637]: I0912 23:49:06.577301 2637 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="tigera-operator/tigera-operator-58fc44c59b-4z5pw" podStartSLOduration=2.7874235240000003 podStartE2EDuration="4.577283452s" podCreationTimestamp="2025-09-12 23:49:02 +0000 UTC" firstStartedPulling="2025-09-12 23:49:04.281657216 +0000 UTC m=+7.933192944" lastFinishedPulling="2025-09-12 23:49:06.071521585 +0000 UTC m=+9.723052872" observedRunningTime="2025-09-12 23:49:06.540444732 +0000 UTC m=+10.191976099" watchObservedRunningTime="2025-09-12 23:49:06.577283452 +0000 UTC m=+10.228814739"
Sep 12 23:49:10.387017 sudo[1731]: pam_unix(sudo:session): session closed for user root
Sep 12 23:49:10.388740 sshd[1730]: Connection closed by 10.0.0.1 port 34346
Sep 12 23:49:10.389511 sshd-session[1728]: pam_unix(sshd:session): session closed for user core
Sep 12 23:49:10.395675 systemd[1]: sshd@6-10.0.0.100:22-10.0.0.1:34346.service: Deactivated successfully.
Sep 12 23:49:10.398778 systemd[1]: session-7.scope: Deactivated successfully.
Sep 12 23:49:10.400068 systemd[1]: session-7.scope: Consumed 5.786s CPU time, 227.5M memory peak.
Sep 12 23:49:10.401625 systemd-logind[1503]: Session 7 logged out. Waiting for processes to exit.
Sep 12 23:49:10.403062 systemd-logind[1503]: Removed session 7.
Sep 12 23:49:12.482778 update_engine[1504]: I20250912 23:49:12.482228 1504 update_attempter.cc:509] Updating boot flags...
Sep 12 23:49:16.321456 systemd[1]: Created slice kubepods-besteffort-podfdf9f891_d275_4fef_b1f2_80eb6c4786c9.slice - libcontainer container kubepods-besteffort-podfdf9f891_d275_4fef_b1f2_80eb6c4786c9.slice.
Sep 12 23:49:16.458173 kubelet[2637]: I0912 23:49:16.458107 2637 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/fdf9f891-d275-4fef-b1f2-80eb6c4786c9-typha-certs\") pod \"calico-typha-98df8c6b-8c9v5\" (UID: \"fdf9f891-d275-4fef-b1f2-80eb6c4786c9\") " pod="calico-system/calico-typha-98df8c6b-8c9v5"
Sep 12 23:49:16.458173 kubelet[2637]: I0912 23:49:16.458181 2637 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/fdf9f891-d275-4fef-b1f2-80eb6c4786c9-tigera-ca-bundle\") pod \"calico-typha-98df8c6b-8c9v5\" (UID: \"fdf9f891-d275-4fef-b1f2-80eb6c4786c9\") " pod="calico-system/calico-typha-98df8c6b-8c9v5"
Sep 12 23:49:16.458588 kubelet[2637]: I0912 23:49:16.458205 2637 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7crx6\" (UniqueName: \"kubernetes.io/projected/fdf9f891-d275-4fef-b1f2-80eb6c4786c9-kube-api-access-7crx6\") pod \"calico-typha-98df8c6b-8c9v5\" (UID: \"fdf9f891-d275-4fef-b1f2-80eb6c4786c9\") " pod="calico-system/calico-typha-98df8c6b-8c9v5"
Sep 12 23:49:16.625952 containerd[1532]: time="2025-09-12T23:49:16.625906804Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-98df8c6b-8c9v5,Uid:fdf9f891-d275-4fef-b1f2-80eb6c4786c9,Namespace:calico-system,Attempt:0,}"
Sep 12 23:49:16.664855 containerd[1532]: time="2025-09-12T23:49:16.664333876Z" level=info msg="connecting to shim 7c9b5fc0fd5dd63fd07bae5e55002112ee7bfbea2ceee468731a4471757e4193" address="unix:///run/containerd/s/cc8b745aa5f80ea643c9a17de3fecc9f6c9cb40ea4d7857f86b5e7ea25f37eb7" namespace=k8s.io protocol=ttrpc version=3
Sep 12 23:49:16.704085 systemd[1]: Created slice kubepods-besteffort-pod183d4803_e393_4ecb_a02b_0957868ae08b.slice - libcontainer container kubepods-besteffort-pod183d4803_e393_4ecb_a02b_0957868ae08b.slice.
Sep 12 23:49:16.737425 systemd[1]: Started cri-containerd-7c9b5fc0fd5dd63fd07bae5e55002112ee7bfbea2ceee468731a4471757e4193.scope - libcontainer container 7c9b5fc0fd5dd63fd07bae5e55002112ee7bfbea2ceee468731a4471757e4193.
Sep 12 23:49:16.784623 containerd[1532]: time="2025-09-12T23:49:16.784583430Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-98df8c6b-8c9v5,Uid:fdf9f891-d275-4fef-b1f2-80eb6c4786c9,Namespace:calico-system,Attempt:0,} returns sandbox id \"7c9b5fc0fd5dd63fd07bae5e55002112ee7bfbea2ceee468731a4471757e4193\""
Sep 12 23:49:16.789336 containerd[1532]: time="2025-09-12T23:49:16.789304516Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.3\""
Sep 12 23:49:16.860925 kubelet[2637]: I0912 23:49:16.860885 2637 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/183d4803-e393-4ecb-a02b-0957868ae08b-cni-bin-dir\") pod \"calico-node-v9d7q\" (UID: \"183d4803-e393-4ecb-a02b-0957868ae08b\") " pod="calico-system/calico-node-v9d7q"
Sep 12 23:49:16.860925 kubelet[2637]: I0912 23:49:16.860930 2637 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/183d4803-e393-4ecb-a02b-0957868ae08b-tigera-ca-bundle\") pod \"calico-node-v9d7q\" (UID: \"183d4803-e393-4ecb-a02b-0957868ae08b\") " pod="calico-system/calico-node-v9d7q"
Sep 12 23:49:16.861082 kubelet[2637]: I0912 23:49:16.860947 2637 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/183d4803-e393-4ecb-a02b-0957868ae08b-var-lib-calico\") pod \"calico-node-v9d7q\" (UID: \"183d4803-e393-4ecb-a02b-0957868ae08b\") " pod="calico-system/calico-node-v9d7q"
Sep 12 23:49:16.861082 kubelet[2637]: I0912 23:49:16.860964 2637 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/183d4803-e393-4ecb-a02b-0957868ae08b-cni-log-dir\") pod \"calico-node-v9d7q\" (UID: \"183d4803-e393-4ecb-a02b-0957868ae08b\") " pod="calico-system/calico-node-v9d7q"
Sep 12 23:49:16.861082 kubelet[2637]: I0912 23:49:16.860984 2637 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/183d4803-e393-4ecb-a02b-0957868ae08b-lib-modules\") pod \"calico-node-v9d7q\" (UID: \"183d4803-e393-4ecb-a02b-0957868ae08b\") " pod="calico-system/calico-node-v9d7q"
Sep 12 23:49:16.861082 kubelet[2637]: I0912 23:49:16.860998 2637 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m8pbc\" (UniqueName: \"kubernetes.io/projected/183d4803-e393-4ecb-a02b-0957868ae08b-kube-api-access-m8pbc\") pod \"calico-node-v9d7q\" (UID: \"183d4803-e393-4ecb-a02b-0957868ae08b\") " pod="calico-system/calico-node-v9d7q"
Sep 12 23:49:16.861082 kubelet[2637]: I0912 23:49:16.861018 2637 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/183d4803-e393-4ecb-a02b-0957868ae08b-node-certs\") pod \"calico-node-v9d7q\" (UID: \"183d4803-e393-4ecb-a02b-0957868ae08b\") " pod="calico-system/calico-node-v9d7q"
Sep 12 23:49:16.861229 kubelet[2637]: I0912 23:49:16.861068 2637 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/183d4803-e393-4ecb-a02b-0957868ae08b-cni-net-dir\") pod \"calico-node-v9d7q\" (UID: \"183d4803-e393-4ecb-a02b-0957868ae08b\") " pod="calico-system/calico-node-v9d7q"
Sep 12 23:49:16.861229 kubelet[2637]: I0912 23:49:16.861114 2637 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/183d4803-e393-4ecb-a02b-0957868ae08b-policysync\") pod \"calico-node-v9d7q\" (UID: \"183d4803-e393-4ecb-a02b-0957868ae08b\") " pod="calico-system/calico-node-v9d7q"
Sep 12 23:49:16.861229 kubelet[2637]: I0912 23:49:16.861145 2637 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/183d4803-e393-4ecb-a02b-0957868ae08b-var-run-calico\") pod \"calico-node-v9d7q\" (UID: \"183d4803-e393-4ecb-a02b-0957868ae08b\") " pod="calico-system/calico-node-v9d7q"
Sep 12 23:49:16.861229 kubelet[2637]: I0912 23:49:16.861183 2637 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/183d4803-e393-4ecb-a02b-0957868ae08b-xtables-lock\") pod \"calico-node-v9d7q\" (UID: \"183d4803-e393-4ecb-a02b-0957868ae08b\") " pod="calico-system/calico-node-v9d7q"
Sep 12 23:49:16.861229 kubelet[2637]: I0912 23:49:16.861223 2637 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/183d4803-e393-4ecb-a02b-0957868ae08b-flexvol-driver-host\") pod \"calico-node-v9d7q\" (UID: \"183d4803-e393-4ecb-a02b-0957868ae08b\") " pod="calico-system/calico-node-v9d7q"
Sep 12 23:49:16.976226 kubelet[2637]: E0912 23:49:16.975906 2637 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 12 23:49:16.976226 kubelet[2637]: W0912 23:49:16.976109 2637 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 12 23:49:16.976412 kubelet[2637]: E0912 23:49:16.976355 2637 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 12 23:49:16.983611 kubelet[2637]: E0912 23:49:16.982274 2637 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 12 23:49:16.983611 kubelet[2637]: W0912 23:49:16.982297 2637 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 12 23:49:16.983611 kubelet[2637]: E0912 23:49:16.982318 2637 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 12 23:49:16.988733 kubelet[2637]: E0912 23:49:16.988690 2637 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-rrls7" podUID="842c38e2-26c7-4aa4-8a44-a5b8ff4a773e"
Sep 12 23:49:17.008426 containerd[1532]: time="2025-09-12T23:49:17.008210348Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-v9d7q,Uid:183d4803-e393-4ecb-a02b-0957868ae08b,Namespace:calico-system,Attempt:0,}"
Sep 12 23:49:17.034756 containerd[1532]: time="2025-09-12T23:49:17.034713248Z" level=info msg="connecting to shim adabc4d4098d84faa5c3b7fad9b4b0a88e69d56550339db4fa93fc1aa40ce1f2" address="unix:///run/containerd/s/d7eebc575ebbee7a2d2e4e45500a6a2b9fe28a3051821974a67e8b16d0e9c75f" namespace=k8s.io protocol=ttrpc version=3
Sep 12 23:49:17.063340 kubelet[2637]: E0912 23:49:17.063238 2637 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 12 23:49:17.063340 kubelet[2637]: W0912 23:49:17.063337 2637 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 12 23:49:17.063483 kubelet[2637]: E0912 23:49:17.063356 2637 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 12 23:49:17.063556 kubelet[2637]: E0912 23:49:17.063546 2637 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 12 23:49:17.063592 kubelet[2637]: W0912 23:49:17.063558 2637 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 12 23:49:17.063592 kubelet[2637]: E0912 23:49:17.063567 2637 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 12 23:49:17.063713 kubelet[2637]: E0912 23:49:17.063704 2637 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 12 23:49:17.063738 kubelet[2637]: W0912 23:49:17.063713 2637 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 12 23:49:17.063738 kubelet[2637]: E0912 23:49:17.063721 2637 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 12 23:49:17.063845 kubelet[2637]: E0912 23:49:17.063836 2637 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 12 23:49:17.063872 kubelet[2637]: W0912 23:49:17.063848 2637 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 12 23:49:17.063872 kubelet[2637]: E0912 23:49:17.063856 2637 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 12 23:49:17.064014 kubelet[2637]: E0912 23:49:17.064002 2637 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 12 23:49:17.064014 kubelet[2637]: W0912 23:49:17.064013 2637 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 12 23:49:17.064077 kubelet[2637]: E0912 23:49:17.064021 2637 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 12 23:49:17.064154 kubelet[2637]: E0912 23:49:17.064144 2637 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 12 23:49:17.064216 kubelet[2637]: W0912 23:49:17.064154 2637 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 12 23:49:17.064216 kubelet[2637]: E0912 23:49:17.064176 2637 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 12 23:49:17.064925 kubelet[2637]: E0912 23:49:17.064307 2637 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 12 23:49:17.064925 kubelet[2637]: W0912 23:49:17.064317 2637 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 12 23:49:17.064925 kubelet[2637]: E0912 23:49:17.064327 2637 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 12 23:49:17.064925 kubelet[2637]: E0912 23:49:17.064527 2637 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 12 23:49:17.064925 kubelet[2637]: W0912 23:49:17.064537 2637 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 12 23:49:17.064925 kubelet[2637]: E0912 23:49:17.064546 2637 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 12 23:49:17.064925 kubelet[2637]: E0912 23:49:17.064732 2637 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 12 23:49:17.064925 kubelet[2637]: W0912 23:49:17.064740 2637 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 12 23:49:17.064925 kubelet[2637]: E0912 23:49:17.064747 2637 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 12 23:49:17.064925 kubelet[2637]: E0912 23:49:17.064874 2637 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 12 23:49:17.064362 systemd[1]: Started cri-containerd-adabc4d4098d84faa5c3b7fad9b4b0a88e69d56550339db4fa93fc1aa40ce1f2.scope - libcontainer container adabc4d4098d84faa5c3b7fad9b4b0a88e69d56550339db4fa93fc1aa40ce1f2.
Sep 12 23:49:17.065241 kubelet[2637]: W0912 23:49:17.064882 2637 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 12 23:49:17.065241 kubelet[2637]: E0912 23:49:17.064889 2637 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 12 23:49:17.065241 kubelet[2637]: E0912 23:49:17.065204 2637 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 12 23:49:17.065241 kubelet[2637]: W0912 23:49:17.065214 2637 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 12 23:49:17.065241 kubelet[2637]: E0912 23:49:17.065224 2637 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 12 23:49:17.065731 kubelet[2637]: E0912 23:49:17.065378 2637 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 12 23:49:17.065731 kubelet[2637]: W0912 23:49:17.065391 2637 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 12 23:49:17.065731 kubelet[2637]: E0912 23:49:17.065400 2637 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 12 23:49:17.065731 kubelet[2637]: E0912 23:49:17.065549 2637 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 12 23:49:17.065731 kubelet[2637]: W0912 23:49:17.065570 2637 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 12 23:49:17.065731 kubelet[2637]: E0912 23:49:17.065578 2637 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 12 23:49:17.065731 kubelet[2637]: E0912 23:49:17.065696 2637 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 12 23:49:17.065731 kubelet[2637]: W0912 23:49:17.065703 2637 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 12 23:49:17.065731 kubelet[2637]: E0912 23:49:17.065711 2637 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 12 23:49:17.065940 kubelet[2637]: E0912 23:49:17.065842 2637 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 12 23:49:17.065940 kubelet[2637]: W0912 23:49:17.065852 2637 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 12 23:49:17.065940 kubelet[2637]: E0912 23:49:17.065859 2637 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 12 23:49:17.065996 kubelet[2637]: E0912 23:49:17.065982 2637 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 12 23:49:17.065996 kubelet[2637]: W0912 23:49:17.065990 2637 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 12 23:49:17.066034 kubelet[2637]: E0912 23:49:17.065998 2637 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 12 23:49:17.066390 kubelet[2637]: E0912 23:49:17.066374 2637 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 12 23:49:17.066390 kubelet[2637]: W0912 23:49:17.066386 2637 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 12 23:49:17.066459 kubelet[2637]: E0912 23:49:17.066400 2637 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 12 23:49:17.066547 kubelet[2637]: E0912 23:49:17.066525 2637 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 12 23:49:17.066547 kubelet[2637]: W0912 23:49:17.066537 2637 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 12 23:49:17.066768 kubelet[2637]: E0912 23:49:17.066558 2637 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 12 23:49:17.066768 kubelet[2637]: E0912 23:49:17.066676 2637 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 12 23:49:17.066768 kubelet[2637]: W0912 23:49:17.066684 2637 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 12 23:49:17.066768 kubelet[2637]: E0912 23:49:17.066691 2637 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 12 23:49:17.066873 kubelet[2637]: E0912 23:49:17.066823 2637 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 12 23:49:17.066873 kubelet[2637]: W0912 23:49:17.066831 2637 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 12 23:49:17.066873 kubelet[2637]: E0912 23:49:17.066839 2637 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 12 23:49:17.117761 containerd[1532]: time="2025-09-12T23:49:17.117719894Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-v9d7q,Uid:183d4803-e393-4ecb-a02b-0957868ae08b,Namespace:calico-system,Attempt:0,} returns sandbox id \"adabc4d4098d84faa5c3b7fad9b4b0a88e69d56550339db4fa93fc1aa40ce1f2\""
Sep 12 23:49:17.163894 kubelet[2637]: E0912 23:49:17.163856 2637 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 12 23:49:17.163894 kubelet[2637]: W0912 23:49:17.163885 2637 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 12 23:49:17.164126 kubelet[2637]: E0912 23:49:17.163909 2637 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 12 23:49:17.164126 kubelet[2637]: I0912 23:49:17.163941 2637 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g6fpw\" (UniqueName: \"kubernetes.io/projected/842c38e2-26c7-4aa4-8a44-a5b8ff4a773e-kube-api-access-g6fpw\") pod \"csi-node-driver-rrls7\" (UID: \"842c38e2-26c7-4aa4-8a44-a5b8ff4a773e\") " pod="calico-system/csi-node-driver-rrls7"
Sep 12 23:49:17.164759 kubelet[2637]: E0912 23:49:17.164208 2637 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 12 23:49:17.164759 kubelet[2637]: W0912 23:49:17.164219 2637 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 12 23:49:17.164759 kubelet[2637]: E0912 23:49:17.164235 2637 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 12 23:49:17.164759 kubelet[2637]: I0912 23:49:17.164276 2637 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/842c38e2-26c7-4aa4-8a44-a5b8ff4a773e-registration-dir\") pod \"csi-node-driver-rrls7\" (UID: \"842c38e2-26c7-4aa4-8a44-a5b8ff4a773e\") " pod="calico-system/csi-node-driver-rrls7"
Sep 12 23:49:17.164759 kubelet[2637]: E0912 23:49:17.164507 2637 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 12 23:49:17.164759 kubelet[2637]: W0912 23:49:17.164518 2637 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 12 23:49:17.164759 kubelet[2637]: E0912 23:49:17.164529 2637 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 12 23:49:17.164759 kubelet[2637]: I0912 23:49:17.164544 2637 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/842c38e2-26c7-4aa4-8a44-a5b8ff4a773e-kubelet-dir\") pod \"csi-node-driver-rrls7\" (UID: \"842c38e2-26c7-4aa4-8a44-a5b8ff4a773e\") " pod="calico-system/csi-node-driver-rrls7"
Sep 12 23:49:17.165035 kubelet[2637]: E0912 23:49:17.164786 2637 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 12 23:49:17.165035 kubelet[2637]: W0912 23:49:17.164797 2637 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 12 23:49:17.165035 kubelet[2637]: E0912 23:49:17.164809 2637 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 12 23:49:17.165035 kubelet[2637]: I0912 23:49:17.164850 2637 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/842c38e2-26c7-4aa4-8a44-a5b8ff4a773e-socket-dir\") pod \"csi-node-driver-rrls7\" (UID: \"842c38e2-26c7-4aa4-8a44-a5b8ff4a773e\") " pod="calico-system/csi-node-driver-rrls7"
Sep 12 23:49:17.167912 kubelet[2637]: E0912 23:49:17.167886 2637 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 12 23:49:17.168049 kubelet[2637]: W0912 23:49:17.167924 2637 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 12 23:49:17.168049 kubelet[2637]: E0912 23:49:17.167948 2637 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 12 23:49:17.168187 kubelet[2637]: E0912 23:49:17.168154 2637 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 12 23:49:17.168226 kubelet[2637]: W0912 23:49:17.168195 2637 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 12 23:49:17.168410 kubelet[2637]: E0912 23:49:17.168359 2637 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 12 23:49:17.168610 kubelet[2637]: E0912 23:49:17.168595 2637 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 12 23:49:17.168657 kubelet[2637]: W0912 23:49:17.168611 2637 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 12 23:49:17.168723 kubelet[2637]: E0912 23:49:17.168710 2637 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 12 23:49:17.169115 kubelet[2637]: E0912 23:49:17.169098 2637 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 12 23:49:17.169115 kubelet[2637]: W0912 23:49:17.169114 2637 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 12 23:49:17.169355 kubelet[2637]: E0912 23:49:17.169241 2637 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 12 23:49:17.169355 kubelet[2637]: I0912 23:49:17.169279 2637 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/842c38e2-26c7-4aa4-8a44-a5b8ff4a773e-varrun\") pod \"csi-node-driver-rrls7\" (UID: \"842c38e2-26c7-4aa4-8a44-a5b8ff4a773e\") " pod="calico-system/csi-node-driver-rrls7"
Sep 12 23:49:17.169452 kubelet[2637]: E0912 23:49:17.169367 2637 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 12 23:49:17.169452 kubelet[2637]: W0912 23:49:17.169377 2637 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 12 23:49:17.169452 kubelet[2637]: E0912 23:49:17.169410 2637 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 12 23:49:17.169564 kubelet[2637]: E0912 23:49:17.169552 2637 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 12 23:49:17.169564 kubelet[2637]: W0912 23:49:17.169563 2637 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 12 23:49:17.169638 kubelet[2637]: E0912 23:49:17.169582 2637 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 12 23:49:17.169743 kubelet[2637]: E0912 23:49:17.169733 2637 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 12 23:49:17.169743 kubelet[2637]: W0912 23:49:17.169742 2637 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 12 23:49:17.169792 kubelet[2637]: E0912 23:49:17.169752 2637 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 12 23:49:17.169949 kubelet[2637]: E0912 23:49:17.169937 2637 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 12 23:49:17.169949 kubelet[2637]: W0912 23:49:17.169948 2637 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 12 23:49:17.170067 kubelet[2637]: E0912 23:49:17.169960 2637 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 12 23:49:17.170107 kubelet[2637]: E0912 23:49:17.170094 2637 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 12 23:49:17.170107 kubelet[2637]: W0912 23:49:17.170101 2637 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 12 23:49:17.170158 kubelet[2637]: E0912 23:49:17.170108 2637 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 12 23:49:17.170305 kubelet[2637]: E0912 23:49:17.170293 2637 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 12 23:49:17.170305 kubelet[2637]: W0912 23:49:17.170305 2637 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 12 23:49:17.170366 kubelet[2637]: E0912 23:49:17.170314 2637 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 12 23:49:17.170474 kubelet[2637]: E0912 23:49:17.170464 2637 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 12 23:49:17.170474 kubelet[2637]: W0912 23:49:17.170474 2637 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 12 23:49:17.170516 kubelet[2637]: E0912 23:49:17.170482 2637 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 12 23:49:17.271147 kubelet[2637]: E0912 23:49:17.270933 2637 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 12 23:49:17.271147 kubelet[2637]: W0912 23:49:17.270958 2637 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 12 23:49:17.271147 kubelet[2637]: E0912 23:49:17.270977 2637 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 12 23:49:17.271371 kubelet[2637]: E0912 23:49:17.271333 2637 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 12 23:49:17.271371 kubelet[2637]: W0912 23:49:17.271347 2637 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 12 23:49:17.271371 kubelet[2637]: E0912 23:49:17.271364 2637 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 12 23:49:17.272428 kubelet[2637]: E0912 23:49:17.272408 2637 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 12 23:49:17.272428 kubelet[2637]: W0912 23:49:17.272421 2637 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 12 23:49:17.272428 kubelet[2637]: E0912 23:49:17.272437 2637 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 12 23:49:17.273666 kubelet[2637]: E0912 23:49:17.273442 2637 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 23:49:17.273666 kubelet[2637]: W0912 23:49:17.273659 2637 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 23:49:17.273861 kubelet[2637]: E0912 23:49:17.273762 2637 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 23:49:17.274286 kubelet[2637]: E0912 23:49:17.274266 2637 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 23:49:17.274286 kubelet[2637]: W0912 23:49:17.274282 2637 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 23:49:17.274510 kubelet[2637]: E0912 23:49:17.274402 2637 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 12 23:49:17.274510 kubelet[2637]: E0912 23:49:17.274432 2637 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 23:49:17.274814 kubelet[2637]: W0912 23:49:17.274785 2637 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 23:49:17.275221 kubelet[2637]: E0912 23:49:17.274833 2637 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 23:49:17.275221 kubelet[2637]: E0912 23:49:17.275052 2637 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 23:49:17.275221 kubelet[2637]: W0912 23:49:17.275062 2637 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 23:49:17.275221 kubelet[2637]: E0912 23:49:17.275095 2637 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 12 23:49:17.275319 kubelet[2637]: E0912 23:49:17.275284 2637 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 23:49:17.275319 kubelet[2637]: W0912 23:49:17.275293 2637 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 23:49:17.275423 kubelet[2637]: E0912 23:49:17.275359 2637 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 23:49:17.275614 kubelet[2637]: E0912 23:49:17.275579 2637 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 23:49:17.275614 kubelet[2637]: W0912 23:49:17.275593 2637 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 23:49:17.275707 kubelet[2637]: E0912 23:49:17.275682 2637 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 12 23:49:17.275746 kubelet[2637]: E0912 23:49:17.275726 2637 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 23:49:17.275746 kubelet[2637]: W0912 23:49:17.275735 2637 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 23:49:17.275839 kubelet[2637]: E0912 23:49:17.275787 2637 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 23:49:17.276087 kubelet[2637]: E0912 23:49:17.276045 2637 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 23:49:17.276087 kubelet[2637]: W0912 23:49:17.276068 2637 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 23:49:17.276155 kubelet[2637]: E0912 23:49:17.276086 2637 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 12 23:49:17.276351 kubelet[2637]: E0912 23:49:17.276295 2637 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 23:49:17.276351 kubelet[2637]: W0912 23:49:17.276307 2637 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 23:49:17.276351 kubelet[2637]: E0912 23:49:17.276319 2637 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 23:49:17.276622 kubelet[2637]: E0912 23:49:17.276590 2637 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 23:49:17.276622 kubelet[2637]: W0912 23:49:17.276610 2637 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 23:49:17.276838 kubelet[2637]: E0912 23:49:17.276732 2637 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 12 23:49:17.276898 kubelet[2637]: E0912 23:49:17.276878 2637 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 23:49:17.276898 kubelet[2637]: W0912 23:49:17.276893 2637 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 23:49:17.277074 kubelet[2637]: E0912 23:49:17.277043 2637 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 23:49:17.277394 kubelet[2637]: E0912 23:49:17.277346 2637 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 23:49:17.277394 kubelet[2637]: W0912 23:49:17.277361 2637 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 23:49:17.277456 kubelet[2637]: E0912 23:49:17.277426 2637 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 12 23:49:17.277566 kubelet[2637]: E0912 23:49:17.277544 2637 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 23:49:17.277566 kubelet[2637]: W0912 23:49:17.277559 2637 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 23:49:17.277622 kubelet[2637]: E0912 23:49:17.277579 2637 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 23:49:17.281140 kubelet[2637]: E0912 23:49:17.281027 2637 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 23:49:17.281140 kubelet[2637]: W0912 23:49:17.281046 2637 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 23:49:17.281140 kubelet[2637]: E0912 23:49:17.281092 2637 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 12 23:49:17.281276 kubelet[2637]: E0912 23:49:17.281205 2637 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 23:49:17.281276 kubelet[2637]: W0912 23:49:17.281214 2637 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 23:49:17.281276 kubelet[2637]: E0912 23:49:17.281262 2637 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 23:49:17.281634 kubelet[2637]: E0912 23:49:17.281594 2637 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 23:49:17.281634 kubelet[2637]: W0912 23:49:17.281611 2637 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 23:49:17.281702 kubelet[2637]: E0912 23:49:17.281673 2637 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 12 23:49:17.281877 kubelet[2637]: E0912 23:49:17.281833 2637 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 23:49:17.281877 kubelet[2637]: W0912 23:49:17.281844 2637 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 23:49:17.281951 kubelet[2637]: E0912 23:49:17.281932 2637 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 23:49:17.282086 kubelet[2637]: E0912 23:49:17.282072 2637 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 23:49:17.282086 kubelet[2637]: W0912 23:49:17.282086 2637 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 23:49:17.282153 kubelet[2637]: E0912 23:49:17.282110 2637 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 12 23:49:17.282319 kubelet[2637]: E0912 23:49:17.282301 2637 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 23:49:17.282319 kubelet[2637]: W0912 23:49:17.282314 2637 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 23:49:17.282382 kubelet[2637]: E0912 23:49:17.282329 2637 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 23:49:17.282496 kubelet[2637]: E0912 23:49:17.282483 2637 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 23:49:17.282496 kubelet[2637]: W0912 23:49:17.282496 2637 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 23:49:17.282547 kubelet[2637]: E0912 23:49:17.282509 2637 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 12 23:49:17.282978 kubelet[2637]: E0912 23:49:17.282949 2637 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 23:49:17.282978 kubelet[2637]: W0912 23:49:17.282964 2637 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 23:49:17.283101 kubelet[2637]: E0912 23:49:17.282994 2637 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 23:49:17.283621 kubelet[2637]: E0912 23:49:17.283195 2637 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 23:49:17.283621 kubelet[2637]: W0912 23:49:17.283209 2637 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 23:49:17.283621 kubelet[2637]: E0912 23:49:17.283219 2637 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 12 23:49:17.295403 kubelet[2637]: E0912 23:49:17.295366 2637 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 23:49:17.295403 kubelet[2637]: W0912 23:49:17.295389 2637 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 23:49:17.295403 kubelet[2637]: E0912 23:49:17.295406 2637 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 23:49:17.713612 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2629483796.mount: Deactivated successfully. Sep 12 23:49:18.228441 containerd[1532]: time="2025-09-12T23:49:18.228373090Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 23:49:18.229634 containerd[1532]: time="2025-09-12T23:49:18.229609499Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.30.3: active requests=0, bytes read=33105775" Sep 12 23:49:18.230570 containerd[1532]: time="2025-09-12T23:49:18.230551057Z" level=info msg="ImageCreate event name:\"sha256:6a1496fdc48cc0b9ab3c10aef777497484efac5df9efbfbbdf9775e9583645cb\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 23:49:18.234318 containerd[1532]: time="2025-09-12T23:49:18.233875028Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:f4a3d61ffda9c98a53adeb412c5af404ca3727a3cc2d0b4ef28d197bdd47ecaa\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 23:49:18.234550 containerd[1532]: time="2025-09-12T23:49:18.234496333Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.30.3\" with image id 
\"sha256:6a1496fdc48cc0b9ab3c10aef777497484efac5df9efbfbbdf9775e9583645cb\", repo tag \"ghcr.io/flatcar/calico/typha:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:f4a3d61ffda9c98a53adeb412c5af404ca3727a3cc2d0b4ef28d197bdd47ecaa\", size \"33105629\" in 1.445147095s" Sep 12 23:49:18.234590 containerd[1532]: time="2025-09-12T23:49:18.234551655Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.3\" returns image reference \"sha256:6a1496fdc48cc0b9ab3c10aef777497484efac5df9efbfbbdf9775e9583645cb\"" Sep 12 23:49:18.236842 containerd[1532]: time="2025-09-12T23:49:18.235785064Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.3\"" Sep 12 23:49:18.251068 containerd[1532]: time="2025-09-12T23:49:18.250013268Z" level=info msg="CreateContainer within sandbox \"7c9b5fc0fd5dd63fd07bae5e55002112ee7bfbea2ceee468731a4471757e4193\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}" Sep 12 23:49:18.258489 containerd[1532]: time="2025-09-12T23:49:18.258421361Z" level=info msg="Container 56d8ed27766f6a2409bf1948e1556ab05a65e053c145ae2da017210f079ada07: CDI devices from CRI Config.CDIDevices: []" Sep 12 23:49:18.265993 containerd[1532]: time="2025-09-12T23:49:18.265947459Z" level=info msg="CreateContainer within sandbox \"7c9b5fc0fd5dd63fd07bae5e55002112ee7bfbea2ceee468731a4471757e4193\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"56d8ed27766f6a2409bf1948e1556ab05a65e053c145ae2da017210f079ada07\"" Sep 12 23:49:18.266529 containerd[1532]: time="2025-09-12T23:49:18.266494721Z" level=info msg="StartContainer for \"56d8ed27766f6a2409bf1948e1556ab05a65e053c145ae2da017210f079ada07\"" Sep 12 23:49:18.267674 containerd[1532]: time="2025-09-12T23:49:18.267603045Z" level=info msg="connecting to shim 56d8ed27766f6a2409bf1948e1556ab05a65e053c145ae2da017210f079ada07" address="unix:///run/containerd/s/cc8b745aa5f80ea643c9a17de3fecc9f6c9cb40ea4d7857f86b5e7ea25f37eb7" protocol=ttrpc version=3 Sep 12 
23:49:18.288341 systemd[1]: Started cri-containerd-56d8ed27766f6a2409bf1948e1556ab05a65e053c145ae2da017210f079ada07.scope - libcontainer container 56d8ed27766f6a2409bf1948e1556ab05a65e053c145ae2da017210f079ada07. Sep 12 23:49:18.332252 containerd[1532]: time="2025-09-12T23:49:18.332145602Z" level=info msg="StartContainer for \"56d8ed27766f6a2409bf1948e1556ab05a65e053c145ae2da017210f079ada07\" returns successfully" Sep 12 23:49:18.468816 kubelet[2637]: E0912 23:49:18.468755 2637 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-rrls7" podUID="842c38e2-26c7-4aa4-8a44-a5b8ff4a773e" Sep 12 23:49:18.559320 kubelet[2637]: I0912 23:49:18.558618 2637 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-typha-98df8c6b-8c9v5" podStartSLOduration=1.11106458 podStartE2EDuration="2.558600935s" podCreationTimestamp="2025-09-12 23:49:16 +0000 UTC" firstStartedPulling="2025-09-12 23:49:16.787856333 +0000 UTC m=+20.439387580" lastFinishedPulling="2025-09-12 23:49:18.235392648 +0000 UTC m=+21.886923935" observedRunningTime="2025-09-12 23:49:18.558336525 +0000 UTC m=+22.209867852" watchObservedRunningTime="2025-09-12 23:49:18.558600935 +0000 UTC m=+22.210132222" Sep 12 23:49:18.587781 kubelet[2637]: E0912 23:49:18.587767 2637 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 23:49:18.587835 kubelet[2637]: W0912 23:49:18.587825 2637 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 23:49:18.587898 kubelet[2637]: E0912 23:49:18.587888 2637 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 12 23:49:18.588129 kubelet[2637]: E0912 23:49:18.588110 2637 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 23:49:18.588200 kubelet[2637]: W0912 23:49:18.588128 2637 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 23:49:18.588200 kubelet[2637]: E0912 23:49:18.588151 2637 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 23:49:18.588359 kubelet[2637]: E0912 23:49:18.588347 2637 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 23:49:18.588359 kubelet[2637]: W0912 23:49:18.588358 2637 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 23:49:18.588429 kubelet[2637]: E0912 23:49:18.588405 2637 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 12 23:49:18.588539 kubelet[2637]: E0912 23:49:18.588528 2637 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 23:49:18.588572 kubelet[2637]: W0912 23:49:18.588539 2637 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 23:49:18.588598 kubelet[2637]: E0912 23:49:18.588570 2637 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 23:49:18.588674 kubelet[2637]: E0912 23:49:18.588664 2637 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 23:49:18.588704 kubelet[2637]: W0912 23:49:18.588674 2637 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 23:49:18.588704 kubelet[2637]: E0912 23:49:18.588699 2637 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 12 23:49:18.588837 kubelet[2637]: E0912 23:49:18.588827 2637 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 23:49:18.588837 kubelet[2637]: W0912 23:49:18.588837 2637 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 23:49:18.588887 kubelet[2637]: E0912 23:49:18.588850 2637 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 23:49:18.589020 kubelet[2637]: E0912 23:49:18.589010 2637 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 23:49:18.589020 kubelet[2637]: W0912 23:49:18.589019 2637 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 23:49:18.589071 kubelet[2637]: E0912 23:49:18.589031 2637 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 12 23:49:18.589316 kubelet[2637]: E0912 23:49:18.589301 2637 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 23:49:18.589316 kubelet[2637]: W0912 23:49:18.589316 2637 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 23:49:18.589377 kubelet[2637]: E0912 23:49:18.589332 2637 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 23:49:18.589518 kubelet[2637]: E0912 23:49:18.589505 2637 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 23:49:18.589518 kubelet[2637]: W0912 23:49:18.589516 2637 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 23:49:18.589575 kubelet[2637]: E0912 23:49:18.589531 2637 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 12 23:49:18.589942 kubelet[2637]: E0912 23:49:18.589930 2637 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 23:49:18.589942 kubelet[2637]: W0912 23:49:18.589943 2637 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 23:49:18.590008 kubelet[2637]: E0912 23:49:18.589957 2637 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 23:49:18.590140 kubelet[2637]: E0912 23:49:18.590130 2637 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 23:49:18.590140 kubelet[2637]: W0912 23:49:18.590140 2637 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 23:49:18.590219 kubelet[2637]: E0912 23:49:18.590149 2637 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 12 23:49:18.590319 kubelet[2637]: E0912 23:49:18.590306 2637 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 23:49:18.590319 kubelet[2637]: W0912 23:49:18.590317 2637 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 23:49:18.590546 kubelet[2637]: E0912 23:49:18.590349 2637 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 23:49:18.590546 kubelet[2637]: E0912 23:49:18.590442 2637 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 23:49:18.590546 kubelet[2637]: W0912 23:49:18.590449 2637 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 23:49:18.590546 kubelet[2637]: E0912 23:49:18.590492 2637 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 12 23:49:18.590644 kubelet[2637]: E0912 23:49:18.590575 2637 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 23:49:18.590644 kubelet[2637]: W0912 23:49:18.590583 2637 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 23:49:18.590644 kubelet[2637]: E0912 23:49:18.590595 2637 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 23:49:18.590742 kubelet[2637]: E0912 23:49:18.590731 2637 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 23:49:18.590742 kubelet[2637]: W0912 23:49:18.590742 2637 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 23:49:18.590789 kubelet[2637]: E0912 23:49:18.590750 2637 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 12 23:49:19.259880 containerd[1532]: time="2025-09-12T23:49:19.259834024Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 23:49:19.260616 containerd[1532]: time="2025-09-12T23:49:19.260591292Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.3: active requests=0, bytes read=4266814" Sep 12 23:49:19.261899 containerd[1532]: time="2025-09-12T23:49:19.261606011Z" level=info msg="ImageCreate event name:\"sha256:29e6f31ad72882b1b817dd257df6b7981e4d7d31d872b7fe2cf102c6e2af27a5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 23:49:19.264157 containerd[1532]: time="2025-09-12T23:49:19.264129106Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:81bdfcd9dbd36624dc35354e8c181c75631ba40e6c7df5820f5f56cea36f0ef9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 23:49:19.264839 containerd[1532]: time="2025-09-12T23:49:19.264810532Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.3\" with image id \"sha256:29e6f31ad72882b1b817dd257df6b7981e4d7d31d872b7fe2cf102c6e2af27a5\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:81bdfcd9dbd36624dc35354e8c181c75631ba40e6c7df5820f5f56cea36f0ef9\", size \"5636015\" in 1.027964986s" Sep 12 23:49:19.264936 containerd[1532]: time="2025-09-12T23:49:19.264919536Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.3\" returns image reference \"sha256:29e6f31ad72882b1b817dd257df6b7981e4d7d31d872b7fe2cf102c6e2af27a5\"" Sep 12 23:49:19.268417 containerd[1532]: time="2025-09-12T23:49:19.268379027Z" level=info msg="CreateContainer within sandbox \"adabc4d4098d84faa5c3b7fad9b4b0a88e69d56550339db4fa93fc1aa40ce1f2\" for container 
&ContainerMetadata{Name:flexvol-driver,Attempt:0,}" Sep 12 23:49:19.279195 containerd[1532]: time="2025-09-12T23:49:19.278470809Z" level=info msg="Container e661b3a22402db8e23a38e7b55caa2df0872a87e4fc79911a1f062b2a993f6a7: CDI devices from CRI Config.CDIDevices: []" Sep 12 23:49:19.285483 containerd[1532]: time="2025-09-12T23:49:19.285439873Z" level=info msg="CreateContainer within sandbox \"adabc4d4098d84faa5c3b7fad9b4b0a88e69d56550339db4fa93fc1aa40ce1f2\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"e661b3a22402db8e23a38e7b55caa2df0872a87e4fc79911a1f062b2a993f6a7\"" Sep 12 23:49:19.287196 containerd[1532]: time="2025-09-12T23:49:19.286029215Z" level=info msg="StartContainer for \"e661b3a22402db8e23a38e7b55caa2df0872a87e4fc79911a1f062b2a993f6a7\"" Sep 12 23:49:19.287535 containerd[1532]: time="2025-09-12T23:49:19.287500951Z" level=info msg="connecting to shim e661b3a22402db8e23a38e7b55caa2df0872a87e4fc79911a1f062b2a993f6a7" address="unix:///run/containerd/s/d7eebc575ebbee7a2d2e4e45500a6a2b9fe28a3051821974a67e8b16d0e9c75f" protocol=ttrpc version=3 Sep 12 23:49:19.320383 systemd[1]: Started cri-containerd-e661b3a22402db8e23a38e7b55caa2df0872a87e4fc79911a1f062b2a993f6a7.scope - libcontainer container e661b3a22402db8e23a38e7b55caa2df0872a87e4fc79911a1f062b2a993f6a7. Sep 12 23:49:19.366727 systemd[1]: cri-containerd-e661b3a22402db8e23a38e7b55caa2df0872a87e4fc79911a1f062b2a993f6a7.scope: Deactivated successfully. Sep 12 23:49:19.367076 systemd[1]: cri-containerd-e661b3a22402db8e23a38e7b55caa2df0872a87e4fc79911a1f062b2a993f6a7.scope: Consumed 28ms CPU time, 6.3M memory peak, 4.5M written to disk. 
Sep 12 23:49:19.408362 containerd[1532]: time="2025-09-12T23:49:19.408289164Z" level=info msg="TaskExit event in podsandbox handler container_id:\"e661b3a22402db8e23a38e7b55caa2df0872a87e4fc79911a1f062b2a993f6a7\" id:\"e661b3a22402db8e23a38e7b55caa2df0872a87e4fc79911a1f062b2a993f6a7\" pid:3332 exited_at:{seconds:1757720959 nanos:398509793}" Sep 12 23:49:19.530254 containerd[1532]: time="2025-09-12T23:49:19.530141456Z" level=info msg="StartContainer for \"e661b3a22402db8e23a38e7b55caa2df0872a87e4fc79911a1f062b2a993f6a7\" returns successfully" Sep 12 23:49:19.542288 containerd[1532]: time="2025-09-12T23:49:19.542236794Z" level=info msg="received exit event container_id:\"e661b3a22402db8e23a38e7b55caa2df0872a87e4fc79911a1f062b2a993f6a7\" id:\"e661b3a22402db8e23a38e7b55caa2df0872a87e4fc79911a1f062b2a993f6a7\" pid:3332 exited_at:{seconds:1757720959 nanos:398509793}" Sep 12 23:49:19.547567 kubelet[2637]: I0912 23:49:19.547487 2637 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Sep 12 23:49:19.579926 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-e661b3a22402db8e23a38e7b55caa2df0872a87e4fc79911a1f062b2a993f6a7-rootfs.mount: Deactivated successfully. 
Sep 12 23:49:20.469060 kubelet[2637]: E0912 23:49:20.468984 2637 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-rrls7" podUID="842c38e2-26c7-4aa4-8a44-a5b8ff4a773e" Sep 12 23:49:20.557815 containerd[1532]: time="2025-09-12T23:49:20.557767394Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.3\"" Sep 12 23:49:22.470506 kubelet[2637]: E0912 23:49:22.469530 2637 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-rrls7" podUID="842c38e2-26c7-4aa4-8a44-a5b8ff4a773e" Sep 12 23:49:23.330091 containerd[1532]: time="2025-09-12T23:49:23.329460032Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 23:49:23.330091 containerd[1532]: time="2025-09-12T23:49:23.329923127Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.30.3: active requests=0, bytes read=65913477" Sep 12 23:49:23.330699 containerd[1532]: time="2025-09-12T23:49:23.330673310Z" level=info msg="ImageCreate event name:\"sha256:7077a1dc632ee598cbfa626f9e3c9bca5b20c0d1e1e557995890125b2e8d2e23\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 23:49:23.332745 containerd[1532]: time="2025-09-12T23:49:23.332711735Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:73d1e391050490d54e5bee8ff2b1a50a8be1746c98dc530361b00e8c0ab63f87\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 23:49:23.333645 containerd[1532]: time="2025-09-12T23:49:23.333533561Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.30.3\" 
with image id \"sha256:7077a1dc632ee598cbfa626f9e3c9bca5b20c0d1e1e557995890125b2e8d2e23\", repo tag \"ghcr.io/flatcar/calico/cni:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:73d1e391050490d54e5bee8ff2b1a50a8be1746c98dc530361b00e8c0ab63f87\", size \"67282718\" in 2.775696605s" Sep 12 23:49:23.333645 containerd[1532]: time="2025-09-12T23:49:23.333563282Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.3\" returns image reference \"sha256:7077a1dc632ee598cbfa626f9e3c9bca5b20c0d1e1e557995890125b2e8d2e23\"" Sep 12 23:49:23.336745 containerd[1532]: time="2025-09-12T23:49:23.336717823Z" level=info msg="CreateContainer within sandbox \"adabc4d4098d84faa5c3b7fad9b4b0a88e69d56550339db4fa93fc1aa40ce1f2\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" Sep 12 23:49:23.343705 containerd[1532]: time="2025-09-12T23:49:23.343671524Z" level=info msg="Container becb418db2599acd6d647b408ef3e0a7764c64810ef781a27e85e5afec13e713: CDI devices from CRI Config.CDIDevices: []" Sep 12 23:49:23.352952 containerd[1532]: time="2025-09-12T23:49:23.352830295Z" level=info msg="CreateContainer within sandbox \"adabc4d4098d84faa5c3b7fad9b4b0a88e69d56550339db4fa93fc1aa40ce1f2\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"becb418db2599acd6d647b408ef3e0a7764c64810ef781a27e85e5afec13e713\"" Sep 12 23:49:23.353440 containerd[1532]: time="2025-09-12T23:49:23.353412434Z" level=info msg="StartContainer for \"becb418db2599acd6d647b408ef3e0a7764c64810ef781a27e85e5afec13e713\"" Sep 12 23:49:23.354777 containerd[1532]: time="2025-09-12T23:49:23.354753317Z" level=info msg="connecting to shim becb418db2599acd6d647b408ef3e0a7764c64810ef781a27e85e5afec13e713" address="unix:///run/containerd/s/d7eebc575ebbee7a2d2e4e45500a6a2b9fe28a3051821974a67e8b16d0e9c75f" protocol=ttrpc version=3 Sep 12 23:49:23.386376 systemd[1]: Started cri-containerd-becb418db2599acd6d647b408ef3e0a7764c64810ef781a27e85e5afec13e713.scope - libcontainer container 
becb418db2599acd6d647b408ef3e0a7764c64810ef781a27e85e5afec13e713. Sep 12 23:49:23.422643 containerd[1532]: time="2025-09-12T23:49:23.422601195Z" level=info msg="StartContainer for \"becb418db2599acd6d647b408ef3e0a7764c64810ef781a27e85e5afec13e713\" returns successfully" Sep 12 23:49:23.937330 systemd[1]: cri-containerd-becb418db2599acd6d647b408ef3e0a7764c64810ef781a27e85e5afec13e713.scope: Deactivated successfully. Sep 12 23:49:23.938432 systemd[1]: cri-containerd-becb418db2599acd6d647b408ef3e0a7764c64810ef781a27e85e5afec13e713.scope: Consumed 470ms CPU time, 173.1M memory peak, 3.2M read from disk, 165.8M written to disk. Sep 12 23:49:23.940424 containerd[1532]: time="2025-09-12T23:49:23.939983096Z" level=info msg="received exit event container_id:\"becb418db2599acd6d647b408ef3e0a7764c64810ef781a27e85e5afec13e713\" id:\"becb418db2599acd6d647b408ef3e0a7764c64810ef781a27e85e5afec13e713\" pid:3392 exited_at:{seconds:1757720963 nanos:939788850}" Sep 12 23:49:23.940543 containerd[1532]: time="2025-09-12T23:49:23.940140261Z" level=info msg="TaskExit event in podsandbox handler container_id:\"becb418db2599acd6d647b408ef3e0a7764c64810ef781a27e85e5afec13e713\" id:\"becb418db2599acd6d647b408ef3e0a7764c64810ef781a27e85e5afec13e713\" pid:3392 exited_at:{seconds:1757720963 nanos:939788850}" Sep 12 23:49:23.958425 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-becb418db2599acd6d647b408ef3e0a7764c64810ef781a27e85e5afec13e713-rootfs.mount: Deactivated successfully. Sep 12 23:49:24.059945 kubelet[2637]: I0912 23:49:24.059878 2637 kubelet_node_status.go:488] "Fast updating node status as it just became ready" Sep 12 23:49:24.093908 systemd[1]: Created slice kubepods-burstable-pod072481d6_9cb5_4023_9a42_06a2409e0fc7.slice - libcontainer container kubepods-burstable-pod072481d6_9cb5_4023_9a42_06a2409e0fc7.slice. 
Sep 12 23:49:24.105730 systemd[1]: Created slice kubepods-besteffort-pod882830d1_c02f_4c0f_bbc9_595b7b524296.slice - libcontainer container kubepods-besteffort-pod882830d1_c02f_4c0f_bbc9_595b7b524296.slice. Sep 12 23:49:24.119438 systemd[1]: Created slice kubepods-burstable-pod1e734e6c_e281_45ad_9b05_72f8ed11eb0c.slice - libcontainer container kubepods-burstable-pod1e734e6c_e281_45ad_9b05_72f8ed11eb0c.slice. Sep 12 23:49:24.124900 systemd[1]: Created slice kubepods-besteffort-pod5c790fd4_d385_4fdb_9724_a15ea9c3127a.slice - libcontainer container kubepods-besteffort-pod5c790fd4_d385_4fdb_9724_a15ea9c3127a.slice. Sep 12 23:49:24.130137 systemd[1]: Created slice kubepods-besteffort-pod06064bbe_14bc_4e90_b7dd_39881485ebd6.slice - libcontainer container kubepods-besteffort-pod06064bbe_14bc_4e90_b7dd_39881485ebd6.slice. Sep 12 23:49:24.154130 kubelet[2637]: I0912 23:49:24.154093 2637 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kdgz2\" (UniqueName: \"kubernetes.io/projected/1e734e6c-e281-45ad-9b05-72f8ed11eb0c-kube-api-access-kdgz2\") pod \"coredns-7c65d6cfc9-67mtl\" (UID: \"1e734e6c-e281-45ad-9b05-72f8ed11eb0c\") " pod="kube-system/coredns-7c65d6cfc9-67mtl" Sep 12 23:49:24.154315 kubelet[2637]: I0912 23:49:24.154300 2637 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-key-pair\" (UniqueName: \"kubernetes.io/secret/06064bbe-14bc-4e90-b7dd-39881485ebd6-goldmane-key-pair\") pod \"goldmane-7988f88666-9xh54\" (UID: \"06064bbe-14bc-4e90-b7dd-39881485ebd6\") " pod="calico-system/goldmane-7988f88666-9xh54" Sep 12 23:49:24.154408 kubelet[2637]: I0912 23:49:24.154395 2637 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sktsw\" (UniqueName: \"kubernetes.io/projected/9872f117-4c4f-416b-90bd-ac2c750933c0-kube-api-access-sktsw\") pod \"whisker-6d5b476d9b-2l5bk\" (UID: 
\"9872f117-4c4f-416b-90bd-ac2c750933c0\") " pod="calico-system/whisker-6d5b476d9b-2l5bk" Sep 12 23:49:24.154493 kubelet[2637]: I0912 23:49:24.154479 2637 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/06064bbe-14bc-4e90-b7dd-39881485ebd6-config\") pod \"goldmane-7988f88666-9xh54\" (UID: \"06064bbe-14bc-4e90-b7dd-39881485ebd6\") " pod="calico-system/goldmane-7988f88666-9xh54" Sep 12 23:49:24.154564 kubelet[2637]: I0912 23:49:24.154552 2637 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8f7b9\" (UniqueName: \"kubernetes.io/projected/882830d1-c02f-4c0f-bbc9-595b7b524296-kube-api-access-8f7b9\") pod \"calico-apiserver-bfb846b8b-c7k55\" (UID: \"882830d1-c02f-4c0f-bbc9-595b7b524296\") " pod="calico-apiserver/calico-apiserver-bfb846b8b-c7k55" Sep 12 23:49:24.154635 kubelet[2637]: I0912 23:49:24.154622 2637 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cfphq\" (UniqueName: \"kubernetes.io/projected/5c790fd4-d385-4fdb-9724-a15ea9c3127a-kube-api-access-cfphq\") pod \"calico-kube-controllers-b974db775-2dqk7\" (UID: \"5c790fd4-d385-4fdb-9724-a15ea9c3127a\") " pod="calico-system/calico-kube-controllers-b974db775-2dqk7" Sep 12 23:49:24.154713 kubelet[2637]: I0912 23:49:24.154701 2637 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/072481d6-9cb5-4023-9a42-06a2409e0fc7-config-volume\") pod \"coredns-7c65d6cfc9-kxdqx\" (UID: \"072481d6-9cb5-4023-9a42-06a2409e0fc7\") " pod="kube-system/coredns-7c65d6cfc9-kxdqx" Sep 12 23:49:24.154796 kubelet[2637]: I0912 23:49:24.154783 2637 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pfmgm\" (UniqueName: 
\"kubernetes.io/projected/06064bbe-14bc-4e90-b7dd-39881485ebd6-kube-api-access-pfmgm\") pod \"goldmane-7988f88666-9xh54\" (UID: \"06064bbe-14bc-4e90-b7dd-39881485ebd6\") " pod="calico-system/goldmane-7988f88666-9xh54" Sep 12 23:49:24.154862 kubelet[2637]: I0912 23:49:24.154851 2637 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tk6tk\" (UniqueName: \"kubernetes.io/projected/072481d6-9cb5-4023-9a42-06a2409e0fc7-kube-api-access-tk6tk\") pod \"coredns-7c65d6cfc9-kxdqx\" (UID: \"072481d6-9cb5-4023-9a42-06a2409e0fc7\") " pod="kube-system/coredns-7c65d6cfc9-kxdqx" Sep 12 23:49:24.154932 kubelet[2637]: I0912 23:49:24.154922 2637 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/1e734e6c-e281-45ad-9b05-72f8ed11eb0c-config-volume\") pod \"coredns-7c65d6cfc9-67mtl\" (UID: \"1e734e6c-e281-45ad-9b05-72f8ed11eb0c\") " pod="kube-system/coredns-7c65d6cfc9-67mtl" Sep 12 23:49:24.155008 kubelet[2637]: I0912 23:49:24.154996 2637 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/9872f117-4c4f-416b-90bd-ac2c750933c0-whisker-backend-key-pair\") pod \"whisker-6d5b476d9b-2l5bk\" (UID: \"9872f117-4c4f-416b-90bd-ac2c750933c0\") " pod="calico-system/whisker-6d5b476d9b-2l5bk" Sep 12 23:49:24.155088 kubelet[2637]: I0912 23:49:24.155076 2637 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/882830d1-c02f-4c0f-bbc9-595b7b524296-calico-apiserver-certs\") pod \"calico-apiserver-bfb846b8b-c7k55\" (UID: \"882830d1-c02f-4c0f-bbc9-595b7b524296\") " pod="calico-apiserver/calico-apiserver-bfb846b8b-c7k55" Sep 12 23:49:24.155153 kubelet[2637]: I0912 23:49:24.155142 2637 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/8ade99a4-314a-447c-b396-ac0956b0fe2d-calico-apiserver-certs\") pod \"calico-apiserver-bfb846b8b-c8q8g\" (UID: \"8ade99a4-314a-447c-b396-ac0956b0fe2d\") " pod="calico-apiserver/calico-apiserver-bfb846b8b-c8q8g" Sep 12 23:49:24.155256 kubelet[2637]: I0912 23:49:24.155241 2637 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/9872f117-4c4f-416b-90bd-ac2c750933c0-whisker-ca-bundle\") pod \"whisker-6d5b476d9b-2l5bk\" (UID: \"9872f117-4c4f-416b-90bd-ac2c750933c0\") " pod="calico-system/whisker-6d5b476d9b-2l5bk" Sep 12 23:49:24.155332 kubelet[2637]: I0912 23:49:24.155321 2637 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/06064bbe-14bc-4e90-b7dd-39881485ebd6-goldmane-ca-bundle\") pod \"goldmane-7988f88666-9xh54\" (UID: \"06064bbe-14bc-4e90-b7dd-39881485ebd6\") " pod="calico-system/goldmane-7988f88666-9xh54" Sep 12 23:49:24.155435 kubelet[2637]: I0912 23:49:24.155412 2637 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-57722\" (UniqueName: \"kubernetes.io/projected/8ade99a4-314a-447c-b396-ac0956b0fe2d-kube-api-access-57722\") pod \"calico-apiserver-bfb846b8b-c8q8g\" (UID: \"8ade99a4-314a-447c-b396-ac0956b0fe2d\") " pod="calico-apiserver/calico-apiserver-bfb846b8b-c8q8g" Sep 12 23:49:24.155516 kubelet[2637]: I0912 23:49:24.155503 2637 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/5c790fd4-d385-4fdb-9724-a15ea9c3127a-tigera-ca-bundle\") pod \"calico-kube-controllers-b974db775-2dqk7\" (UID: \"5c790fd4-d385-4fdb-9724-a15ea9c3127a\") " 
pod="calico-system/calico-kube-controllers-b974db775-2dqk7" Sep 12 23:49:24.159548 systemd[1]: Created slice kubepods-besteffort-pod9872f117_4c4f_416b_90bd_ac2c750933c0.slice - libcontainer container kubepods-besteffort-pod9872f117_4c4f_416b_90bd_ac2c750933c0.slice. Sep 12 23:49:24.164744 systemd[1]: Created slice kubepods-besteffort-pod8ade99a4_314a_447c_b396_ac0956b0fe2d.slice - libcontainer container kubepods-besteffort-pod8ade99a4_314a_447c_b396_ac0956b0fe2d.slice. Sep 12 23:49:24.402054 containerd[1532]: time="2025-09-12T23:49:24.402002842Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-kxdqx,Uid:072481d6-9cb5-4023-9a42-06a2409e0fc7,Namespace:kube-system,Attempt:0,}" Sep 12 23:49:24.414202 containerd[1532]: time="2025-09-12T23:49:24.414021729Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-bfb846b8b-c7k55,Uid:882830d1-c02f-4c0f-bbc9-595b7b524296,Namespace:calico-apiserver,Attempt:0,}" Sep 12 23:49:24.437241 containerd[1532]: time="2025-09-12T23:49:24.436966189Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-67mtl,Uid:1e734e6c-e281-45ad-9b05-72f8ed11eb0c,Namespace:kube-system,Attempt:0,}" Sep 12 23:49:24.437241 containerd[1532]: time="2025-09-12T23:49:24.436987470Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-b974db775-2dqk7,Uid:5c790fd4-d385-4fdb-9724-a15ea9c3127a,Namespace:calico-system,Attempt:0,}" Sep 12 23:49:24.454082 containerd[1532]: time="2025-09-12T23:49:24.454043911Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-7988f88666-9xh54,Uid:06064bbe-14bc-4e90-b7dd-39881485ebd6,Namespace:calico-system,Attempt:0,}" Sep 12 23:49:24.463440 containerd[1532]: time="2025-09-12T23:49:24.463286673Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-6d5b476d9b-2l5bk,Uid:9872f117-4c4f-416b-90bd-ac2c750933c0,Namespace:calico-system,Attempt:0,}" Sep 12 23:49:24.468460 containerd[1532]: 
time="2025-09-12T23:49:24.468424110Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-bfb846b8b-c8q8g,Uid:8ade99a4-314a-447c-b396-ac0956b0fe2d,Namespace:calico-apiserver,Attempt:0,}" Sep 12 23:49:24.475469 systemd[1]: Created slice kubepods-besteffort-pod842c38e2_26c7_4aa4_8a44_a5b8ff4a773e.slice - libcontainer container kubepods-besteffort-pod842c38e2_26c7_4aa4_8a44_a5b8ff4a773e.slice. Sep 12 23:49:24.482207 containerd[1532]: time="2025-09-12T23:49:24.481758517Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-rrls7,Uid:842c38e2-26c7-4aa4-8a44-a5b8ff4a773e,Namespace:calico-system,Attempt:0,}" Sep 12 23:49:24.555602 containerd[1532]: time="2025-09-12T23:49:24.555530169Z" level=error msg="Failed to destroy network for sandbox \"7c37a755da9f3e7d34a76b48b23b6bce5513c3fe915e1b888699a9106ba44fdb\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 12 23:49:24.557704 containerd[1532]: time="2025-09-12T23:49:24.557548391Z" level=error msg="Failed to destroy network for sandbox \"0fafdd98b246678084782b3428993ae821f6be16023e1b6dd8c7dc6064c8b271\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 12 23:49:24.575797 containerd[1532]: time="2025-09-12T23:49:24.575730906Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-kxdqx,Uid:072481d6-9cb5-4023-9a42-06a2409e0fc7,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"7c37a755da9f3e7d34a76b48b23b6bce5513c3fe915e1b888699a9106ba44fdb\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted 
/var/lib/calico/" Sep 12 23:49:24.575931 containerd[1532]: time="2025-09-12T23:49:24.575900711Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.3\"" Sep 12 23:49:24.576808 kubelet[2637]: E0912 23:49:24.576371 2637 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7c37a755da9f3e7d34a76b48b23b6bce5513c3fe915e1b888699a9106ba44fdb\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 12 23:49:24.579960 kubelet[2637]: E0912 23:49:24.579892 2637 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7c37a755da9f3e7d34a76b48b23b6bce5513c3fe915e1b888699a9106ba44fdb\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7c65d6cfc9-kxdqx" Sep 12 23:49:24.579960 kubelet[2637]: E0912 23:49:24.579958 2637 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7c37a755da9f3e7d34a76b48b23b6bce5513c3fe915e1b888699a9106ba44fdb\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7c65d6cfc9-kxdqx" Sep 12 23:49:24.580288 kubelet[2637]: E0912 23:49:24.580007 2637 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-7c65d6cfc9-kxdqx_kube-system(072481d6-9cb5-4023-9a42-06a2409e0fc7)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-7c65d6cfc9-kxdqx_kube-system(072481d6-9cb5-4023-9a42-06a2409e0fc7)\\\": rpc error: code = Unknown 
desc = failed to setup network for sandbox \\\"7c37a755da9f3e7d34a76b48b23b6bce5513c3fe915e1b888699a9106ba44fdb\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7c65d6cfc9-kxdqx" podUID="072481d6-9cb5-4023-9a42-06a2409e0fc7" Sep 12 23:49:24.590403 containerd[1532]: time="2025-09-12T23:49:24.590343832Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-bfb846b8b-c7k55,Uid:882830d1-c02f-4c0f-bbc9-595b7b524296,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"0fafdd98b246678084782b3428993ae821f6be16023e1b6dd8c7dc6064c8b271\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 12 23:49:24.590615 kubelet[2637]: E0912 23:49:24.590575 2637 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0fafdd98b246678084782b3428993ae821f6be16023e1b6dd8c7dc6064c8b271\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 12 23:49:24.590672 kubelet[2637]: E0912 23:49:24.590637 2637 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0fafdd98b246678084782b3428993ae821f6be16023e1b6dd8c7dc6064c8b271\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-bfb846b8b-c7k55" Sep 12 23:49:24.590672 kubelet[2637]: E0912 23:49:24.590656 2637 
kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0fafdd98b246678084782b3428993ae821f6be16023e1b6dd8c7dc6064c8b271\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-bfb846b8b-c7k55" Sep 12 23:49:24.590727 kubelet[2637]: E0912 23:49:24.590700 2637 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-bfb846b8b-c7k55_calico-apiserver(882830d1-c02f-4c0f-bbc9-595b7b524296)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-bfb846b8b-c7k55_calico-apiserver(882830d1-c02f-4c0f-bbc9-595b7b524296)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"0fafdd98b246678084782b3428993ae821f6be16023e1b6dd8c7dc6064c8b271\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-bfb846b8b-c7k55" podUID="882830d1-c02f-4c0f-bbc9-595b7b524296" Sep 12 23:49:24.610996 containerd[1532]: time="2025-09-12T23:49:24.610948861Z" level=error msg="Failed to destroy network for sandbox \"ad28f49aa562098c775c90b7c88dcaa7166d02e16f754fe0f0f60972a43760db\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 12 23:49:24.612621 containerd[1532]: time="2025-09-12T23:49:24.612534950Z" level=error msg="Failed to destroy network for sandbox \"92c9049f4d272c257ad5da8fc317465a7edd0a4d080e9cbdf0a36d019d87cbe8\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the 
calico/node container is running and has mounted /var/lib/calico/" Sep 12 23:49:24.618273 containerd[1532]: time="2025-09-12T23:49:24.618223164Z" level=error msg="Failed to destroy network for sandbox \"db8effecae6497e34e664729403ecdbf8366407ebcef3450ac291a3147268ff1\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 12 23:49:24.622275 containerd[1532]: time="2025-09-12T23:49:24.622238486Z" level=error msg="Failed to destroy network for sandbox \"3e23d92886680f7f14ee7be72c515bc0f4faf3069be07980fbcecc9a6ed87001\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 12 23:49:24.624634 containerd[1532]: time="2025-09-12T23:49:24.624595718Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-67mtl,Uid:1e734e6c-e281-45ad-9b05-72f8ed11eb0c,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"ad28f49aa562098c775c90b7c88dcaa7166d02e16f754fe0f0f60972a43760db\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 12 23:49:24.625738 kubelet[2637]: E0912 23:49:24.625289 2637 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ad28f49aa562098c775c90b7c88dcaa7166d02e16f754fe0f0f60972a43760db\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 12 23:49:24.625738 kubelet[2637]: E0912 23:49:24.625354 2637 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" 
err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ad28f49aa562098c775c90b7c88dcaa7166d02e16f754fe0f0f60972a43760db\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7c65d6cfc9-67mtl" Sep 12 23:49:24.625738 kubelet[2637]: E0912 23:49:24.625381 2637 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ad28f49aa562098c775c90b7c88dcaa7166d02e16f754fe0f0f60972a43760db\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7c65d6cfc9-67mtl" Sep 12 23:49:24.625871 kubelet[2637]: E0912 23:49:24.625425 2637 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-7c65d6cfc9-67mtl_kube-system(1e734e6c-e281-45ad-9b05-72f8ed11eb0c)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-7c65d6cfc9-67mtl_kube-system(1e734e6c-e281-45ad-9b05-72f8ed11eb0c)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"ad28f49aa562098c775c90b7c88dcaa7166d02e16f754fe0f0f60972a43760db\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7c65d6cfc9-67mtl" podUID="1e734e6c-e281-45ad-9b05-72f8ed11eb0c" Sep 12 23:49:24.633519 containerd[1532]: time="2025-09-12T23:49:24.633470429Z" level=error msg="Failed to destroy network for sandbox \"de39d1637dcc304377eeedd1aace9892ea977513b4dcdc074a5d51c3f5651f74\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container 
is running and has mounted /var/lib/calico/" Sep 12 23:49:24.638656 containerd[1532]: time="2025-09-12T23:49:24.638550784Z" level=error msg="Failed to destroy network for sandbox \"67c30da46426b4a74803ab94e5d7f154d8da851ba241d88130b87299aaeec1c8\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 12 23:49:24.642431 containerd[1532]: time="2025-09-12T23:49:24.642382501Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-6d5b476d9b-2l5bk,Uid:9872f117-4c4f-416b-90bd-ac2c750933c0,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"92c9049f4d272c257ad5da8fc317465a7edd0a4d080e9cbdf0a36d019d87cbe8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 12 23:49:24.642683 kubelet[2637]: E0912 23:49:24.642645 2637 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"92c9049f4d272c257ad5da8fc317465a7edd0a4d080e9cbdf0a36d019d87cbe8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 12 23:49:24.642765 kubelet[2637]: E0912 23:49:24.642716 2637 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"92c9049f4d272c257ad5da8fc317465a7edd0a4d080e9cbdf0a36d019d87cbe8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-6d5b476d9b-2l5bk" Sep 12 23:49:24.642765 kubelet[2637]: E0912 
23:49:24.642742 2637 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"92c9049f4d272c257ad5da8fc317465a7edd0a4d080e9cbdf0a36d019d87cbe8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-6d5b476d9b-2l5bk" Sep 12 23:49:24.642840 kubelet[2637]: E0912 23:49:24.642806 2637 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"whisker-6d5b476d9b-2l5bk_calico-system(9872f117-4c4f-416b-90bd-ac2c750933c0)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"whisker-6d5b476d9b-2l5bk_calico-system(9872f117-4c4f-416b-90bd-ac2c750933c0)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"92c9049f4d272c257ad5da8fc317465a7edd0a4d080e9cbdf0a36d019d87cbe8\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-6d5b476d9b-2l5bk" podUID="9872f117-4c4f-416b-90bd-ac2c750933c0" Sep 12 23:49:24.644100 containerd[1532]: time="2025-09-12T23:49:24.644011551Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-rrls7,Uid:842c38e2-26c7-4aa4-8a44-a5b8ff4a773e,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"db8effecae6497e34e664729403ecdbf8366407ebcef3450ac291a3147268ff1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 12 23:49:24.644608 kubelet[2637]: E0912 23:49:24.644579 2637 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to 
setup network for sandbox \"db8effecae6497e34e664729403ecdbf8366407ebcef3450ac291a3147268ff1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 12 23:49:24.644763 kubelet[2637]: E0912 23:49:24.644736 2637 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"db8effecae6497e34e664729403ecdbf8366407ebcef3450ac291a3147268ff1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-rrls7" Sep 12 23:49:24.644806 containerd[1532]: time="2025-09-12T23:49:24.644779574Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-7988f88666-9xh54,Uid:06064bbe-14bc-4e90-b7dd-39881485ebd6,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"3e23d92886680f7f14ee7be72c515bc0f4faf3069be07980fbcecc9a6ed87001\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 12 23:49:24.645099 kubelet[2637]: E0912 23:49:24.644871 2637 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"db8effecae6497e34e664729403ecdbf8366407ebcef3450ac291a3147268ff1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-rrls7" Sep 12 23:49:24.645099 kubelet[2637]: E0912 23:49:24.644981 2637 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for 
\"csi-node-driver-rrls7_calico-system(842c38e2-26c7-4aa4-8a44-a5b8ff4a773e)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-rrls7_calico-system(842c38e2-26c7-4aa4-8a44-a5b8ff4a773e)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"db8effecae6497e34e664729403ecdbf8366407ebcef3450ac291a3147268ff1\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-rrls7" podUID="842c38e2-26c7-4aa4-8a44-a5b8ff4a773e" Sep 12 23:49:24.645223 kubelet[2637]: E0912 23:49:24.645018 2637 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3e23d92886680f7f14ee7be72c515bc0f4faf3069be07980fbcecc9a6ed87001\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 12 23:49:24.645250 kubelet[2637]: E0912 23:49:24.645234 2637 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3e23d92886680f7f14ee7be72c515bc0f4faf3069be07980fbcecc9a6ed87001\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-7988f88666-9xh54" Sep 12 23:49:24.645274 kubelet[2637]: E0912 23:49:24.645256 2637 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3e23d92886680f7f14ee7be72c515bc0f4faf3069be07980fbcecc9a6ed87001\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has 
mounted /var/lib/calico/" pod="calico-system/goldmane-7988f88666-9xh54" Sep 12 23:49:24.645417 kubelet[2637]: E0912 23:49:24.645388 2637 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"goldmane-7988f88666-9xh54_calico-system(06064bbe-14bc-4e90-b7dd-39881485ebd6)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"goldmane-7988f88666-9xh54_calico-system(06064bbe-14bc-4e90-b7dd-39881485ebd6)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"3e23d92886680f7f14ee7be72c515bc0f4faf3069be07980fbcecc9a6ed87001\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-7988f88666-9xh54" podUID="06064bbe-14bc-4e90-b7dd-39881485ebd6" Sep 12 23:49:24.646048 containerd[1532]: time="2025-09-12T23:49:24.646003892Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-bfb846b8b-c8q8g,Uid:8ade99a4-314a-447c-b396-ac0956b0fe2d,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"de39d1637dcc304377eeedd1aace9892ea977513b4dcdc074a5d51c3f5651f74\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 12 23:49:24.646277 kubelet[2637]: E0912 23:49:24.646142 2637 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"de39d1637dcc304377eeedd1aace9892ea977513b4dcdc074a5d51c3f5651f74\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 12 23:49:24.646277 kubelet[2637]: E0912 23:49:24.646188 2637 
kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"de39d1637dcc304377eeedd1aace9892ea977513b4dcdc074a5d51c3f5651f74\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-bfb846b8b-c8q8g" Sep 12 23:49:24.646277 kubelet[2637]: E0912 23:49:24.646203 2637 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"de39d1637dcc304377eeedd1aace9892ea977513b4dcdc074a5d51c3f5651f74\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-bfb846b8b-c8q8g" Sep 12 23:49:24.646543 kubelet[2637]: E0912 23:49:24.646458 2637 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-bfb846b8b-c8q8g_calico-apiserver(8ade99a4-314a-447c-b396-ac0956b0fe2d)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-bfb846b8b-c8q8g_calico-apiserver(8ade99a4-314a-447c-b396-ac0956b0fe2d)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"de39d1637dcc304377eeedd1aace9892ea977513b4dcdc074a5d51c3f5651f74\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-bfb846b8b-c8q8g" podUID="8ade99a4-314a-447c-b396-ac0956b0fe2d" Sep 12 23:49:24.647100 containerd[1532]: time="2025-09-12T23:49:24.647055204Z" level=error msg="RunPodSandbox for 
&PodSandboxMetadata{Name:calico-kube-controllers-b974db775-2dqk7,Uid:5c790fd4-d385-4fdb-9724-a15ea9c3127a,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"67c30da46426b4a74803ab94e5d7f154d8da851ba241d88130b87299aaeec1c8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 12 23:49:24.647343 kubelet[2637]: E0912 23:49:24.647318 2637 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"67c30da46426b4a74803ab94e5d7f154d8da851ba241d88130b87299aaeec1c8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 12 23:49:24.647396 kubelet[2637]: E0912 23:49:24.647355 2637 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"67c30da46426b4a74803ab94e5d7f154d8da851ba241d88130b87299aaeec1c8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-b974db775-2dqk7" Sep 12 23:49:24.647396 kubelet[2637]: E0912 23:49:24.647377 2637 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"67c30da46426b4a74803ab94e5d7f154d8da851ba241d88130b87299aaeec1c8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-b974db775-2dqk7" Sep 12 23:49:24.647459 kubelet[2637]: E0912 23:49:24.647407 
2637 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-b974db775-2dqk7_calico-system(5c790fd4-d385-4fdb-9724-a15ea9c3127a)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-b974db775-2dqk7_calico-system(5c790fd4-d385-4fdb-9724-a15ea9c3127a)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"67c30da46426b4a74803ab94e5d7f154d8da851ba241d88130b87299aaeec1c8\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-b974db775-2dqk7" podUID="5c790fd4-d385-4fdb-9724-a15ea9c3127a" Sep 12 23:49:25.345461 systemd[1]: run-netns-cni\x2d0d1a9519\x2da4ce\x2d5b4d\x2d0bb3\x2d11a3d8b28558.mount: Deactivated successfully. Sep 12 23:49:25.345567 systemd[1]: run-netns-cni\x2d2cbdb870\x2d50e0\x2d482a\x2da8cd\x2d97eba61c7835.mount: Deactivated successfully. Sep 12 23:49:25.345612 systemd[1]: run-netns-cni\x2ded8f4b97\x2d72b3\x2d5596\x2d4e29\x2d1343bfdefc21.mount: Deactivated successfully. Sep 12 23:49:25.345671 systemd[1]: run-netns-cni\x2d658725a5\x2dcafb\x2d4cb1\x2df47d\x2d94bc36564275.mount: Deactivated successfully. Sep 12 23:49:27.788051 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount876620604.mount: Deactivated successfully. 
Sep 12 23:49:28.075922 containerd[1532]: time="2025-09-12T23:49:28.075577493Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.30.3: active requests=0, bytes read=151100457" Sep 12 23:49:28.077290 containerd[1532]: time="2025-09-12T23:49:28.077237376Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 23:49:28.078002 containerd[1532]: time="2025-09-12T23:49:28.077956955Z" level=info msg="ImageCreate event name:\"sha256:2b8abd2140fc4464ed664d225fe38e5b90bbfcf62996b484b0fc0e0537b6a4a9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 23:49:28.081886 containerd[1532]: time="2025-09-12T23:49:28.081845897Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:bcb8146fcaeced1e1c88fad3eaa697f1680746bd23c3e7e8d4535bc484c6f2a1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 23:49:28.082696 containerd[1532]: time="2025-09-12T23:49:28.082646558Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.30.3\" with image id \"sha256:2b8abd2140fc4464ed664d225fe38e5b90bbfcf62996b484b0fc0e0537b6a4a9\", repo tag \"ghcr.io/flatcar/calico/node:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/node@sha256:bcb8146fcaeced1e1c88fad3eaa697f1680746bd23c3e7e8d4535bc484c6f2a1\", size \"151100319\" in 3.506713125s" Sep 12 23:49:28.082696 containerd[1532]: time="2025-09-12T23:49:28.082686199Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.3\" returns image reference \"sha256:2b8abd2140fc4464ed664d225fe38e5b90bbfcf62996b484b0fc0e0537b6a4a9\"" Sep 12 23:49:28.097323 containerd[1532]: time="2025-09-12T23:49:28.097268180Z" level=info msg="CreateContainer within sandbox \"adabc4d4098d84faa5c3b7fad9b4b0a88e69d56550339db4fa93fc1aa40ce1f2\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Sep 12 23:49:28.118223 containerd[1532]: time="2025-09-12T23:49:28.117912680Z" level=info msg="Container 
6fea98c82994e2b87908732c5e3efd6922aa7af9078155df606c6947d9bb9e8b: CDI devices from CRI Config.CDIDevices: []" Sep 12 23:49:28.133409 containerd[1532]: time="2025-09-12T23:49:28.133183720Z" level=info msg="CreateContainer within sandbox \"adabc4d4098d84faa5c3b7fad9b4b0a88e69d56550339db4fa93fc1aa40ce1f2\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"6fea98c82994e2b87908732c5e3efd6922aa7af9078155df606c6947d9bb9e8b\"" Sep 12 23:49:28.133881 containerd[1532]: time="2025-09-12T23:49:28.133851017Z" level=info msg="StartContainer for \"6fea98c82994e2b87908732c5e3efd6922aa7af9078155df606c6947d9bb9e8b\"" Sep 12 23:49:28.135861 containerd[1532]: time="2025-09-12T23:49:28.135828709Z" level=info msg="connecting to shim 6fea98c82994e2b87908732c5e3efd6922aa7af9078155df606c6947d9bb9e8b" address="unix:///run/containerd/s/d7eebc575ebbee7a2d2e4e45500a6a2b9fe28a3051821974a67e8b16d0e9c75f" protocol=ttrpc version=3 Sep 12 23:49:28.162365 systemd[1]: Started cri-containerd-6fea98c82994e2b87908732c5e3efd6922aa7af9078155df606c6947d9bb9e8b.scope - libcontainer container 6fea98c82994e2b87908732c5e3efd6922aa7af9078155df606c6947d9bb9e8b. Sep 12 23:49:28.203658 containerd[1532]: time="2025-09-12T23:49:28.203616602Z" level=info msg="StartContainer for \"6fea98c82994e2b87908732c5e3efd6922aa7af9078155df606c6947d9bb9e8b\" returns successfully" Sep 12 23:49:28.336437 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information. Sep 12 23:49:28.336670 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld . All Rights Reserved. 
Sep 12 23:49:28.592786 kubelet[2637]: I0912 23:49:28.592732 2637 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sktsw\" (UniqueName: \"kubernetes.io/projected/9872f117-4c4f-416b-90bd-ac2c750933c0-kube-api-access-sktsw\") pod \"9872f117-4c4f-416b-90bd-ac2c750933c0\" (UID: \"9872f117-4c4f-416b-90bd-ac2c750933c0\") " Sep 12 23:49:28.593193 kubelet[2637]: I0912 23:49:28.592785 2637 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/9872f117-4c4f-416b-90bd-ac2c750933c0-whisker-ca-bundle\") pod \"9872f117-4c4f-416b-90bd-ac2c750933c0\" (UID: \"9872f117-4c4f-416b-90bd-ac2c750933c0\") " Sep 12 23:49:28.593193 kubelet[2637]: I0912 23:49:28.592842 2637 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/9872f117-4c4f-416b-90bd-ac2c750933c0-whisker-backend-key-pair\") pod \"9872f117-4c4f-416b-90bd-ac2c750933c0\" (UID: \"9872f117-4c4f-416b-90bd-ac2c750933c0\") " Sep 12 23:49:28.605607 kubelet[2637]: I0912 23:49:28.605556 2637 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9872f117-4c4f-416b-90bd-ac2c750933c0-kube-api-access-sktsw" (OuterVolumeSpecName: "kube-api-access-sktsw") pod "9872f117-4c4f-416b-90bd-ac2c750933c0" (UID: "9872f117-4c4f-416b-90bd-ac2c750933c0"). InnerVolumeSpecName "kube-api-access-sktsw". PluginName "kubernetes.io/projected", VolumeGidValue "" Sep 12 23:49:28.607617 kubelet[2637]: I0912 23:49:28.607542 2637 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9872f117-4c4f-416b-90bd-ac2c750933c0-whisker-ca-bundle" (OuterVolumeSpecName: "whisker-ca-bundle") pod "9872f117-4c4f-416b-90bd-ac2c750933c0" (UID: "9872f117-4c4f-416b-90bd-ac2c750933c0"). InnerVolumeSpecName "whisker-ca-bundle". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Sep 12 23:49:28.618634 kubelet[2637]: I0912 23:49:28.618577 2637 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9872f117-4c4f-416b-90bd-ac2c750933c0-whisker-backend-key-pair" (OuterVolumeSpecName: "whisker-backend-key-pair") pod "9872f117-4c4f-416b-90bd-ac2c750933c0" (UID: "9872f117-4c4f-416b-90bd-ac2c750933c0"). InnerVolumeSpecName "whisker-backend-key-pair". PluginName "kubernetes.io/secret", VolumeGidValue "" Sep 12 23:49:28.622954 kubelet[2637]: I0912 23:49:28.622891 2637 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-v9d7q" podStartSLOduration=1.659062485 podStartE2EDuration="12.62287353s" podCreationTimestamp="2025-09-12 23:49:16 +0000 UTC" firstStartedPulling="2025-09-12 23:49:17.119687695 +0000 UTC m=+20.771218982" lastFinishedPulling="2025-09-12 23:49:28.08349874 +0000 UTC m=+31.735030027" observedRunningTime="2025-09-12 23:49:28.61255002 +0000 UTC m=+32.264081307" watchObservedRunningTime="2025-09-12 23:49:28.62287353 +0000 UTC m=+32.274404817" Sep 12 23:49:28.694460 kubelet[2637]: I0912 23:49:28.694380 2637 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-sktsw\" (UniqueName: \"kubernetes.io/projected/9872f117-4c4f-416b-90bd-ac2c750933c0-kube-api-access-sktsw\") on node \"localhost\" DevicePath \"\"" Sep 12 23:49:28.694460 kubelet[2637]: I0912 23:49:28.694415 2637 reconciler_common.go:293] "Volume detached for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/9872f117-4c4f-416b-90bd-ac2c750933c0-whisker-ca-bundle\") on node \"localhost\" DevicePath \"\"" Sep 12 23:49:28.694460 kubelet[2637]: I0912 23:49:28.694425 2637 reconciler_common.go:293] "Volume detached for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/9872f117-4c4f-416b-90bd-ac2c750933c0-whisker-backend-key-pair\") on node \"localhost\" DevicePath \"\"" Sep 12 23:49:28.788955 
systemd[1]: var-lib-kubelet-pods-9872f117\x2d4c4f\x2d416b\x2d90bd\x2dac2c750933c0-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dsktsw.mount: Deactivated successfully. Sep 12 23:49:28.789053 systemd[1]: var-lib-kubelet-pods-9872f117\x2d4c4f\x2d416b\x2d90bd\x2dac2c750933c0-volumes-kubernetes.io\x7esecret-whisker\x2dbackend\x2dkey\x2dpair.mount: Deactivated successfully. Sep 12 23:49:28.895326 systemd[1]: Removed slice kubepods-besteffort-pod9872f117_4c4f_416b_90bd_ac2c750933c0.slice - libcontainer container kubepods-besteffort-pod9872f117_4c4f_416b_90bd_ac2c750933c0.slice. Sep 12 23:49:28.959325 systemd[1]: Created slice kubepods-besteffort-pod2e7f1314_bdf5_40bb_a5ed_608e090163d2.slice - libcontainer container kubepods-besteffort-pod2e7f1314_bdf5_40bb_a5ed_608e090163d2.slice. Sep 12 23:49:28.996358 kubelet[2637]: I0912 23:49:28.996315 2637 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/2e7f1314-bdf5-40bb-a5ed-608e090163d2-whisker-ca-bundle\") pod \"whisker-8686fc6c67-4ltb9\" (UID: \"2e7f1314-bdf5-40bb-a5ed-608e090163d2\") " pod="calico-system/whisker-8686fc6c67-4ltb9" Sep 12 23:49:28.996358 kubelet[2637]: I0912 23:49:28.996361 2637 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/2e7f1314-bdf5-40bb-a5ed-608e090163d2-whisker-backend-key-pair\") pod \"whisker-8686fc6c67-4ltb9\" (UID: \"2e7f1314-bdf5-40bb-a5ed-608e090163d2\") " pod="calico-system/whisker-8686fc6c67-4ltb9" Sep 12 23:49:28.996523 kubelet[2637]: I0912 23:49:28.996397 2637 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lncb2\" (UniqueName: \"kubernetes.io/projected/2e7f1314-bdf5-40bb-a5ed-608e090163d2-kube-api-access-lncb2\") pod \"whisker-8686fc6c67-4ltb9\" (UID: \"2e7f1314-bdf5-40bb-a5ed-608e090163d2\") " 
pod="calico-system/whisker-8686fc6c67-4ltb9" Sep 12 23:49:29.262434 containerd[1532]: time="2025-09-12T23:49:29.262327656Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-8686fc6c67-4ltb9,Uid:2e7f1314-bdf5-40bb-a5ed-608e090163d2,Namespace:calico-system,Attempt:0,}" Sep 12 23:49:29.456391 systemd-networkd[1431]: calif6432933add: Link UP Sep 12 23:49:29.457274 systemd-networkd[1431]: calif6432933add: Gained carrier Sep 12 23:49:29.498880 containerd[1532]: 2025-09-12 23:49:29.283 [INFO][3778] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Sep 12 23:49:29.498880 containerd[1532]: 2025-09-12 23:49:29.313 [INFO][3778] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-whisker--8686fc6c67--4ltb9-eth0 whisker-8686fc6c67- calico-system 2e7f1314-bdf5-40bb-a5ed-608e090163d2 854 0 2025-09-12 23:49:28 +0000 UTC map[app.kubernetes.io/name:whisker k8s-app:whisker pod-template-hash:8686fc6c67 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:whisker] map[] [] [] []} {k8s localhost whisker-8686fc6c67-4ltb9 eth0 whisker [] [] [kns.calico-system ksa.calico-system.whisker] calif6432933add [] [] }} ContainerID="3869b8385046aea50eafbdfc0d0cff46da672e7562383bdcbd0b389c4717f58c" Namespace="calico-system" Pod="whisker-8686fc6c67-4ltb9" WorkloadEndpoint="localhost-k8s-whisker--8686fc6c67--4ltb9-" Sep 12 23:49:29.498880 containerd[1532]: 2025-09-12 23:49:29.313 [INFO][3778] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="3869b8385046aea50eafbdfc0d0cff46da672e7562383bdcbd0b389c4717f58c" Namespace="calico-system" Pod="whisker-8686fc6c67-4ltb9" WorkloadEndpoint="localhost-k8s-whisker--8686fc6c67--4ltb9-eth0" Sep 12 23:49:29.498880 containerd[1532]: 2025-09-12 23:49:29.379 [INFO][3793] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 
ContainerID="3869b8385046aea50eafbdfc0d0cff46da672e7562383bdcbd0b389c4717f58c" HandleID="k8s-pod-network.3869b8385046aea50eafbdfc0d0cff46da672e7562383bdcbd0b389c4717f58c" Workload="localhost-k8s-whisker--8686fc6c67--4ltb9-eth0" Sep 12 23:49:29.499095 containerd[1532]: 2025-09-12 23:49:29.379 [INFO][3793] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="3869b8385046aea50eafbdfc0d0cff46da672e7562383bdcbd0b389c4717f58c" HandleID="k8s-pod-network.3869b8385046aea50eafbdfc0d0cff46da672e7562383bdcbd0b389c4717f58c" Workload="localhost-k8s-whisker--8686fc6c67--4ltb9-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40002d5800), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"whisker-8686fc6c67-4ltb9", "timestamp":"2025-09-12 23:49:29.379240166 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 12 23:49:29.499095 containerd[1532]: 2025-09-12 23:49:29.379 [INFO][3793] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 12 23:49:29.499095 containerd[1532]: 2025-09-12 23:49:29.379 [INFO][3793] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Sep 12 23:49:29.499095 containerd[1532]: 2025-09-12 23:49:29.379 [INFO][3793] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Sep 12 23:49:29.499095 containerd[1532]: 2025-09-12 23:49:29.395 [INFO][3793] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.3869b8385046aea50eafbdfc0d0cff46da672e7562383bdcbd0b389c4717f58c" host="localhost" Sep 12 23:49:29.499095 containerd[1532]: 2025-09-12 23:49:29.404 [INFO][3793] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Sep 12 23:49:29.499095 containerd[1532]: 2025-09-12 23:49:29.409 [INFO][3793] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Sep 12 23:49:29.499095 containerd[1532]: 2025-09-12 23:49:29.410 [INFO][3793] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Sep 12 23:49:29.499095 containerd[1532]: 2025-09-12 23:49:29.412 [INFO][3793] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Sep 12 23:49:29.499095 containerd[1532]: 2025-09-12 23:49:29.413 [INFO][3793] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.3869b8385046aea50eafbdfc0d0cff46da672e7562383bdcbd0b389c4717f58c" host="localhost" Sep 12 23:49:29.499306 containerd[1532]: 2025-09-12 23:49:29.414 [INFO][3793] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.3869b8385046aea50eafbdfc0d0cff46da672e7562383bdcbd0b389c4717f58c Sep 12 23:49:29.499306 containerd[1532]: 2025-09-12 23:49:29.428 [INFO][3793] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.3869b8385046aea50eafbdfc0d0cff46da672e7562383bdcbd0b389c4717f58c" host="localhost" Sep 12 23:49:29.499306 containerd[1532]: 2025-09-12 23:49:29.446 [INFO][3793] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.129/26] block=192.168.88.128/26 
handle="k8s-pod-network.3869b8385046aea50eafbdfc0d0cff46da672e7562383bdcbd0b389c4717f58c" host="localhost" Sep 12 23:49:29.499306 containerd[1532]: 2025-09-12 23:49:29.446 [INFO][3793] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.129/26] handle="k8s-pod-network.3869b8385046aea50eafbdfc0d0cff46da672e7562383bdcbd0b389c4717f58c" host="localhost" Sep 12 23:49:29.499306 containerd[1532]: 2025-09-12 23:49:29.446 [INFO][3793] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 12 23:49:29.499306 containerd[1532]: 2025-09-12 23:49:29.446 [INFO][3793] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.129/26] IPv6=[] ContainerID="3869b8385046aea50eafbdfc0d0cff46da672e7562383bdcbd0b389c4717f58c" HandleID="k8s-pod-network.3869b8385046aea50eafbdfc0d0cff46da672e7562383bdcbd0b389c4717f58c" Workload="localhost-k8s-whisker--8686fc6c67--4ltb9-eth0" Sep 12 23:49:29.499413 containerd[1532]: 2025-09-12 23:49:29.449 [INFO][3778] cni-plugin/k8s.go 418: Populated endpoint ContainerID="3869b8385046aea50eafbdfc0d0cff46da672e7562383bdcbd0b389c4717f58c" Namespace="calico-system" Pod="whisker-8686fc6c67-4ltb9" WorkloadEndpoint="localhost-k8s-whisker--8686fc6c67--4ltb9-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-whisker--8686fc6c67--4ltb9-eth0", GenerateName:"whisker-8686fc6c67-", Namespace:"calico-system", SelfLink:"", UID:"2e7f1314-bdf5-40bb-a5ed-608e090163d2", ResourceVersion:"854", Generation:0, CreationTimestamp:time.Date(2025, time.September, 12, 23, 49, 28, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"8686fc6c67", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), 
OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"whisker-8686fc6c67-4ltb9", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"calif6432933add", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 12 23:49:29.499413 containerd[1532]: 2025-09-12 23:49:29.449 [INFO][3778] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.129/32] ContainerID="3869b8385046aea50eafbdfc0d0cff46da672e7562383bdcbd0b389c4717f58c" Namespace="calico-system" Pod="whisker-8686fc6c67-4ltb9" WorkloadEndpoint="localhost-k8s-whisker--8686fc6c67--4ltb9-eth0" Sep 12 23:49:29.499520 containerd[1532]: 2025-09-12 23:49:29.450 [INFO][3778] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calif6432933add ContainerID="3869b8385046aea50eafbdfc0d0cff46da672e7562383bdcbd0b389c4717f58c" Namespace="calico-system" Pod="whisker-8686fc6c67-4ltb9" WorkloadEndpoint="localhost-k8s-whisker--8686fc6c67--4ltb9-eth0" Sep 12 23:49:29.499520 containerd[1532]: 2025-09-12 23:49:29.457 [INFO][3778] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="3869b8385046aea50eafbdfc0d0cff46da672e7562383bdcbd0b389c4717f58c" Namespace="calico-system" Pod="whisker-8686fc6c67-4ltb9" WorkloadEndpoint="localhost-k8s-whisker--8686fc6c67--4ltb9-eth0" Sep 12 23:49:29.499589 containerd[1532]: 2025-09-12 23:49:29.457 [INFO][3778] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="3869b8385046aea50eafbdfc0d0cff46da672e7562383bdcbd0b389c4717f58c" Namespace="calico-system" Pod="whisker-8686fc6c67-4ltb9" 
WorkloadEndpoint="localhost-k8s-whisker--8686fc6c67--4ltb9-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-whisker--8686fc6c67--4ltb9-eth0", GenerateName:"whisker-8686fc6c67-", Namespace:"calico-system", SelfLink:"", UID:"2e7f1314-bdf5-40bb-a5ed-608e090163d2", ResourceVersion:"854", Generation:0, CreationTimestamp:time.Date(2025, time.September, 12, 23, 49, 28, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"8686fc6c67", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"3869b8385046aea50eafbdfc0d0cff46da672e7562383bdcbd0b389c4717f58c", Pod:"whisker-8686fc6c67-4ltb9", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"calif6432933add", MAC:"4e:86:17:02:38:11", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 12 23:49:29.499638 containerd[1532]: 2025-09-12 23:49:29.496 [INFO][3778] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="3869b8385046aea50eafbdfc0d0cff46da672e7562383bdcbd0b389c4717f58c" Namespace="calico-system" Pod="whisker-8686fc6c67-4ltb9" WorkloadEndpoint="localhost-k8s-whisker--8686fc6c67--4ltb9-eth0" Sep 12 23:49:29.566847 containerd[1532]: time="2025-09-12T23:49:29.566737897Z" level=info msg="connecting to shim 
3869b8385046aea50eafbdfc0d0cff46da672e7562383bdcbd0b389c4717f58c" address="unix:///run/containerd/s/0f5af6ee3aa78d2a0b89083c1d77cb5cab3f5a44fd0774615b526f3818990dfc" namespace=k8s.io protocol=ttrpc version=3 Sep 12 23:49:29.592720 kubelet[2637]: I0912 23:49:29.592500 2637 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Sep 12 23:49:29.593356 systemd[1]: Started cri-containerd-3869b8385046aea50eafbdfc0d0cff46da672e7562383bdcbd0b389c4717f58c.scope - libcontainer container 3869b8385046aea50eafbdfc0d0cff46da672e7562383bdcbd0b389c4717f58c. Sep 12 23:49:29.606798 systemd-resolved[1348]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Sep 12 23:49:29.629912 containerd[1532]: time="2025-09-12T23:49:29.629859850Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-8686fc6c67-4ltb9,Uid:2e7f1314-bdf5-40bb-a5ed-608e090163d2,Namespace:calico-system,Attempt:0,} returns sandbox id \"3869b8385046aea50eafbdfc0d0cff46da672e7562383bdcbd0b389c4717f58c\"" Sep 12 23:49:29.631819 containerd[1532]: time="2025-09-12T23:49:29.631785259Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.3\"" Sep 12 23:49:30.472145 kubelet[2637]: I0912 23:49:30.472086 2637 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9872f117-4c4f-416b-90bd-ac2c750933c0" path="/var/lib/kubelet/pods/9872f117-4c4f-416b-90bd-ac2c750933c0/volumes" Sep 12 23:49:30.536244 containerd[1532]: time="2025-09-12T23:49:30.536201333Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 23:49:30.537243 containerd[1532]: time="2025-09-12T23:49:30.536668985Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.3: active requests=0, bytes read=4605606" Sep 12 23:49:30.537705 containerd[1532]: time="2025-09-12T23:49:30.537677249Z" level=info msg="ImageCreate event 
name:\"sha256:270a0129ec34c3ad6ae6d56c0afce111eb0baa25dfdacb63722ec5887bafd3c5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 23:49:30.540216 containerd[1532]: time="2025-09-12T23:49:30.540180110Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker@sha256:e7113761fc7633d515882f0d48b5c8d0b8e62f3f9d34823f2ee194bb16d2ec44\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 23:49:30.540995 containerd[1532]: time="2025-09-12T23:49:30.540970730Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/whisker:v3.30.3\" with image id \"sha256:270a0129ec34c3ad6ae6d56c0afce111eb0baa25dfdacb63722ec5887bafd3c5\", repo tag \"ghcr.io/flatcar/calico/whisker:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/whisker@sha256:e7113761fc7633d515882f0d48b5c8d0b8e62f3f9d34823f2ee194bb16d2ec44\", size \"5974839\" in 909.14887ms" Sep 12 23:49:30.540995 containerd[1532]: time="2025-09-12T23:49:30.540999490Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.3\" returns image reference \"sha256:270a0129ec34c3ad6ae6d56c0afce111eb0baa25dfdacb63722ec5887bafd3c5\"" Sep 12 23:49:30.545887 containerd[1532]: time="2025-09-12T23:49:30.545851849Z" level=info msg="CreateContainer within sandbox \"3869b8385046aea50eafbdfc0d0cff46da672e7562383bdcbd0b389c4717f58c\" for container &ContainerMetadata{Name:whisker,Attempt:0,}" Sep 12 23:49:30.554319 containerd[1532]: time="2025-09-12T23:49:30.554273454Z" level=info msg="Container 23bda4979960c5ad64a7c1947ebd114393d273ef6a0abea379146843c4608a18: CDI devices from CRI Config.CDIDevices: []" Sep 12 23:49:30.562420 containerd[1532]: time="2025-09-12T23:49:30.562373291Z" level=info msg="CreateContainer within sandbox \"3869b8385046aea50eafbdfc0d0cff46da672e7562383bdcbd0b389c4717f58c\" for &ContainerMetadata{Name:whisker,Attempt:0,} returns container id \"23bda4979960c5ad64a7c1947ebd114393d273ef6a0abea379146843c4608a18\"" Sep 12 23:49:30.565184 containerd[1532]: 
time="2025-09-12T23:49:30.563860727Z" level=info msg="StartContainer for \"23bda4979960c5ad64a7c1947ebd114393d273ef6a0abea379146843c4608a18\"" Sep 12 23:49:30.565184 containerd[1532]: time="2025-09-12T23:49:30.564943714Z" level=info msg="connecting to shim 23bda4979960c5ad64a7c1947ebd114393d273ef6a0abea379146843c4608a18" address="unix:///run/containerd/s/0f5af6ee3aa78d2a0b89083c1d77cb5cab3f5a44fd0774615b526f3818990dfc" protocol=ttrpc version=3 Sep 12 23:49:30.596354 systemd[1]: Started cri-containerd-23bda4979960c5ad64a7c1947ebd114393d273ef6a0abea379146843c4608a18.scope - libcontainer container 23bda4979960c5ad64a7c1947ebd114393d273ef6a0abea379146843c4608a18. Sep 12 23:49:30.657705 containerd[1532]: time="2025-09-12T23:49:30.657655092Z" level=info msg="StartContainer for \"23bda4979960c5ad64a7c1947ebd114393d273ef6a0abea379146843c4608a18\" returns successfully" Sep 12 23:49:30.660834 containerd[1532]: time="2025-09-12T23:49:30.660334878Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.3\"" Sep 12 23:49:30.832383 systemd-networkd[1431]: calif6432933add: Gained IPv6LL Sep 12 23:49:32.175764 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2534593619.mount: Deactivated successfully. 
Sep 12 23:49:32.192438 containerd[1532]: time="2025-09-12T23:49:32.192381163Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker-backend:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 23:49:32.192867 containerd[1532]: time="2025-09-12T23:49:32.192841133Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.3: active requests=0, bytes read=30823700" Sep 12 23:49:32.194396 containerd[1532]: time="2025-09-12T23:49:32.194344368Z" level=info msg="ImageCreate event name:\"sha256:e210e86234bc99f018431b30477c5ca2ad6f7ecf67ef011498f7beb48fb0b21f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 23:49:32.196532 containerd[1532]: time="2025-09-12T23:49:32.196486257Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker-backend@sha256:29becebc47401da9997a2a30f4c25c511a5f379d17275680b048224829af71a5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 23:49:32.197512 containerd[1532]: time="2025-09-12T23:49:32.197019189Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.3\" with image id \"sha256:e210e86234bc99f018431b30477c5ca2ad6f7ecf67ef011498f7beb48fb0b21f\", repo tag \"ghcr.io/flatcar/calico/whisker-backend:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/whisker-backend@sha256:29becebc47401da9997a2a30f4c25c511a5f379d17275680b048224829af71a5\", size \"30823530\" in 1.536222539s" Sep 12 23:49:32.197512 containerd[1532]: time="2025-09-12T23:49:32.197076830Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.3\" returns image reference \"sha256:e210e86234bc99f018431b30477c5ca2ad6f7ecf67ef011498f7beb48fb0b21f\"" Sep 12 23:49:32.199588 containerd[1532]: time="2025-09-12T23:49:32.199560167Z" level=info msg="CreateContainer within sandbox \"3869b8385046aea50eafbdfc0d0cff46da672e7562383bdcbd0b389c4717f58c\" for container &ContainerMetadata{Name:whisker-backend,Attempt:0,}" Sep 12 23:49:32.222781 
containerd[1532]: time="2025-09-12T23:49:32.222742095Z" level=info msg="Container f9521bddd29b3c35b413c69418bc65245f45103babbce1f14d6713c60d4fc71f: CDI devices from CRI Config.CDIDevices: []" Sep 12 23:49:32.230784 containerd[1532]: time="2025-09-12T23:49:32.230736877Z" level=info msg="CreateContainer within sandbox \"3869b8385046aea50eafbdfc0d0cff46da672e7562383bdcbd0b389c4717f58c\" for &ContainerMetadata{Name:whisker-backend,Attempt:0,} returns container id \"f9521bddd29b3c35b413c69418bc65245f45103babbce1f14d6713c60d4fc71f\"" Sep 12 23:49:32.231254 containerd[1532]: time="2025-09-12T23:49:32.231228568Z" level=info msg="StartContainer for \"f9521bddd29b3c35b413c69418bc65245f45103babbce1f14d6713c60d4fc71f\"" Sep 12 23:49:32.232471 containerd[1532]: time="2025-09-12T23:49:32.232317273Z" level=info msg="connecting to shim f9521bddd29b3c35b413c69418bc65245f45103babbce1f14d6713c60d4fc71f" address="unix:///run/containerd/s/0f5af6ee3aa78d2a0b89083c1d77cb5cab3f5a44fd0774615b526f3818990dfc" protocol=ttrpc version=3 Sep 12 23:49:32.256330 systemd[1]: Started cri-containerd-f9521bddd29b3c35b413c69418bc65245f45103babbce1f14d6713c60d4fc71f.scope - libcontainer container f9521bddd29b3c35b413c69418bc65245f45103babbce1f14d6713c60d4fc71f. 
Sep 12 23:49:32.289338 containerd[1532]: time="2025-09-12T23:49:32.289295851Z" level=info msg="StartContainer for \"f9521bddd29b3c35b413c69418bc65245f45103babbce1f14d6713c60d4fc71f\" returns successfully" Sep 12 23:49:35.393459 kubelet[2637]: I0912 23:49:35.393361 2637 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/whisker-8686fc6c67-4ltb9" podStartSLOduration=4.826458578 podStartE2EDuration="7.393342987s" podCreationTimestamp="2025-09-12 23:49:28 +0000 UTC" firstStartedPulling="2025-09-12 23:49:29.631229445 +0000 UTC m=+33.282760732" lastFinishedPulling="2025-09-12 23:49:32.198113854 +0000 UTC m=+35.849645141" observedRunningTime="2025-09-12 23:49:32.628417056 +0000 UTC m=+36.279948583" watchObservedRunningTime="2025-09-12 23:49:35.393342987 +0000 UTC m=+39.044874274" Sep 12 23:49:35.470473 containerd[1532]: time="2025-09-12T23:49:35.470423828Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-bfb846b8b-c8q8g,Uid:8ade99a4-314a-447c-b396-ac0956b0fe2d,Namespace:calico-apiserver,Attempt:0,}" Sep 12 23:49:35.471500 containerd[1532]: time="2025-09-12T23:49:35.470425508Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-7988f88666-9xh54,Uid:06064bbe-14bc-4e90-b7dd-39881485ebd6,Namespace:calico-system,Attempt:0,}" Sep 12 23:49:35.471832 containerd[1532]: time="2025-09-12T23:49:35.471739495Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-kxdqx,Uid:072481d6-9cb5-4023-9a42-06a2409e0fc7,Namespace:kube-system,Attempt:0,}" Sep 12 23:49:35.660743 systemd-networkd[1431]: calif244dee1d91: Link UP Sep 12 23:49:35.660886 systemd-networkd[1431]: calif244dee1d91: Gained carrier Sep 12 23:49:35.675925 containerd[1532]: 2025-09-12 23:49:35.549 [INFO][4164] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Sep 12 23:49:35.675925 containerd[1532]: 2025-09-12 23:49:35.573 [INFO][4164] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: 
&{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--apiserver--bfb846b8b--c8q8g-eth0 calico-apiserver-bfb846b8b- calico-apiserver 8ade99a4-314a-447c-b396-ac0956b0fe2d 793 0 2025-09-12 23:49:11 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:bfb846b8b projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s localhost calico-apiserver-bfb846b8b-c8q8g eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] calif244dee1d91 [] [] }} ContainerID="376eee38535063b99831591b4dbb493f6acaf79a61eda005cfcc198ee89be5ed" Namespace="calico-apiserver" Pod="calico-apiserver-bfb846b8b-c8q8g" WorkloadEndpoint="localhost-k8s-calico--apiserver--bfb846b8b--c8q8g-" Sep 12 23:49:35.675925 containerd[1532]: 2025-09-12 23:49:35.573 [INFO][4164] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="376eee38535063b99831591b4dbb493f6acaf79a61eda005cfcc198ee89be5ed" Namespace="calico-apiserver" Pod="calico-apiserver-bfb846b8b-c8q8g" WorkloadEndpoint="localhost-k8s-calico--apiserver--bfb846b8b--c8q8g-eth0" Sep 12 23:49:35.675925 containerd[1532]: 2025-09-12 23:49:35.611 [INFO][4216] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="376eee38535063b99831591b4dbb493f6acaf79a61eda005cfcc198ee89be5ed" HandleID="k8s-pod-network.376eee38535063b99831591b4dbb493f6acaf79a61eda005cfcc198ee89be5ed" Workload="localhost-k8s-calico--apiserver--bfb846b8b--c8q8g-eth0" Sep 12 23:49:35.676214 containerd[1532]: 2025-09-12 23:49:35.611 [INFO][4216] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="376eee38535063b99831591b4dbb493f6acaf79a61eda005cfcc198ee89be5ed" HandleID="k8s-pod-network.376eee38535063b99831591b4dbb493f6acaf79a61eda005cfcc198ee89be5ed" Workload="localhost-k8s-calico--apiserver--bfb846b8b--c8q8g-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, 
HandleID:(*string)(0x40002c3730), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"localhost", "pod":"calico-apiserver-bfb846b8b-c8q8g", "timestamp":"2025-09-12 23:49:35.611255392 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 12 23:49:35.676214 containerd[1532]: 2025-09-12 23:49:35.611 [INFO][4216] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 12 23:49:35.676214 containerd[1532]: 2025-09-12 23:49:35.611 [INFO][4216] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 12 23:49:35.676214 containerd[1532]: 2025-09-12 23:49:35.611 [INFO][4216] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Sep 12 23:49:35.676214 containerd[1532]: 2025-09-12 23:49:35.622 [INFO][4216] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.376eee38535063b99831591b4dbb493f6acaf79a61eda005cfcc198ee89be5ed" host="localhost" Sep 12 23:49:35.676214 containerd[1532]: 2025-09-12 23:49:35.628 [INFO][4216] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Sep 12 23:49:35.676214 containerd[1532]: 2025-09-12 23:49:35.633 [INFO][4216] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Sep 12 23:49:35.676214 containerd[1532]: 2025-09-12 23:49:35.635 [INFO][4216] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Sep 12 23:49:35.676214 containerd[1532]: 2025-09-12 23:49:35.637 [INFO][4216] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Sep 12 23:49:35.676214 containerd[1532]: 2025-09-12 23:49:35.637 [INFO][4216] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 
handle="k8s-pod-network.376eee38535063b99831591b4dbb493f6acaf79a61eda005cfcc198ee89be5ed" host="localhost" Sep 12 23:49:35.676418 containerd[1532]: 2025-09-12 23:49:35.641 [INFO][4216] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.376eee38535063b99831591b4dbb493f6acaf79a61eda005cfcc198ee89be5ed Sep 12 23:49:35.676418 containerd[1532]: 2025-09-12 23:49:35.645 [INFO][4216] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.376eee38535063b99831591b4dbb493f6acaf79a61eda005cfcc198ee89be5ed" host="localhost" Sep 12 23:49:35.676418 containerd[1532]: 2025-09-12 23:49:35.650 [INFO][4216] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.130/26] block=192.168.88.128/26 handle="k8s-pod-network.376eee38535063b99831591b4dbb493f6acaf79a61eda005cfcc198ee89be5ed" host="localhost" Sep 12 23:49:35.676418 containerd[1532]: 2025-09-12 23:49:35.650 [INFO][4216] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.130/26] handle="k8s-pod-network.376eee38535063b99831591b4dbb493f6acaf79a61eda005cfcc198ee89be5ed" host="localhost" Sep 12 23:49:35.676418 containerd[1532]: 2025-09-12 23:49:35.650 [INFO][4216] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Sep 12 23:49:35.676418 containerd[1532]: 2025-09-12 23:49:35.650 [INFO][4216] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.130/26] IPv6=[] ContainerID="376eee38535063b99831591b4dbb493f6acaf79a61eda005cfcc198ee89be5ed" HandleID="k8s-pod-network.376eee38535063b99831591b4dbb493f6acaf79a61eda005cfcc198ee89be5ed" Workload="localhost-k8s-calico--apiserver--bfb846b8b--c8q8g-eth0" Sep 12 23:49:35.676640 containerd[1532]: 2025-09-12 23:49:35.654 [INFO][4164] cni-plugin/k8s.go 418: Populated endpoint ContainerID="376eee38535063b99831591b4dbb493f6acaf79a61eda005cfcc198ee89be5ed" Namespace="calico-apiserver" Pod="calico-apiserver-bfb846b8b-c8q8g" WorkloadEndpoint="localhost-k8s-calico--apiserver--bfb846b8b--c8q8g-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--bfb846b8b--c8q8g-eth0", GenerateName:"calico-apiserver-bfb846b8b-", Namespace:"calico-apiserver", SelfLink:"", UID:"8ade99a4-314a-447c-b396-ac0956b0fe2d", ResourceVersion:"793", Generation:0, CreationTimestamp:time.Date(2025, time.September, 12, 23, 49, 11, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"bfb846b8b", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-apiserver-bfb846b8b-c8q8g", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", 
Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calif244dee1d91", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 12 23:49:35.676775 containerd[1532]: 2025-09-12 23:49:35.654 [INFO][4164] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.130/32] ContainerID="376eee38535063b99831591b4dbb493f6acaf79a61eda005cfcc198ee89be5ed" Namespace="calico-apiserver" Pod="calico-apiserver-bfb846b8b-c8q8g" WorkloadEndpoint="localhost-k8s-calico--apiserver--bfb846b8b--c8q8g-eth0" Sep 12 23:49:35.676775 containerd[1532]: 2025-09-12 23:49:35.654 [INFO][4164] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calif244dee1d91 ContainerID="376eee38535063b99831591b4dbb493f6acaf79a61eda005cfcc198ee89be5ed" Namespace="calico-apiserver" Pod="calico-apiserver-bfb846b8b-c8q8g" WorkloadEndpoint="localhost-k8s-calico--apiserver--bfb846b8b--c8q8g-eth0" Sep 12 23:49:35.676775 containerd[1532]: 2025-09-12 23:49:35.660 [INFO][4164] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="376eee38535063b99831591b4dbb493f6acaf79a61eda005cfcc198ee89be5ed" Namespace="calico-apiserver" Pod="calico-apiserver-bfb846b8b-c8q8g" WorkloadEndpoint="localhost-k8s-calico--apiserver--bfb846b8b--c8q8g-eth0" Sep 12 23:49:35.676875 containerd[1532]: 2025-09-12 23:49:35.661 [INFO][4164] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="376eee38535063b99831591b4dbb493f6acaf79a61eda005cfcc198ee89be5ed" Namespace="calico-apiserver" Pod="calico-apiserver-bfb846b8b-c8q8g" WorkloadEndpoint="localhost-k8s-calico--apiserver--bfb846b8b--c8q8g-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--bfb846b8b--c8q8g-eth0", GenerateName:"calico-apiserver-bfb846b8b-", 
Namespace:"calico-apiserver", SelfLink:"", UID:"8ade99a4-314a-447c-b396-ac0956b0fe2d", ResourceVersion:"793", Generation:0, CreationTimestamp:time.Date(2025, time.September, 12, 23, 49, 11, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"bfb846b8b", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"376eee38535063b99831591b4dbb493f6acaf79a61eda005cfcc198ee89be5ed", Pod:"calico-apiserver-bfb846b8b-c8q8g", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calif244dee1d91", MAC:"7a:fc:c5:73:c1:e8", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 12 23:49:35.676941 containerd[1532]: 2025-09-12 23:49:35.673 [INFO][4164] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="376eee38535063b99831591b4dbb493f6acaf79a61eda005cfcc198ee89be5ed" Namespace="calico-apiserver" Pod="calico-apiserver-bfb846b8b-c8q8g" WorkloadEndpoint="localhost-k8s-calico--apiserver--bfb846b8b--c8q8g-eth0" Sep 12 23:49:35.709723 containerd[1532]: time="2025-09-12T23:49:35.709517352Z" level=info msg="connecting to shim 376eee38535063b99831591b4dbb493f6acaf79a61eda005cfcc198ee89be5ed" address="unix:///run/containerd/s/a5bee6acc07635147668b51e71603da1c98e2fa99c39b3676ebeb1a9862c5c58" namespace=k8s.io protocol=ttrpc 
version=3 Sep 12 23:49:35.747586 systemd[1]: Started cri-containerd-376eee38535063b99831591b4dbb493f6acaf79a61eda005cfcc198ee89be5ed.scope - libcontainer container 376eee38535063b99831591b4dbb493f6acaf79a61eda005cfcc198ee89be5ed. Sep 12 23:49:35.758479 systemd-networkd[1431]: cali18c8b310216: Link UP Sep 12 23:49:35.760569 systemd-networkd[1431]: cali18c8b310216: Gained carrier Sep 12 23:49:35.775530 containerd[1532]: 2025-09-12 23:49:35.561 [INFO][4176] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Sep 12 23:49:35.775530 containerd[1532]: 2025-09-12 23:49:35.581 [INFO][4176] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-goldmane--7988f88666--9xh54-eth0 goldmane-7988f88666- calico-system 06064bbe-14bc-4e90-b7dd-39881485ebd6 792 0 2025-09-12 23:49:16 +0000 UTC map[app.kubernetes.io/name:goldmane k8s-app:goldmane pod-template-hash:7988f88666 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:goldmane] map[] [] [] []} {k8s localhost goldmane-7988f88666-9xh54 eth0 goldmane [] [] [kns.calico-system ksa.calico-system.goldmane] cali18c8b310216 [] [] }} ContainerID="3d1253cfa9d0b084acdfdf0bba03c9ec47b610c0df8b4413122df32466bef525" Namespace="calico-system" Pod="goldmane-7988f88666-9xh54" WorkloadEndpoint="localhost-k8s-goldmane--7988f88666--9xh54-" Sep 12 23:49:35.775530 containerd[1532]: 2025-09-12 23:49:35.581 [INFO][4176] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="3d1253cfa9d0b084acdfdf0bba03c9ec47b610c0df8b4413122df32466bef525" Namespace="calico-system" Pod="goldmane-7988f88666-9xh54" WorkloadEndpoint="localhost-k8s-goldmane--7988f88666--9xh54-eth0" Sep 12 23:49:35.775530 containerd[1532]: 2025-09-12 23:49:35.611 [INFO][4222] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="3d1253cfa9d0b084acdfdf0bba03c9ec47b610c0df8b4413122df32466bef525" 
HandleID="k8s-pod-network.3d1253cfa9d0b084acdfdf0bba03c9ec47b610c0df8b4413122df32466bef525" Workload="localhost-k8s-goldmane--7988f88666--9xh54-eth0" Sep 12 23:49:35.775734 containerd[1532]: 2025-09-12 23:49:35.612 [INFO][4222] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="3d1253cfa9d0b084acdfdf0bba03c9ec47b610c0df8b4413122df32466bef525" HandleID="k8s-pod-network.3d1253cfa9d0b084acdfdf0bba03c9ec47b610c0df8b4413122df32466bef525" Workload="localhost-k8s-goldmane--7988f88666--9xh54-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40002cb090), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"goldmane-7988f88666-9xh54", "timestamp":"2025-09-12 23:49:35.611947326 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 12 23:49:35.775734 containerd[1532]: 2025-09-12 23:49:35.612 [INFO][4222] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 12 23:49:35.775734 containerd[1532]: 2025-09-12 23:49:35.650 [INFO][4222] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Sep 12 23:49:35.775734 containerd[1532]: 2025-09-12 23:49:35.650 [INFO][4222] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Sep 12 23:49:35.775734 containerd[1532]: 2025-09-12 23:49:35.723 [INFO][4222] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.3d1253cfa9d0b084acdfdf0bba03c9ec47b610c0df8b4413122df32466bef525" host="localhost" Sep 12 23:49:35.775734 containerd[1532]: 2025-09-12 23:49:35.729 [INFO][4222] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Sep 12 23:49:35.775734 containerd[1532]: 2025-09-12 23:49:35.734 [INFO][4222] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Sep 12 23:49:35.775734 containerd[1532]: 2025-09-12 23:49:35.735 [INFO][4222] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Sep 12 23:49:35.775734 containerd[1532]: 2025-09-12 23:49:35.739 [INFO][4222] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Sep 12 23:49:35.775734 containerd[1532]: 2025-09-12 23:49:35.739 [INFO][4222] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.3d1253cfa9d0b084acdfdf0bba03c9ec47b610c0df8b4413122df32466bef525" host="localhost" Sep 12 23:49:35.775962 containerd[1532]: 2025-09-12 23:49:35.741 [INFO][4222] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.3d1253cfa9d0b084acdfdf0bba03c9ec47b610c0df8b4413122df32466bef525 Sep 12 23:49:35.775962 containerd[1532]: 2025-09-12 23:49:35.744 [INFO][4222] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.3d1253cfa9d0b084acdfdf0bba03c9ec47b610c0df8b4413122df32466bef525" host="localhost" Sep 12 23:49:35.775962 containerd[1532]: 2025-09-12 23:49:35.750 [INFO][4222] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.131/26] block=192.168.88.128/26 
handle="k8s-pod-network.3d1253cfa9d0b084acdfdf0bba03c9ec47b610c0df8b4413122df32466bef525" host="localhost" Sep 12 23:49:35.775962 containerd[1532]: 2025-09-12 23:49:35.750 [INFO][4222] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.131/26] handle="k8s-pod-network.3d1253cfa9d0b084acdfdf0bba03c9ec47b610c0df8b4413122df32466bef525" host="localhost" Sep 12 23:49:35.775962 containerd[1532]: 2025-09-12 23:49:35.750 [INFO][4222] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 12 23:49:35.775962 containerd[1532]: 2025-09-12 23:49:35.750 [INFO][4222] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.131/26] IPv6=[] ContainerID="3d1253cfa9d0b084acdfdf0bba03c9ec47b610c0df8b4413122df32466bef525" HandleID="k8s-pod-network.3d1253cfa9d0b084acdfdf0bba03c9ec47b610c0df8b4413122df32466bef525" Workload="localhost-k8s-goldmane--7988f88666--9xh54-eth0" Sep 12 23:49:35.776077 containerd[1532]: 2025-09-12 23:49:35.754 [INFO][4176] cni-plugin/k8s.go 418: Populated endpoint ContainerID="3d1253cfa9d0b084acdfdf0bba03c9ec47b610c0df8b4413122df32466bef525" Namespace="calico-system" Pod="goldmane-7988f88666-9xh54" WorkloadEndpoint="localhost-k8s-goldmane--7988f88666--9xh54-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-goldmane--7988f88666--9xh54-eth0", GenerateName:"goldmane-7988f88666-", Namespace:"calico-system", SelfLink:"", UID:"06064bbe-14bc-4e90-b7dd-39881485ebd6", ResourceVersion:"792", Generation:0, CreationTimestamp:time.Date(2025, time.September, 12, 23, 49, 16, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"7988f88666", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, 
Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"goldmane-7988f88666-9xh54", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali18c8b310216", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 12 23:49:35.776077 containerd[1532]: 2025-09-12 23:49:35.754 [INFO][4176] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.131/32] ContainerID="3d1253cfa9d0b084acdfdf0bba03c9ec47b610c0df8b4413122df32466bef525" Namespace="calico-system" Pod="goldmane-7988f88666-9xh54" WorkloadEndpoint="localhost-k8s-goldmane--7988f88666--9xh54-eth0" Sep 12 23:49:35.776148 containerd[1532]: 2025-09-12 23:49:35.754 [INFO][4176] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali18c8b310216 ContainerID="3d1253cfa9d0b084acdfdf0bba03c9ec47b610c0df8b4413122df32466bef525" Namespace="calico-system" Pod="goldmane-7988f88666-9xh54" WorkloadEndpoint="localhost-k8s-goldmane--7988f88666--9xh54-eth0" Sep 12 23:49:35.776148 containerd[1532]: 2025-09-12 23:49:35.761 [INFO][4176] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="3d1253cfa9d0b084acdfdf0bba03c9ec47b610c0df8b4413122df32466bef525" Namespace="calico-system" Pod="goldmane-7988f88666-9xh54" WorkloadEndpoint="localhost-k8s-goldmane--7988f88666--9xh54-eth0" Sep 12 23:49:35.776221 containerd[1532]: 2025-09-12 23:49:35.761 [INFO][4176] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="3d1253cfa9d0b084acdfdf0bba03c9ec47b610c0df8b4413122df32466bef525" Namespace="calico-system" 
Pod="goldmane-7988f88666-9xh54" WorkloadEndpoint="localhost-k8s-goldmane--7988f88666--9xh54-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-goldmane--7988f88666--9xh54-eth0", GenerateName:"goldmane-7988f88666-", Namespace:"calico-system", SelfLink:"", UID:"06064bbe-14bc-4e90-b7dd-39881485ebd6", ResourceVersion:"792", Generation:0, CreationTimestamp:time.Date(2025, time.September, 12, 23, 49, 16, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"7988f88666", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"3d1253cfa9d0b084acdfdf0bba03c9ec47b610c0df8b4413122df32466bef525", Pod:"goldmane-7988f88666-9xh54", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali18c8b310216", MAC:"06:ab:3a:5b:2f:35", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 12 23:49:35.776271 containerd[1532]: 2025-09-12 23:49:35.773 [INFO][4176] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="3d1253cfa9d0b084acdfdf0bba03c9ec47b610c0df8b4413122df32466bef525" Namespace="calico-system" Pod="goldmane-7988f88666-9xh54" WorkloadEndpoint="localhost-k8s-goldmane--7988f88666--9xh54-eth0" Sep 12 23:49:35.778457 systemd-resolved[1348]: Failed to determine the local 
hostname and LLMNR/mDNS names, ignoring: No such device or address Sep 12 23:49:35.797946 containerd[1532]: time="2025-09-12T23:49:35.797907147Z" level=info msg="connecting to shim 3d1253cfa9d0b084acdfdf0bba03c9ec47b610c0df8b4413122df32466bef525" address="unix:///run/containerd/s/8ad87e2b86173ef6372433fdf42b91170e85b56f8879457d93ca04ea8e22b486" namespace=k8s.io protocol=ttrpc version=3 Sep 12 23:49:35.808348 containerd[1532]: time="2025-09-12T23:49:35.808302483Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-bfb846b8b-c8q8g,Uid:8ade99a4-314a-447c-b396-ac0956b0fe2d,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"376eee38535063b99831591b4dbb493f6acaf79a61eda005cfcc198ee89be5ed\"" Sep 12 23:49:35.811643 containerd[1532]: time="2025-09-12T23:49:35.811339026Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.3\"" Sep 12 23:49:35.832410 systemd[1]: Started cri-containerd-3d1253cfa9d0b084acdfdf0bba03c9ec47b610c0df8b4413122df32466bef525.scope - libcontainer container 3d1253cfa9d0b084acdfdf0bba03c9ec47b610c0df8b4413122df32466bef525. 
Sep 12 23:49:35.847981 systemd-resolved[1348]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Sep 12 23:49:35.865594 systemd-networkd[1431]: cali541269ca5eb: Link UP Sep 12 23:49:35.867633 systemd-networkd[1431]: cali541269ca5eb: Gained carrier Sep 12 23:49:35.878183 containerd[1532]: time="2025-09-12T23:49:35.878118612Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-7988f88666-9xh54,Uid:06064bbe-14bc-4e90-b7dd-39881485ebd6,Namespace:calico-system,Attempt:0,} returns sandbox id \"3d1253cfa9d0b084acdfdf0bba03c9ec47b610c0df8b4413122df32466bef525\"" Sep 12 23:49:35.883260 containerd[1532]: 2025-09-12 23:49:35.549 [INFO][4177] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Sep 12 23:49:35.883260 containerd[1532]: 2025-09-12 23:49:35.566 [INFO][4177] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-coredns--7c65d6cfc9--kxdqx-eth0 coredns-7c65d6cfc9- kube-system 072481d6-9cb5-4023-9a42-06a2409e0fc7 784 0 2025-09-12 23:49:02 +0000 UTC map[k8s-app:kube-dns pod-template-hash:7c65d6cfc9 projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s localhost coredns-7c65d6cfc9-kxdqx eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali541269ca5eb [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="7893c55810c75bce121c8269edc8db4b2d8d85ec02bef7fe13c31e6b5344df2e" Namespace="kube-system" Pod="coredns-7c65d6cfc9-kxdqx" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--kxdqx-" Sep 12 23:49:35.883260 containerd[1532]: 2025-09-12 23:49:35.566 [INFO][4177] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="7893c55810c75bce121c8269edc8db4b2d8d85ec02bef7fe13c31e6b5344df2e" Namespace="kube-system" Pod="coredns-7c65d6cfc9-kxdqx" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--kxdqx-eth0" 
Sep 12 23:49:35.883260 containerd[1532]: 2025-09-12 23:49:35.616 [INFO][4210] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="7893c55810c75bce121c8269edc8db4b2d8d85ec02bef7fe13c31e6b5344df2e" HandleID="k8s-pod-network.7893c55810c75bce121c8269edc8db4b2d8d85ec02bef7fe13c31e6b5344df2e" Workload="localhost-k8s-coredns--7c65d6cfc9--kxdqx-eth0" Sep 12 23:49:35.883514 containerd[1532]: 2025-09-12 23:49:35.616 [INFO][4210] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="7893c55810c75bce121c8269edc8db4b2d8d85ec02bef7fe13c31e6b5344df2e" HandleID="k8s-pod-network.7893c55810c75bce121c8269edc8db4b2d8d85ec02bef7fe13c31e6b5344df2e" Workload="localhost-k8s-coredns--7c65d6cfc9--kxdqx-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x4000322600), Attrs:map[string]string{"namespace":"kube-system", "node":"localhost", "pod":"coredns-7c65d6cfc9-kxdqx", "timestamp":"2025-09-12 23:49:35.616657424 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 12 23:49:35.883514 containerd[1532]: 2025-09-12 23:49:35.617 [INFO][4210] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 12 23:49:35.883514 containerd[1532]: 2025-09-12 23:49:35.750 [INFO][4210] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Sep 12 23:49:35.883514 containerd[1532]: 2025-09-12 23:49:35.750 [INFO][4210] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Sep 12 23:49:35.883514 containerd[1532]: 2025-09-12 23:49:35.826 [INFO][4210] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.7893c55810c75bce121c8269edc8db4b2d8d85ec02bef7fe13c31e6b5344df2e" host="localhost" Sep 12 23:49:35.883514 containerd[1532]: 2025-09-12 23:49:35.834 [INFO][4210] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Sep 12 23:49:35.883514 containerd[1532]: 2025-09-12 23:49:35.839 [INFO][4210] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Sep 12 23:49:35.883514 containerd[1532]: 2025-09-12 23:49:35.841 [INFO][4210] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Sep 12 23:49:35.883514 containerd[1532]: 2025-09-12 23:49:35.844 [INFO][4210] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Sep 12 23:49:35.883514 containerd[1532]: 2025-09-12 23:49:35.844 [INFO][4210] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.7893c55810c75bce121c8269edc8db4b2d8d85ec02bef7fe13c31e6b5344df2e" host="localhost" Sep 12 23:49:35.883798 containerd[1532]: 2025-09-12 23:49:35.846 [INFO][4210] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.7893c55810c75bce121c8269edc8db4b2d8d85ec02bef7fe13c31e6b5344df2e Sep 12 23:49:35.883798 containerd[1532]: 2025-09-12 23:49:35.851 [INFO][4210] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.7893c55810c75bce121c8269edc8db4b2d8d85ec02bef7fe13c31e6b5344df2e" host="localhost" Sep 12 23:49:35.883798 containerd[1532]: 2025-09-12 23:49:35.857 [INFO][4210] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.132/26] block=192.168.88.128/26 
handle="k8s-pod-network.7893c55810c75bce121c8269edc8db4b2d8d85ec02bef7fe13c31e6b5344df2e" host="localhost" Sep 12 23:49:35.883798 containerd[1532]: 2025-09-12 23:49:35.857 [INFO][4210] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.132/26] handle="k8s-pod-network.7893c55810c75bce121c8269edc8db4b2d8d85ec02bef7fe13c31e6b5344df2e" host="localhost" Sep 12 23:49:35.883798 containerd[1532]: 2025-09-12 23:49:35.857 [INFO][4210] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 12 23:49:35.883798 containerd[1532]: 2025-09-12 23:49:35.858 [INFO][4210] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.132/26] IPv6=[] ContainerID="7893c55810c75bce121c8269edc8db4b2d8d85ec02bef7fe13c31e6b5344df2e" HandleID="k8s-pod-network.7893c55810c75bce121c8269edc8db4b2d8d85ec02bef7fe13c31e6b5344df2e" Workload="localhost-k8s-coredns--7c65d6cfc9--kxdqx-eth0" Sep 12 23:49:35.883924 containerd[1532]: 2025-09-12 23:49:35.861 [INFO][4177] cni-plugin/k8s.go 418: Populated endpoint ContainerID="7893c55810c75bce121c8269edc8db4b2d8d85ec02bef7fe13c31e6b5344df2e" Namespace="kube-system" Pod="coredns-7c65d6cfc9-kxdqx" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--kxdqx-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--7c65d6cfc9--kxdqx-eth0", GenerateName:"coredns-7c65d6cfc9-", Namespace:"kube-system", SelfLink:"", UID:"072481d6-9cb5-4023-9a42-06a2409e0fc7", ResourceVersion:"784", Generation:0, CreationTimestamp:time.Date(2025, time.September, 12, 23, 49, 2, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7c65d6cfc9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), 
Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"coredns-7c65d6cfc9-kxdqx", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali541269ca5eb", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 12 23:49:35.883993 containerd[1532]: 2025-09-12 23:49:35.861 [INFO][4177] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.132/32] ContainerID="7893c55810c75bce121c8269edc8db4b2d8d85ec02bef7fe13c31e6b5344df2e" Namespace="kube-system" Pod="coredns-7c65d6cfc9-kxdqx" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--kxdqx-eth0" Sep 12 23:49:35.883993 containerd[1532]: 2025-09-12 23:49:35.861 [INFO][4177] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali541269ca5eb ContainerID="7893c55810c75bce121c8269edc8db4b2d8d85ec02bef7fe13c31e6b5344df2e" Namespace="kube-system" Pod="coredns-7c65d6cfc9-kxdqx" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--kxdqx-eth0" Sep 12 23:49:35.883993 containerd[1532]: 2025-09-12 23:49:35.867 [INFO][4177] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="7893c55810c75bce121c8269edc8db4b2d8d85ec02bef7fe13c31e6b5344df2e" Namespace="kube-system" Pod="coredns-7c65d6cfc9-kxdqx" 
WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--kxdqx-eth0" Sep 12 23:49:35.884058 containerd[1532]: 2025-09-12 23:49:35.868 [INFO][4177] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="7893c55810c75bce121c8269edc8db4b2d8d85ec02bef7fe13c31e6b5344df2e" Namespace="kube-system" Pod="coredns-7c65d6cfc9-kxdqx" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--kxdqx-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--7c65d6cfc9--kxdqx-eth0", GenerateName:"coredns-7c65d6cfc9-", Namespace:"kube-system", SelfLink:"", UID:"072481d6-9cb5-4023-9a42-06a2409e0fc7", ResourceVersion:"784", Generation:0, CreationTimestamp:time.Date(2025, time.September, 12, 23, 49, 2, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7c65d6cfc9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"7893c55810c75bce121c8269edc8db4b2d8d85ec02bef7fe13c31e6b5344df2e", Pod:"coredns-7c65d6cfc9-kxdqx", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali541269ca5eb", MAC:"5a:08:44:6d:b2:45", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, 
StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 12 23:49:35.884058 containerd[1532]: 2025-09-12 23:49:35.880 [INFO][4177] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="7893c55810c75bce121c8269edc8db4b2d8d85ec02bef7fe13c31e6b5344df2e" Namespace="kube-system" Pod="coredns-7c65d6cfc9-kxdqx" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--kxdqx-eth0" Sep 12 23:49:35.912046 containerd[1532]: time="2025-09-12T23:49:35.910189718Z" level=info msg="connecting to shim 7893c55810c75bce121c8269edc8db4b2d8d85ec02bef7fe13c31e6b5344df2e" address="unix:///run/containerd/s/abf3c5ba3059aa49918c5636785f0f63569239424bf4e4ff47f82dbae8c220c3" namespace=k8s.io protocol=ttrpc version=3 Sep 12 23:49:35.940416 systemd[1]: Started cri-containerd-7893c55810c75bce121c8269edc8db4b2d8d85ec02bef7fe13c31e6b5344df2e.scope - libcontainer container 7893c55810c75bce121c8269edc8db4b2d8d85ec02bef7fe13c31e6b5344df2e. 
Sep 12 23:49:35.999323 systemd-resolved[1348]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Sep 12 23:49:36.028667 containerd[1532]: time="2025-09-12T23:49:36.028609321Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-kxdqx,Uid:072481d6-9cb5-4023-9a42-06a2409e0fc7,Namespace:kube-system,Attempt:0,} returns sandbox id \"7893c55810c75bce121c8269edc8db4b2d8d85ec02bef7fe13c31e6b5344df2e\"" Sep 12 23:49:36.042299 containerd[1532]: time="2025-09-12T23:49:36.042258436Z" level=info msg="CreateContainer within sandbox \"7893c55810c75bce121c8269edc8db4b2d8d85ec02bef7fe13c31e6b5344df2e\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Sep 12 23:49:36.058362 containerd[1532]: time="2025-09-12T23:49:36.058319880Z" level=info msg="Container 62c86626d9d4fc8c544924f19a6ce4ab323482a881bc59fff34bd26fdb2a7a2f: CDI devices from CRI Config.CDIDevices: []" Sep 12 23:49:36.069210 containerd[1532]: time="2025-09-12T23:49:36.069140618Z" level=info msg="CreateContainer within sandbox \"7893c55810c75bce121c8269edc8db4b2d8d85ec02bef7fe13c31e6b5344df2e\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"62c86626d9d4fc8c544924f19a6ce4ab323482a881bc59fff34bd26fdb2a7a2f\"" Sep 12 23:49:36.069810 containerd[1532]: time="2025-09-12T23:49:36.069748670Z" level=info msg="StartContainer for \"62c86626d9d4fc8c544924f19a6ce4ab323482a881bc59fff34bd26fdb2a7a2f\"" Sep 12 23:49:36.070898 containerd[1532]: time="2025-09-12T23:49:36.070852733Z" level=info msg="connecting to shim 62c86626d9d4fc8c544924f19a6ce4ab323482a881bc59fff34bd26fdb2a7a2f" address="unix:///run/containerd/s/abf3c5ba3059aa49918c5636785f0f63569239424bf4e4ff47f82dbae8c220c3" protocol=ttrpc version=3 Sep 12 23:49:36.088398 systemd[1]: Started cri-containerd-62c86626d9d4fc8c544924f19a6ce4ab323482a881bc59fff34bd26fdb2a7a2f.scope - libcontainer container 62c86626d9d4fc8c544924f19a6ce4ab323482a881bc59fff34bd26fdb2a7a2f. 
Sep 12 23:49:36.139678 containerd[1532]: time="2025-09-12T23:49:36.139447036Z" level=info msg="StartContainer for \"62c86626d9d4fc8c544924f19a6ce4ab323482a881bc59fff34bd26fdb2a7a2f\" returns successfully" Sep 12 23:49:36.363285 systemd-networkd[1431]: vxlan.calico: Link UP Sep 12 23:49:36.363292 systemd-networkd[1431]: vxlan.calico: Gained carrier Sep 12 23:49:36.470634 containerd[1532]: time="2025-09-12T23:49:36.470576276Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-bfb846b8b-c7k55,Uid:882830d1-c02f-4c0f-bbc9-595b7b524296,Namespace:calico-apiserver,Attempt:0,}" Sep 12 23:49:36.472727 containerd[1532]: time="2025-09-12T23:49:36.472690838Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-b974db775-2dqk7,Uid:5c790fd4-d385-4fdb-9724-a15ea9c3127a,Namespace:calico-system,Attempt:0,}" Sep 12 23:49:36.472816 containerd[1532]: time="2025-09-12T23:49:36.472787160Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-rrls7,Uid:842c38e2-26c7-4aa4-8a44-a5b8ff4a773e,Namespace:calico-system,Attempt:0,}" Sep 12 23:49:36.647092 kubelet[2637]: I0912 23:49:36.646021 2637 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7c65d6cfc9-kxdqx" podStartSLOduration=34.646005254 podStartE2EDuration="34.646005254s" podCreationTimestamp="2025-09-12 23:49:02 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-12 23:49:36.645061715 +0000 UTC m=+40.296593002" watchObservedRunningTime="2025-09-12 23:49:36.646005254 +0000 UTC m=+40.297536541" Sep 12 23:49:36.665112 systemd[1]: Started sshd@7-10.0.0.100:22-10.0.0.1:38502.service - OpenSSH per-connection server daemon (10.0.0.1:38502). 
Sep 12 23:49:36.670313 systemd-networkd[1431]: calieb967f3228a: Link UP Sep 12 23:49:36.671669 systemd-networkd[1431]: calieb967f3228a: Gained carrier Sep 12 23:49:36.691486 containerd[1532]: 2025-09-12 23:49:36.518 [INFO][4528] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--apiserver--bfb846b8b--c7k55-eth0 calico-apiserver-bfb846b8b- calico-apiserver 882830d1-c02f-4c0f-bbc9-595b7b524296 790 0 2025-09-12 23:49:11 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:bfb846b8b projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s localhost calico-apiserver-bfb846b8b-c7k55 eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] calieb967f3228a [] [] }} ContainerID="3534a3fbb73e5c3cd41428c241db5b5a838eb3685935af76fcfee15a96bf149c" Namespace="calico-apiserver" Pod="calico-apiserver-bfb846b8b-c7k55" WorkloadEndpoint="localhost-k8s-calico--apiserver--bfb846b8b--c7k55-" Sep 12 23:49:36.691486 containerd[1532]: 2025-09-12 23:49:36.518 [INFO][4528] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="3534a3fbb73e5c3cd41428c241db5b5a838eb3685935af76fcfee15a96bf149c" Namespace="calico-apiserver" Pod="calico-apiserver-bfb846b8b-c7k55" WorkloadEndpoint="localhost-k8s-calico--apiserver--bfb846b8b--c7k55-eth0" Sep 12 23:49:36.691486 containerd[1532]: 2025-09-12 23:49:36.581 [INFO][4568] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="3534a3fbb73e5c3cd41428c241db5b5a838eb3685935af76fcfee15a96bf149c" HandleID="k8s-pod-network.3534a3fbb73e5c3cd41428c241db5b5a838eb3685935af76fcfee15a96bf149c" Workload="localhost-k8s-calico--apiserver--bfb846b8b--c7k55-eth0" Sep 12 23:49:36.691486 containerd[1532]: 2025-09-12 23:49:36.581 [INFO][4568] ipam/ipam_plugin.go 265: Auto 
assigning IP ContainerID="3534a3fbb73e5c3cd41428c241db5b5a838eb3685935af76fcfee15a96bf149c" HandleID="k8s-pod-network.3534a3fbb73e5c3cd41428c241db5b5a838eb3685935af76fcfee15a96bf149c" Workload="localhost-k8s-calico--apiserver--bfb846b8b--c7k55-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40001a2610), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"localhost", "pod":"calico-apiserver-bfb846b8b-c7k55", "timestamp":"2025-09-12 23:49:36.581417712 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 12 23:49:36.691486 containerd[1532]: 2025-09-12 23:49:36.581 [INFO][4568] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 12 23:49:36.691486 containerd[1532]: 2025-09-12 23:49:36.581 [INFO][4568] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 12 23:49:36.691486 containerd[1532]: 2025-09-12 23:49:36.581 [INFO][4568] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Sep 12 23:49:36.691486 containerd[1532]: 2025-09-12 23:49:36.593 [INFO][4568] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.3534a3fbb73e5c3cd41428c241db5b5a838eb3685935af76fcfee15a96bf149c" host="localhost" Sep 12 23:49:36.691486 containerd[1532]: 2025-09-12 23:49:36.598 [INFO][4568] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Sep 12 23:49:36.691486 containerd[1532]: 2025-09-12 23:49:36.608 [INFO][4568] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Sep 12 23:49:36.691486 containerd[1532]: 2025-09-12 23:49:36.612 [INFO][4568] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Sep 12 23:49:36.691486 containerd[1532]: 2025-09-12 23:49:36.616 [INFO][4568] ipam/ipam.go 235: Affinity is confirmed and block has been 
loaded cidr=192.168.88.128/26 host="localhost" Sep 12 23:49:36.691486 containerd[1532]: 2025-09-12 23:49:36.616 [INFO][4568] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.3534a3fbb73e5c3cd41428c241db5b5a838eb3685935af76fcfee15a96bf149c" host="localhost" Sep 12 23:49:36.691486 containerd[1532]: 2025-09-12 23:49:36.619 [INFO][4568] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.3534a3fbb73e5c3cd41428c241db5b5a838eb3685935af76fcfee15a96bf149c Sep 12 23:49:36.691486 containerd[1532]: 2025-09-12 23:49:36.627 [INFO][4568] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.3534a3fbb73e5c3cd41428c241db5b5a838eb3685935af76fcfee15a96bf149c" host="localhost" Sep 12 23:49:36.691486 containerd[1532]: 2025-09-12 23:49:36.642 [INFO][4568] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.133/26] block=192.168.88.128/26 handle="k8s-pod-network.3534a3fbb73e5c3cd41428c241db5b5a838eb3685935af76fcfee15a96bf149c" host="localhost" Sep 12 23:49:36.691486 containerd[1532]: 2025-09-12 23:49:36.643 [INFO][4568] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.133/26] handle="k8s-pod-network.3534a3fbb73e5c3cd41428c241db5b5a838eb3685935af76fcfee15a96bf149c" host="localhost" Sep 12 23:49:36.691486 containerd[1532]: 2025-09-12 23:49:36.647 [INFO][4568] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
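The IPAM trace above follows a fixed sequence: acquire the host-wide lock, confirm the host's affinity for block 192.168.88.128/26, assign the next free address from that block (192.168.88.133 here), write the block back to claim the IP, then release the lock. The allocation step can be sketched as below; this is an illustration of block-based assignment under assumed data structures, not Calico's actual implementation:

```python
import ipaddress

def assign_from_block(block_cidr: str, allocated: set[str]) -> str:
    """Return the next free host address in an IPAM block, mimicking the
    log's 'Attempting to assign 1 addresses from block' step."""
    block = ipaddress.ip_network(block_cidr)
    for addr in block.hosts():  # skips the network and broadcast addresses
        if str(addr) not in allocated:
            allocated.add(str(addr))  # the real plugin then writes the block to claim the IP
            return str(addr)
    raise RuntimeError(f"block {block_cidr} is full")

# By this point in the log, .129-.132 were claimed earlier, so the next claim yields .133:
already = {f"192.168.88.{n}" for n in range(129, 133)}
ip = assign_from_block("192.168.88.128/26", already)
# ip == "192.168.88.133"
```

The host-wide lock visible in the log ("About to acquire" / "Acquired" / "Released") is what lets the three concurrent CNI ADD calls in this window ([4568], [4589], [4581]) hand out .133, .134, and .135 without collisions.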
Sep 12 23:49:36.691486 containerd[1532]: 2025-09-12 23:49:36.647 [INFO][4568] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.133/26] IPv6=[] ContainerID="3534a3fbb73e5c3cd41428c241db5b5a838eb3685935af76fcfee15a96bf149c" HandleID="k8s-pod-network.3534a3fbb73e5c3cd41428c241db5b5a838eb3685935af76fcfee15a96bf149c" Workload="localhost-k8s-calico--apiserver--bfb846b8b--c7k55-eth0" Sep 12 23:49:36.692005 containerd[1532]: 2025-09-12 23:49:36.664 [INFO][4528] cni-plugin/k8s.go 418: Populated endpoint ContainerID="3534a3fbb73e5c3cd41428c241db5b5a838eb3685935af76fcfee15a96bf149c" Namespace="calico-apiserver" Pod="calico-apiserver-bfb846b8b-c7k55" WorkloadEndpoint="localhost-k8s-calico--apiserver--bfb846b8b--c7k55-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--bfb846b8b--c7k55-eth0", GenerateName:"calico-apiserver-bfb846b8b-", Namespace:"calico-apiserver", SelfLink:"", UID:"882830d1-c02f-4c0f-bbc9-595b7b524296", ResourceVersion:"790", Generation:0, CreationTimestamp:time.Date(2025, time.September, 12, 23, 49, 11, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"bfb846b8b", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-apiserver-bfb846b8b-c7k55", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", 
Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calieb967f3228a", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 12 23:49:36.692005 containerd[1532]: 2025-09-12 23:49:36.665 [INFO][4528] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.133/32] ContainerID="3534a3fbb73e5c3cd41428c241db5b5a838eb3685935af76fcfee15a96bf149c" Namespace="calico-apiserver" Pod="calico-apiserver-bfb846b8b-c7k55" WorkloadEndpoint="localhost-k8s-calico--apiserver--bfb846b8b--c7k55-eth0" Sep 12 23:49:36.692005 containerd[1532]: 2025-09-12 23:49:36.665 [INFO][4528] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calieb967f3228a ContainerID="3534a3fbb73e5c3cd41428c241db5b5a838eb3685935af76fcfee15a96bf149c" Namespace="calico-apiserver" Pod="calico-apiserver-bfb846b8b-c7k55" WorkloadEndpoint="localhost-k8s-calico--apiserver--bfb846b8b--c7k55-eth0" Sep 12 23:49:36.692005 containerd[1532]: 2025-09-12 23:49:36.672 [INFO][4528] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="3534a3fbb73e5c3cd41428c241db5b5a838eb3685935af76fcfee15a96bf149c" Namespace="calico-apiserver" Pod="calico-apiserver-bfb846b8b-c7k55" WorkloadEndpoint="localhost-k8s-calico--apiserver--bfb846b8b--c7k55-eth0" Sep 12 23:49:36.692005 containerd[1532]: 2025-09-12 23:49:36.674 [INFO][4528] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="3534a3fbb73e5c3cd41428c241db5b5a838eb3685935af76fcfee15a96bf149c" Namespace="calico-apiserver" Pod="calico-apiserver-bfb846b8b-c7k55" WorkloadEndpoint="localhost-k8s-calico--apiserver--bfb846b8b--c7k55-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--bfb846b8b--c7k55-eth0", GenerateName:"calico-apiserver-bfb846b8b-", 
Namespace:"calico-apiserver", SelfLink:"", UID:"882830d1-c02f-4c0f-bbc9-595b7b524296", ResourceVersion:"790", Generation:0, CreationTimestamp:time.Date(2025, time.September, 12, 23, 49, 11, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"bfb846b8b", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"3534a3fbb73e5c3cd41428c241db5b5a838eb3685935af76fcfee15a96bf149c", Pod:"calico-apiserver-bfb846b8b-c7k55", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calieb967f3228a", MAC:"8e:9f:08:6e:e5:af", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 12 23:49:36.692005 containerd[1532]: 2025-09-12 23:49:36.686 [INFO][4528] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="3534a3fbb73e5c3cd41428c241db5b5a838eb3685935af76fcfee15a96bf149c" Namespace="calico-apiserver" Pod="calico-apiserver-bfb846b8b-c7k55" WorkloadEndpoint="localhost-k8s-calico--apiserver--bfb846b8b--c7k55-eth0" Sep 12 23:49:36.752180 sshd[4623]: Accepted publickey for core from 10.0.0.1 port 38502 ssh2: RSA SHA256:U495jLcrOdK3hoPgih3/zUS8L+hgQo+VhebSoZqpcKw Sep 12 23:49:36.753939 sshd-session[4623]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 12 23:49:36.765519 containerd[1532]: 
time="2025-09-12T23:49:36.765466304Z" level=info msg="connecting to shim 3534a3fbb73e5c3cd41428c241db5b5a838eb3685935af76fcfee15a96bf149c" address="unix:///run/containerd/s/6b62af321c7c12a6a0b531a35302c14481356776343eb77561c9a6fdc3cb3c5c" namespace=k8s.io protocol=ttrpc version=3 Sep 12 23:49:36.769835 systemd-logind[1503]: New session 8 of user core. Sep 12 23:49:36.775505 systemd[1]: Started session-8.scope - Session 8 of User core. Sep 12 23:49:36.778351 systemd-networkd[1431]: cali85d3dd41ab2: Link UP Sep 12 23:49:36.780971 systemd-networkd[1431]: cali85d3dd41ab2: Gained carrier Sep 12 23:49:36.806213 containerd[1532]: 2025-09-12 23:49:36.571 [INFO][4551] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--kube--controllers--b974db775--2dqk7-eth0 calico-kube-controllers-b974db775- calico-system 5c790fd4-d385-4fdb-9724-a15ea9c3127a 796 0 2025-09-12 23:49:17 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:b974db775 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s localhost calico-kube-controllers-b974db775-2dqk7 eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] cali85d3dd41ab2 [] [] }} ContainerID="b46b4d5852ff70526f8f5482986c3378d433dfefe8de5967e2cdb3f3607b65d1" Namespace="calico-system" Pod="calico-kube-controllers-b974db775-2dqk7" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--b974db775--2dqk7-" Sep 12 23:49:36.806213 containerd[1532]: 2025-09-12 23:49:36.571 [INFO][4551] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="b46b4d5852ff70526f8f5482986c3378d433dfefe8de5967e2cdb3f3607b65d1" Namespace="calico-system" Pod="calico-kube-controllers-b974db775-2dqk7" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--b974db775--2dqk7-eth0" Sep 
12 23:49:36.806213 containerd[1532]: 2025-09-12 23:49:36.602 [INFO][4589] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="b46b4d5852ff70526f8f5482986c3378d433dfefe8de5967e2cdb3f3607b65d1" HandleID="k8s-pod-network.b46b4d5852ff70526f8f5482986c3378d433dfefe8de5967e2cdb3f3607b65d1" Workload="localhost-k8s-calico--kube--controllers--b974db775--2dqk7-eth0" Sep 12 23:49:36.806213 containerd[1532]: 2025-09-12 23:49:36.602 [INFO][4589] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="b46b4d5852ff70526f8f5482986c3378d433dfefe8de5967e2cdb3f3607b65d1" HandleID="k8s-pod-network.b46b4d5852ff70526f8f5482986c3378d433dfefe8de5967e2cdb3f3607b65d1" Workload="localhost-k8s-calico--kube--controllers--b974db775--2dqk7-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x400004d6a0), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"calico-kube-controllers-b974db775-2dqk7", "timestamp":"2025-09-12 23:49:36.602765342 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 12 23:49:36.806213 containerd[1532]: 2025-09-12 23:49:36.603 [INFO][4589] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 12 23:49:36.806213 containerd[1532]: 2025-09-12 23:49:36.643 [INFO][4589] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Sep 12 23:49:36.806213 containerd[1532]: 2025-09-12 23:49:36.644 [INFO][4589] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Sep 12 23:49:36.806213 containerd[1532]: 2025-09-12 23:49:36.701 [INFO][4589] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.b46b4d5852ff70526f8f5482986c3378d433dfefe8de5967e2cdb3f3607b65d1" host="localhost" Sep 12 23:49:36.806213 containerd[1532]: 2025-09-12 23:49:36.709 [INFO][4589] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Sep 12 23:49:36.806213 containerd[1532]: 2025-09-12 23:49:36.721 [INFO][4589] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Sep 12 23:49:36.806213 containerd[1532]: 2025-09-12 23:49:36.733 [INFO][4589] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Sep 12 23:49:36.806213 containerd[1532]: 2025-09-12 23:49:36.737 [INFO][4589] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Sep 12 23:49:36.806213 containerd[1532]: 2025-09-12 23:49:36.737 [INFO][4589] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.b46b4d5852ff70526f8f5482986c3378d433dfefe8de5967e2cdb3f3607b65d1" host="localhost" Sep 12 23:49:36.806213 containerd[1532]: 2025-09-12 23:49:36.741 [INFO][4589] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.b46b4d5852ff70526f8f5482986c3378d433dfefe8de5967e2cdb3f3607b65d1 Sep 12 23:49:36.806213 containerd[1532]: 2025-09-12 23:49:36.756 [INFO][4589] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.b46b4d5852ff70526f8f5482986c3378d433dfefe8de5967e2cdb3f3607b65d1" host="localhost" Sep 12 23:49:36.806213 containerd[1532]: 2025-09-12 23:49:36.765 [INFO][4589] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.134/26] block=192.168.88.128/26 
handle="k8s-pod-network.b46b4d5852ff70526f8f5482986c3378d433dfefe8de5967e2cdb3f3607b65d1" host="localhost" Sep 12 23:49:36.806213 containerd[1532]: 2025-09-12 23:49:36.765 [INFO][4589] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.134/26] handle="k8s-pod-network.b46b4d5852ff70526f8f5482986c3378d433dfefe8de5967e2cdb3f3607b65d1" host="localhost" Sep 12 23:49:36.806213 containerd[1532]: 2025-09-12 23:49:36.765 [INFO][4589] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 12 23:49:36.806213 containerd[1532]: 2025-09-12 23:49:36.765 [INFO][4589] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.134/26] IPv6=[] ContainerID="b46b4d5852ff70526f8f5482986c3378d433dfefe8de5967e2cdb3f3607b65d1" HandleID="k8s-pod-network.b46b4d5852ff70526f8f5482986c3378d433dfefe8de5967e2cdb3f3607b65d1" Workload="localhost-k8s-calico--kube--controllers--b974db775--2dqk7-eth0" Sep 12 23:49:36.806746 containerd[1532]: 2025-09-12 23:49:36.770 [INFO][4551] cni-plugin/k8s.go 418: Populated endpoint ContainerID="b46b4d5852ff70526f8f5482986c3378d433dfefe8de5967e2cdb3f3607b65d1" Namespace="calico-system" Pod="calico-kube-controllers-b974db775-2dqk7" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--b974db775--2dqk7-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--b974db775--2dqk7-eth0", GenerateName:"calico-kube-controllers-b974db775-", Namespace:"calico-system", SelfLink:"", UID:"5c790fd4-d385-4fdb-9724-a15ea9c3127a", ResourceVersion:"796", Generation:0, CreationTimestamp:time.Date(2025, time.September, 12, 23, 49, 17, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"b974db775", "projectcalico.org/namespace":"calico-system", 
"projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-kube-controllers-b974db775-2dqk7", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali85d3dd41ab2", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 12 23:49:36.806746 containerd[1532]: 2025-09-12 23:49:36.772 [INFO][4551] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.134/32] ContainerID="b46b4d5852ff70526f8f5482986c3378d433dfefe8de5967e2cdb3f3607b65d1" Namespace="calico-system" Pod="calico-kube-controllers-b974db775-2dqk7" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--b974db775--2dqk7-eth0" Sep 12 23:49:36.806746 containerd[1532]: 2025-09-12 23:49:36.772 [INFO][4551] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali85d3dd41ab2 ContainerID="b46b4d5852ff70526f8f5482986c3378d433dfefe8de5967e2cdb3f3607b65d1" Namespace="calico-system" Pod="calico-kube-controllers-b974db775-2dqk7" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--b974db775--2dqk7-eth0" Sep 12 23:49:36.806746 containerd[1532]: 2025-09-12 23:49:36.780 [INFO][4551] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="b46b4d5852ff70526f8f5482986c3378d433dfefe8de5967e2cdb3f3607b65d1" Namespace="calico-system" Pod="calico-kube-controllers-b974db775-2dqk7" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--b974db775--2dqk7-eth0" Sep 12 23:49:36.806746 containerd[1532]: 2025-09-12 
23:49:36.782 [INFO][4551] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="b46b4d5852ff70526f8f5482986c3378d433dfefe8de5967e2cdb3f3607b65d1" Namespace="calico-system" Pod="calico-kube-controllers-b974db775-2dqk7" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--b974db775--2dqk7-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--b974db775--2dqk7-eth0", GenerateName:"calico-kube-controllers-b974db775-", Namespace:"calico-system", SelfLink:"", UID:"5c790fd4-d385-4fdb-9724-a15ea9c3127a", ResourceVersion:"796", Generation:0, CreationTimestamp:time.Date(2025, time.September, 12, 23, 49, 17, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"b974db775", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"b46b4d5852ff70526f8f5482986c3378d433dfefe8de5967e2cdb3f3607b65d1", Pod:"calico-kube-controllers-b974db775-2dqk7", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali85d3dd41ab2", MAC:"f2:a8:28:4c:d3:4a", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 12 23:49:36.806746 containerd[1532]: 2025-09-12 
23:49:36.803 [INFO][4551] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="b46b4d5852ff70526f8f5482986c3378d433dfefe8de5967e2cdb3f3607b65d1" Namespace="calico-system" Pod="calico-kube-controllers-b974db775-2dqk7" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--b974db775--2dqk7-eth0" Sep 12 23:49:36.819437 systemd[1]: Started cri-containerd-3534a3fbb73e5c3cd41428c241db5b5a838eb3685935af76fcfee15a96bf149c.scope - libcontainer container 3534a3fbb73e5c3cd41428c241db5b5a838eb3685935af76fcfee15a96bf149c. Sep 12 23:49:36.832277 systemd-resolved[1348]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Sep 12 23:49:36.848336 systemd-networkd[1431]: cali18c8b310216: Gained IPv6LL Sep 12 23:49:36.861543 systemd-networkd[1431]: cali2cb37c7178c: Link UP Sep 12 23:49:36.864933 systemd-networkd[1431]: cali2cb37c7178c: Gained carrier Sep 12 23:49:36.868084 containerd[1532]: time="2025-09-12T23:49:36.868030453Z" level=info msg="connecting to shim b46b4d5852ff70526f8f5482986c3378d433dfefe8de5967e2cdb3f3607b65d1" address="unix:///run/containerd/s/e9ab9701fcf3827254110b9cb6c1af0d4ef6a1e459190a0b5e8486e356c827bb" namespace=k8s.io protocol=ttrpc version=3 Sep 12 23:49:36.887316 containerd[1532]: 2025-09-12 23:49:36.557 [INFO][4540] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-csi--node--driver--rrls7-eth0 csi-node-driver- calico-system 842c38e2-26c7-4aa4-8a44-a5b8ff4a773e 689 0 2025-09-12 23:49:16 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:856c6b598f k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s localhost csi-node-driver-rrls7 eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] cali2cb37c7178c [] [] }} 
ContainerID="cc40b451f367150b136d047265b4f3afe4033d59da07061f4478b60d24535f1d" Namespace="calico-system" Pod="csi-node-driver-rrls7" WorkloadEndpoint="localhost-k8s-csi--node--driver--rrls7-" Sep 12 23:49:36.887316 containerd[1532]: 2025-09-12 23:49:36.557 [INFO][4540] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="cc40b451f367150b136d047265b4f3afe4033d59da07061f4478b60d24535f1d" Namespace="calico-system" Pod="csi-node-driver-rrls7" WorkloadEndpoint="localhost-k8s-csi--node--driver--rrls7-eth0" Sep 12 23:49:36.887316 containerd[1532]: 2025-09-12 23:49:36.616 [INFO][4581] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="cc40b451f367150b136d047265b4f3afe4033d59da07061f4478b60d24535f1d" HandleID="k8s-pod-network.cc40b451f367150b136d047265b4f3afe4033d59da07061f4478b60d24535f1d" Workload="localhost-k8s-csi--node--driver--rrls7-eth0" Sep 12 23:49:36.887316 containerd[1532]: 2025-09-12 23:49:36.616 [INFO][4581] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="cc40b451f367150b136d047265b4f3afe4033d59da07061f4478b60d24535f1d" HandleID="k8s-pod-network.cc40b451f367150b136d047265b4f3afe4033d59da07061f4478b60d24535f1d" Workload="localhost-k8s-csi--node--driver--rrls7-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x400004ce50), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"csi-node-driver-rrls7", "timestamp":"2025-09-12 23:49:36.616719144 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 12 23:49:36.887316 containerd[1532]: 2025-09-12 23:49:36.616 [INFO][4581] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 12 23:49:36.887316 containerd[1532]: 2025-09-12 23:49:36.765 [INFO][4581] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Sep 12 23:49:36.887316 containerd[1532]: 2025-09-12 23:49:36.765 [INFO][4581] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Sep 12 23:49:36.887316 containerd[1532]: 2025-09-12 23:49:36.797 [INFO][4581] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.cc40b451f367150b136d047265b4f3afe4033d59da07061f4478b60d24535f1d" host="localhost" Sep 12 23:49:36.887316 containerd[1532]: 2025-09-12 23:49:36.811 [INFO][4581] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Sep 12 23:49:36.887316 containerd[1532]: 2025-09-12 23:49:36.817 [INFO][4581] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Sep 12 23:49:36.887316 containerd[1532]: 2025-09-12 23:49:36.820 [INFO][4581] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Sep 12 23:49:36.887316 containerd[1532]: 2025-09-12 23:49:36.823 [INFO][4581] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Sep 12 23:49:36.887316 containerd[1532]: 2025-09-12 23:49:36.823 [INFO][4581] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.cc40b451f367150b136d047265b4f3afe4033d59da07061f4478b60d24535f1d" host="localhost" Sep 12 23:49:36.887316 containerd[1532]: 2025-09-12 23:49:36.825 [INFO][4581] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.cc40b451f367150b136d047265b4f3afe4033d59da07061f4478b60d24535f1d Sep 12 23:49:36.887316 containerd[1532]: 2025-09-12 23:49:36.834 [INFO][4581] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.cc40b451f367150b136d047265b4f3afe4033d59da07061f4478b60d24535f1d" host="localhost" Sep 12 23:49:36.887316 containerd[1532]: 2025-09-12 23:49:36.844 [INFO][4581] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.135/26] block=192.168.88.128/26 
handle="k8s-pod-network.cc40b451f367150b136d047265b4f3afe4033d59da07061f4478b60d24535f1d" host="localhost" Sep 12 23:49:36.887316 containerd[1532]: 2025-09-12 23:49:36.844 [INFO][4581] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.135/26] handle="k8s-pod-network.cc40b451f367150b136d047265b4f3afe4033d59da07061f4478b60d24535f1d" host="localhost" Sep 12 23:49:36.887316 containerd[1532]: 2025-09-12 23:49:36.844 [INFO][4581] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 12 23:49:36.887316 containerd[1532]: 2025-09-12 23:49:36.844 [INFO][4581] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.135/26] IPv6=[] ContainerID="cc40b451f367150b136d047265b4f3afe4033d59da07061f4478b60d24535f1d" HandleID="k8s-pod-network.cc40b451f367150b136d047265b4f3afe4033d59da07061f4478b60d24535f1d" Workload="localhost-k8s-csi--node--driver--rrls7-eth0" Sep 12 23:49:36.887967 containerd[1532]: 2025-09-12 23:49:36.856 [INFO][4540] cni-plugin/k8s.go 418: Populated endpoint ContainerID="cc40b451f367150b136d047265b4f3afe4033d59da07061f4478b60d24535f1d" Namespace="calico-system" Pod="csi-node-driver-rrls7" WorkloadEndpoint="localhost-k8s-csi--node--driver--rrls7-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--rrls7-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"842c38e2-26c7-4aa4-8a44-a5b8ff4a773e", ResourceVersion:"689", Generation:0, CreationTimestamp:time.Date(2025, time.September, 12, 23, 49, 16, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"856c6b598f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", 
"projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"csi-node-driver-rrls7", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali2cb37c7178c", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 12 23:49:36.887967 containerd[1532]: 2025-09-12 23:49:36.856 [INFO][4540] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.135/32] ContainerID="cc40b451f367150b136d047265b4f3afe4033d59da07061f4478b60d24535f1d" Namespace="calico-system" Pod="csi-node-driver-rrls7" WorkloadEndpoint="localhost-k8s-csi--node--driver--rrls7-eth0" Sep 12 23:49:36.887967 containerd[1532]: 2025-09-12 23:49:36.856 [INFO][4540] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali2cb37c7178c ContainerID="cc40b451f367150b136d047265b4f3afe4033d59da07061f4478b60d24535f1d" Namespace="calico-system" Pod="csi-node-driver-rrls7" WorkloadEndpoint="localhost-k8s-csi--node--driver--rrls7-eth0" Sep 12 23:49:36.887967 containerd[1532]: 2025-09-12 23:49:36.864 [INFO][4540] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="cc40b451f367150b136d047265b4f3afe4033d59da07061f4478b60d24535f1d" Namespace="calico-system" Pod="csi-node-driver-rrls7" WorkloadEndpoint="localhost-k8s-csi--node--driver--rrls7-eth0" Sep 12 23:49:36.887967 containerd[1532]: 2025-09-12 23:49:36.867 [INFO][4540] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="cc40b451f367150b136d047265b4f3afe4033d59da07061f4478b60d24535f1d" 
Namespace="calico-system" Pod="csi-node-driver-rrls7" WorkloadEndpoint="localhost-k8s-csi--node--driver--rrls7-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--rrls7-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"842c38e2-26c7-4aa4-8a44-a5b8ff4a773e", ResourceVersion:"689", Generation:0, CreationTimestamp:time.Date(2025, time.September, 12, 23, 49, 16, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"856c6b598f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"cc40b451f367150b136d047265b4f3afe4033d59da07061f4478b60d24535f1d", Pod:"csi-node-driver-rrls7", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali2cb37c7178c", MAC:"2e:8f:c0:65:23:2f", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 12 23:49:36.887967 containerd[1532]: 2025-09-12 23:49:36.879 [INFO][4540] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="cc40b451f367150b136d047265b4f3afe4033d59da07061f4478b60d24535f1d" Namespace="calico-system" Pod="csi-node-driver-rrls7" 
WorkloadEndpoint="localhost-k8s-csi--node--driver--rrls7-eth0" Sep 12 23:49:36.892144 containerd[1532]: time="2025-09-12T23:49:36.892107099Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-bfb846b8b-c7k55,Uid:882830d1-c02f-4c0f-bbc9-595b7b524296,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"3534a3fbb73e5c3cd41428c241db5b5a838eb3685935af76fcfee15a96bf149c\"" Sep 12 23:49:36.923835 containerd[1532]: time="2025-09-12T23:49:36.923647375Z" level=info msg="connecting to shim cc40b451f367150b136d047265b4f3afe4033d59da07061f4478b60d24535f1d" address="unix:///run/containerd/s/5de46142ce9036f86a0b85357fd5c6e41ff9391464baa096205cb37f8b2ce780" namespace=k8s.io protocol=ttrpc version=3 Sep 12 23:49:36.948336 systemd[1]: Started cri-containerd-b46b4d5852ff70526f8f5482986c3378d433dfefe8de5967e2cdb3f3607b65d1.scope - libcontainer container b46b4d5852ff70526f8f5482986c3378d433dfefe8de5967e2cdb3f3607b65d1. Sep 12 23:49:36.969407 systemd[1]: Started cri-containerd-cc40b451f367150b136d047265b4f3afe4033d59da07061f4478b60d24535f1d.scope - libcontainer container cc40b451f367150b136d047265b4f3afe4033d59da07061f4478b60d24535f1d. 
Sep 12 23:49:36.985585 systemd-resolved[1348]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Sep 12 23:49:37.000093 systemd-resolved[1348]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Sep 12 23:49:37.005669 containerd[1532]: time="2025-09-12T23:49:37.004531604Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-rrls7,Uid:842c38e2-26c7-4aa4-8a44-a5b8ff4a773e,Namespace:calico-system,Attempt:0,} returns sandbox id \"cc40b451f367150b136d047265b4f3afe4033d59da07061f4478b60d24535f1d\"" Sep 12 23:49:37.049188 containerd[1532]: time="2025-09-12T23:49:37.049134759Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-b974db775-2dqk7,Uid:5c790fd4-d385-4fdb-9724-a15ea9c3127a,Namespace:calico-system,Attempt:0,} returns sandbox id \"b46b4d5852ff70526f8f5482986c3378d433dfefe8de5967e2cdb3f3607b65d1\"" Sep 12 23:49:37.071101 sshd[4668]: Connection closed by 10.0.0.1 port 38502 Sep 12 23:49:37.071552 sshd-session[4623]: pam_unix(sshd:session): session closed for user core Sep 12 23:49:37.076202 systemd[1]: sshd@7-10.0.0.100:22-10.0.0.1:38502.service: Deactivated successfully. Sep 12 23:49:37.078843 systemd[1]: session-8.scope: Deactivated successfully. Sep 12 23:49:37.080147 systemd-logind[1503]: Session 8 logged out. Waiting for processes to exit. Sep 12 23:49:37.081951 systemd-logind[1503]: Removed session 8. 
Sep 12 23:49:37.168344 systemd-networkd[1431]: cali541269ca5eb: Gained IPv6LL Sep 12 23:49:37.360608 systemd-networkd[1431]: calif244dee1d91: Gained IPv6LL Sep 12 23:49:37.380428 containerd[1532]: time="2025-09-12T23:49:37.379902568Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 23:49:37.380428 containerd[1532]: time="2025-09-12T23:49:37.380350817Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.3: active requests=0, bytes read=44530807" Sep 12 23:49:37.381236 containerd[1532]: time="2025-09-12T23:49:37.381210074Z" level=info msg="ImageCreate event name:\"sha256:632fbde00b1918016ac07458e79cc438ccda83cb762bfd5fc50a26721abced08\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 23:49:37.383617 containerd[1532]: time="2025-09-12T23:49:37.383591361Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver@sha256:6a24147f11c1edce9d6ba79bdb0c2beadec53853fb43438a287291e67b41e51b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 23:49:37.384192 containerd[1532]: time="2025-09-12T23:49:37.384114211Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.30.3\" with image id \"sha256:632fbde00b1918016ac07458e79cc438ccda83cb762bfd5fc50a26721abced08\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:6a24147f11c1edce9d6ba79bdb0c2beadec53853fb43438a287291e67b41e51b\", size \"45900064\" in 1.572739745s" Sep 12 23:49:37.384192 containerd[1532]: time="2025-09-12T23:49:37.384146771Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.3\" returns image reference \"sha256:632fbde00b1918016ac07458e79cc438ccda83cb762bfd5fc50a26721abced08\"" Sep 12 23:49:37.385263 containerd[1532]: time="2025-09-12T23:49:37.385206592Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.3\"" Sep 12 23:49:37.387213 
containerd[1532]: time="2025-09-12T23:49:37.387183911Z" level=info msg="CreateContainer within sandbox \"376eee38535063b99831591b4dbb493f6acaf79a61eda005cfcc198ee89be5ed\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Sep 12 23:49:37.392889 containerd[1532]: time="2025-09-12T23:49:37.392860382Z" level=info msg="Container c9faa9fadf67e99fb2505324785e234d1d1ecd889299986a31d725d57de66883: CDI devices from CRI Config.CDIDevices: []" Sep 12 23:49:37.398866 containerd[1532]: time="2025-09-12T23:49:37.398740378Z" level=info msg="CreateContainer within sandbox \"376eee38535063b99831591b4dbb493f6acaf79a61eda005cfcc198ee89be5ed\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"c9faa9fadf67e99fb2505324785e234d1d1ecd889299986a31d725d57de66883\"" Sep 12 23:49:37.399267 containerd[1532]: time="2025-09-12T23:49:37.399237308Z" level=info msg="StartContainer for \"c9faa9fadf67e99fb2505324785e234d1d1ecd889299986a31d725d57de66883\"" Sep 12 23:49:37.400270 containerd[1532]: time="2025-09-12T23:49:37.400243607Z" level=info msg="connecting to shim c9faa9fadf67e99fb2505324785e234d1d1ecd889299986a31d725d57de66883" address="unix:///run/containerd/s/a5bee6acc07635147668b51e71603da1c98e2fa99c39b3676ebeb1a9862c5c58" protocol=ttrpc version=3 Sep 12 23:49:37.423345 systemd[1]: Started cri-containerd-c9faa9fadf67e99fb2505324785e234d1d1ecd889299986a31d725d57de66883.scope - libcontainer container c9faa9fadf67e99fb2505324785e234d1d1ecd889299986a31d725d57de66883. 
Sep 12 23:49:37.458059 containerd[1532]: time="2025-09-12T23:49:37.457994700Z" level=info msg="StartContainer for \"c9faa9fadf67e99fb2505324785e234d1d1ecd889299986a31d725d57de66883\" returns successfully" Sep 12 23:49:37.651198 kubelet[2637]: I0912 23:49:37.650971 2637 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-bfb846b8b-c8q8g" podStartSLOduration=25.076007658 podStartE2EDuration="26.650089349s" podCreationTimestamp="2025-09-12 23:49:11 +0000 UTC" firstStartedPulling="2025-09-12 23:49:35.810972538 +0000 UTC m=+39.462503825" lastFinishedPulling="2025-09-12 23:49:37.385054229 +0000 UTC m=+41.036585516" observedRunningTime="2025-09-12 23:49:37.648487277 +0000 UTC m=+41.300018564" watchObservedRunningTime="2025-09-12 23:49:37.650089349 +0000 UTC m=+41.301620636" Sep 12 23:49:38.193666 systemd-networkd[1431]: cali2cb37c7178c: Gained IPv6LL Sep 12 23:49:38.263238 systemd-networkd[1431]: vxlan.calico: Gained IPv6LL Sep 12 23:49:38.322055 systemd-networkd[1431]: calieb967f3228a: Gained IPv6LL Sep 12 23:49:38.642306 kubelet[2637]: I0912 23:49:38.642219 2637 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Sep 12 23:49:38.768594 systemd-networkd[1431]: cali85d3dd41ab2: Gained IPv6LL Sep 12 23:49:38.809438 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3585812984.mount: Deactivated successfully. 
Sep 12 23:49:39.284408 containerd[1532]: time="2025-09-12T23:49:39.284251043Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/goldmane:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 23:49:39.286021 containerd[1532]: time="2025-09-12T23:49:39.285981435Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.3: active requests=0, bytes read=61845332" Sep 12 23:49:39.287794 containerd[1532]: time="2025-09-12T23:49:39.287563664Z" level=info msg="ImageCreate event name:\"sha256:14088376331a0622b7f6a2fbc2f2932806a6eafdd7b602f6139d3b985bf1e685\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 23:49:39.291296 containerd[1532]: time="2025-09-12T23:49:39.291244813Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/goldmane@sha256:46297703ab3739331a00a58f0d6a5498c8d3b6523ad947eed68592ee0f3e79f0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 23:49:39.292036 containerd[1532]: time="2025-09-12T23:49:39.291981387Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/goldmane:v3.30.3\" with image id \"sha256:14088376331a0622b7f6a2fbc2f2932806a6eafdd7b602f6139d3b985bf1e685\", repo tag \"ghcr.io/flatcar/calico/goldmane:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/goldmane@sha256:46297703ab3739331a00a58f0d6a5498c8d3b6523ad947eed68592ee0f3e79f0\", size \"61845178\" in 1.906745993s" Sep 12 23:49:39.292036 containerd[1532]: time="2025-09-12T23:49:39.292019227Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.3\" returns image reference \"sha256:14088376331a0622b7f6a2fbc2f2932806a6eafdd7b602f6139d3b985bf1e685\"" Sep 12 23:49:39.293697 containerd[1532]: time="2025-09-12T23:49:39.293549096Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.3\"" Sep 12 23:49:39.298557 containerd[1532]: time="2025-09-12T23:49:39.296646873Z" level=info msg="CreateContainer within sandbox 
\"3d1253cfa9d0b084acdfdf0bba03c9ec47b610c0df8b4413122df32466bef525\" for container &ContainerMetadata{Name:goldmane,Attempt:0,}" Sep 12 23:49:39.306191 containerd[1532]: time="2025-09-12T23:49:39.306133610Z" level=info msg="Container 785d9453e2b32516d5fcd5ce96c92ba5d6c1f4b54dba57e75b5c07d4ee863431: CDI devices from CRI Config.CDIDevices: []" Sep 12 23:49:39.317515 containerd[1532]: time="2025-09-12T23:49:39.317472421Z" level=info msg="CreateContainer within sandbox \"3d1253cfa9d0b084acdfdf0bba03c9ec47b610c0df8b4413122df32466bef525\" for &ContainerMetadata{Name:goldmane,Attempt:0,} returns container id \"785d9453e2b32516d5fcd5ce96c92ba5d6c1f4b54dba57e75b5c07d4ee863431\"" Sep 12 23:49:39.318191 containerd[1532]: time="2025-09-12T23:49:39.318148794Z" level=info msg="StartContainer for \"785d9453e2b32516d5fcd5ce96c92ba5d6c1f4b54dba57e75b5c07d4ee863431\"" Sep 12 23:49:39.320704 containerd[1532]: time="2025-09-12T23:49:39.320663440Z" level=info msg="connecting to shim 785d9453e2b32516d5fcd5ce96c92ba5d6c1f4b54dba57e75b5c07d4ee863431" address="unix:///run/containerd/s/8ad87e2b86173ef6372433fdf42b91170e85b56f8879457d93ca04ea8e22b486" protocol=ttrpc version=3 Sep 12 23:49:39.350395 systemd[1]: Started cri-containerd-785d9453e2b32516d5fcd5ce96c92ba5d6c1f4b54dba57e75b5c07d4ee863431.scope - libcontainer container 785d9453e2b32516d5fcd5ce96c92ba5d6c1f4b54dba57e75b5c07d4ee863431. 
Sep 12 23:49:39.400291 containerd[1532]: time="2025-09-12T23:49:39.400244921Z" level=info msg="StartContainer for \"785d9453e2b32516d5fcd5ce96c92ba5d6c1f4b54dba57e75b5c07d4ee863431\" returns successfully" Sep 12 23:49:39.535087 containerd[1532]: time="2025-09-12T23:49:39.534881027Z" level=info msg="ImageUpdate event name:\"ghcr.io/flatcar/calico/apiserver:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 23:49:39.536183 containerd[1532]: time="2025-09-12T23:49:39.536030569Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.3: active requests=0, bytes read=77" Sep 12 23:49:39.538288 containerd[1532]: time="2025-09-12T23:49:39.538143808Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.30.3\" with image id \"sha256:632fbde00b1918016ac07458e79cc438ccda83cb762bfd5fc50a26721abced08\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:6a24147f11c1edce9d6ba79bdb0c2beadec53853fb43438a287291e67b41e51b\", size \"45900064\" in 244.560832ms" Sep 12 23:49:39.538288 containerd[1532]: time="2025-09-12T23:49:39.538200169Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.3\" returns image reference \"sha256:632fbde00b1918016ac07458e79cc438ccda83cb762bfd5fc50a26721abced08\"" Sep 12 23:49:39.542027 containerd[1532]: time="2025-09-12T23:49:39.541929358Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.3\"" Sep 12 23:49:39.548071 containerd[1532]: time="2025-09-12T23:49:39.547999431Z" level=info msg="CreateContainer within sandbox \"3534a3fbb73e5c3cd41428c241db5b5a838eb3685935af76fcfee15a96bf149c\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Sep 12 23:49:39.557718 containerd[1532]: time="2025-09-12T23:49:39.557668371Z" level=info msg="Container 135d8d0e6da3d50093124d48a42400fc712a3bb527b423dbce1ca873691e18c0: CDI devices from CRI Config.CDIDevices: []" Sep 12 23:49:39.569534 containerd[1532]: 
time="2025-09-12T23:49:39.569409230Z" level=info msg="CreateContainer within sandbox \"3534a3fbb73e5c3cd41428c241db5b5a838eb3685935af76fcfee15a96bf149c\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"135d8d0e6da3d50093124d48a42400fc712a3bb527b423dbce1ca873691e18c0\"" Sep 12 23:49:39.570193 containerd[1532]: time="2025-09-12T23:49:39.570147804Z" level=info msg="StartContainer for \"135d8d0e6da3d50093124d48a42400fc712a3bb527b423dbce1ca873691e18c0\"" Sep 12 23:49:39.571521 containerd[1532]: time="2025-09-12T23:49:39.571488909Z" level=info msg="connecting to shim 135d8d0e6da3d50093124d48a42400fc712a3bb527b423dbce1ca873691e18c0" address="unix:///run/containerd/s/6b62af321c7c12a6a0b531a35302c14481356776343eb77561c9a6fdc3cb3c5c" protocol=ttrpc version=3 Sep 12 23:49:39.595387 systemd[1]: Started cri-containerd-135d8d0e6da3d50093124d48a42400fc712a3bb527b423dbce1ca873691e18c0.scope - libcontainer container 135d8d0e6da3d50093124d48a42400fc712a3bb527b423dbce1ca873691e18c0. 
Sep 12 23:49:39.637402 containerd[1532]: time="2025-09-12T23:49:39.637341974Z" level=info msg="StartContainer for \"135d8d0e6da3d50093124d48a42400fc712a3bb527b423dbce1ca873691e18c0\" returns successfully" Sep 12 23:49:39.672075 kubelet[2637]: I0912 23:49:39.670474 2637 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/goldmane-7988f88666-9xh54" podStartSLOduration=20.258812144 podStartE2EDuration="23.67045491s" podCreationTimestamp="2025-09-12 23:49:16 +0000 UTC" firstStartedPulling="2025-09-12 23:49:35.881289918 +0000 UTC m=+39.532821205" lastFinishedPulling="2025-09-12 23:49:39.292932684 +0000 UTC m=+42.944463971" observedRunningTime="2025-09-12 23:49:39.668234269 +0000 UTC m=+43.319765596" watchObservedRunningTime="2025-09-12 23:49:39.67045491 +0000 UTC m=+43.321986157" Sep 12 23:49:40.471504 containerd[1532]: time="2025-09-12T23:49:40.471462524Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-67mtl,Uid:1e734e6c-e281-45ad-9b05-72f8ed11eb0c,Namespace:kube-system,Attempt:0,}" Sep 12 23:49:40.598279 containerd[1532]: time="2025-09-12T23:49:40.597591734Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 23:49:40.599065 containerd[1532]: time="2025-09-12T23:49:40.599036240Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.3: active requests=0, bytes read=8227489" Sep 12 23:49:40.599907 containerd[1532]: time="2025-09-12T23:49:40.599873255Z" level=info msg="ImageCreate event name:\"sha256:5e2b30128ce4b607acd97d3edef62ce1a90be0259903090a51c360adbe4a8f3b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 23:49:40.617727 containerd[1532]: time="2025-09-12T23:49:40.617663178Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi@sha256:f22c88018d8b58c4ef0052f594b216a13bd6852166ac131a538c5ab2fba23bb2\" labels:{key:\"io.cri-containerd.image\" 
value:\"managed\"}" Sep 12 23:49:40.618359 containerd[1532]: time="2025-09-12T23:49:40.618332950Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/csi:v3.30.3\" with image id \"sha256:5e2b30128ce4b607acd97d3edef62ce1a90be0259903090a51c360adbe4a8f3b\", repo tag \"ghcr.io/flatcar/calico/csi:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/csi@sha256:f22c88018d8b58c4ef0052f594b216a13bd6852166ac131a538c5ab2fba23bb2\", size \"9596730\" in 1.07630055s" Sep 12 23:49:40.618429 containerd[1532]: time="2025-09-12T23:49:40.618365791Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.3\" returns image reference \"sha256:5e2b30128ce4b607acd97d3edef62ce1a90be0259903090a51c360adbe4a8f3b\"" Sep 12 23:49:40.619983 containerd[1532]: time="2025-09-12T23:49:40.619937539Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.3\"" Sep 12 23:49:40.623535 containerd[1532]: time="2025-09-12T23:49:40.623482044Z" level=info msg="CreateContainer within sandbox \"cc40b451f367150b136d047265b4f3afe4033d59da07061f4478b60d24535f1d\" for container &ContainerMetadata{Name:calico-csi,Attempt:0,}" Sep 12 23:49:40.636605 containerd[1532]: time="2025-09-12T23:49:40.636561801Z" level=info msg="Container 2a86aa3de6ba528f10579227fa5241b86596017862b8948fedbe7d3429296528: CDI devices from CRI Config.CDIDevices: []" Sep 12 23:49:40.653060 containerd[1532]: time="2025-09-12T23:49:40.652618773Z" level=info msg="CreateContainer within sandbox \"cc40b451f367150b136d047265b4f3afe4033d59da07061f4478b60d24535f1d\" for &ContainerMetadata{Name:calico-csi,Attempt:0,} returns container id \"2a86aa3de6ba528f10579227fa5241b86596017862b8948fedbe7d3429296528\"" Sep 12 23:49:40.653912 containerd[1532]: time="2025-09-12T23:49:40.653457708Z" level=info msg="StartContainer for \"2a86aa3de6ba528f10579227fa5241b86596017862b8948fedbe7d3429296528\"" Sep 12 23:49:40.655677 containerd[1532]: time="2025-09-12T23:49:40.655645228Z" level=info msg="connecting to shim 
2a86aa3de6ba528f10579227fa5241b86596017862b8948fedbe7d3429296528" address="unix:///run/containerd/s/5de46142ce9036f86a0b85357fd5c6e41ff9391464baa096205cb37f8b2ce780" protocol=ttrpc version=3 Sep 12 23:49:40.663344 kubelet[2637]: I0912 23:49:40.663192 2637 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Sep 12 23:49:40.688603 systemd-networkd[1431]: califf0e501a006: Link UP Sep 12 23:49:40.689706 systemd-networkd[1431]: califf0e501a006: Gained carrier Sep 12 23:49:40.709428 kubelet[2637]: I0912 23:49:40.709158 2637 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-bfb846b8b-c7k55" podStartSLOduration=27.064732004 podStartE2EDuration="29.709136759s" podCreationTimestamp="2025-09-12 23:49:11 +0000 UTC" firstStartedPulling="2025-09-12 23:49:36.894688231 +0000 UTC m=+40.546219478" lastFinishedPulling="2025-09-12 23:49:39.539092946 +0000 UTC m=+43.190624233" observedRunningTime="2025-09-12 23:49:39.684842178 +0000 UTC m=+43.336373465" watchObservedRunningTime="2025-09-12 23:49:40.709136759 +0000 UTC m=+44.360668046" Sep 12 23:49:40.720747 containerd[1532]: 2025-09-12 23:49:40.529 [INFO][4959] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-coredns--7c65d6cfc9--67mtl-eth0 coredns-7c65d6cfc9- kube-system 1e734e6c-e281-45ad-9b05-72f8ed11eb0c 794 0 2025-09-12 23:49:02 +0000 UTC map[k8s-app:kube-dns pod-template-hash:7c65d6cfc9 projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s localhost coredns-7c65d6cfc9-67mtl eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] califf0e501a006 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="90787750c632ff5c48bbc384ea2ce9cad53be76803458fe60ed18a3b6c2f1992" Namespace="kube-system" Pod="coredns-7c65d6cfc9-67mtl" 
WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--67mtl-" Sep 12 23:49:40.720747 containerd[1532]: 2025-09-12 23:49:40.531 [INFO][4959] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="90787750c632ff5c48bbc384ea2ce9cad53be76803458fe60ed18a3b6c2f1992" Namespace="kube-system" Pod="coredns-7c65d6cfc9-67mtl" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--67mtl-eth0" Sep 12 23:49:40.720747 containerd[1532]: 2025-09-12 23:49:40.604 [INFO][4974] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="90787750c632ff5c48bbc384ea2ce9cad53be76803458fe60ed18a3b6c2f1992" HandleID="k8s-pod-network.90787750c632ff5c48bbc384ea2ce9cad53be76803458fe60ed18a3b6c2f1992" Workload="localhost-k8s-coredns--7c65d6cfc9--67mtl-eth0" Sep 12 23:49:40.720747 containerd[1532]: 2025-09-12 23:49:40.605 [INFO][4974] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="90787750c632ff5c48bbc384ea2ce9cad53be76803458fe60ed18a3b6c2f1992" HandleID="k8s-pod-network.90787750c632ff5c48bbc384ea2ce9cad53be76803458fe60ed18a3b6c2f1992" Workload="localhost-k8s-coredns--7c65d6cfc9--67mtl-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40003b55d0), Attrs:map[string]string{"namespace":"kube-system", "node":"localhost", "pod":"coredns-7c65d6cfc9-67mtl", "timestamp":"2025-09-12 23:49:40.604964588 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 12 23:49:40.720747 containerd[1532]: 2025-09-12 23:49:40.605 [INFO][4974] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 12 23:49:40.720747 containerd[1532]: 2025-09-12 23:49:40.605 [INFO][4974] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Sep 12 23:49:40.720747 containerd[1532]: 2025-09-12 23:49:40.605 [INFO][4974] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Sep 12 23:49:40.720747 containerd[1532]: 2025-09-12 23:49:40.617 [INFO][4974] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.90787750c632ff5c48bbc384ea2ce9cad53be76803458fe60ed18a3b6c2f1992" host="localhost" Sep 12 23:49:40.720747 containerd[1532]: 2025-09-12 23:49:40.626 [INFO][4974] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Sep 12 23:49:40.720747 containerd[1532]: 2025-09-12 23:49:40.634 [INFO][4974] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Sep 12 23:49:40.720747 containerd[1532]: 2025-09-12 23:49:40.641 [INFO][4974] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Sep 12 23:49:40.720747 containerd[1532]: 2025-09-12 23:49:40.645 [INFO][4974] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Sep 12 23:49:40.720747 containerd[1532]: 2025-09-12 23:49:40.645 [INFO][4974] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.90787750c632ff5c48bbc384ea2ce9cad53be76803458fe60ed18a3b6c2f1992" host="localhost" Sep 12 23:49:40.720747 containerd[1532]: 2025-09-12 23:49:40.647 [INFO][4974] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.90787750c632ff5c48bbc384ea2ce9cad53be76803458fe60ed18a3b6c2f1992 Sep 12 23:49:40.720747 containerd[1532]: 2025-09-12 23:49:40.654 [INFO][4974] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.90787750c632ff5c48bbc384ea2ce9cad53be76803458fe60ed18a3b6c2f1992" host="localhost" Sep 12 23:49:40.720747 containerd[1532]: 2025-09-12 23:49:40.673 [INFO][4974] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.136/26] block=192.168.88.128/26 
handle="k8s-pod-network.90787750c632ff5c48bbc384ea2ce9cad53be76803458fe60ed18a3b6c2f1992" host="localhost" Sep 12 23:49:40.720747 containerd[1532]: 2025-09-12 23:49:40.674 [INFO][4974] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.136/26] handle="k8s-pod-network.90787750c632ff5c48bbc384ea2ce9cad53be76803458fe60ed18a3b6c2f1992" host="localhost" Sep 12 23:49:40.720747 containerd[1532]: 2025-09-12 23:49:40.674 [INFO][4974] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 12 23:49:40.720747 containerd[1532]: 2025-09-12 23:49:40.678 [INFO][4974] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.136/26] IPv6=[] ContainerID="90787750c632ff5c48bbc384ea2ce9cad53be76803458fe60ed18a3b6c2f1992" HandleID="k8s-pod-network.90787750c632ff5c48bbc384ea2ce9cad53be76803458fe60ed18a3b6c2f1992" Workload="localhost-k8s-coredns--7c65d6cfc9--67mtl-eth0" Sep 12 23:49:40.722026 containerd[1532]: 2025-09-12 23:49:40.683 [INFO][4959] cni-plugin/k8s.go 418: Populated endpoint ContainerID="90787750c632ff5c48bbc384ea2ce9cad53be76803458fe60ed18a3b6c2f1992" Namespace="kube-system" Pod="coredns-7c65d6cfc9-67mtl" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--67mtl-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--7c65d6cfc9--67mtl-eth0", GenerateName:"coredns-7c65d6cfc9-", Namespace:"kube-system", SelfLink:"", UID:"1e734e6c-e281-45ad-9b05-72f8ed11eb0c", ResourceVersion:"794", Generation:0, CreationTimestamp:time.Date(2025, time.September, 12, 23, 49, 2, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7c65d6cfc9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), 
Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"coredns-7c65d6cfc9-67mtl", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"califf0e501a006", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 12 23:49:40.722026 containerd[1532]: 2025-09-12 23:49:40.683 [INFO][4959] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.136/32] ContainerID="90787750c632ff5c48bbc384ea2ce9cad53be76803458fe60ed18a3b6c2f1992" Namespace="kube-system" Pod="coredns-7c65d6cfc9-67mtl" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--67mtl-eth0" Sep 12 23:49:40.722026 containerd[1532]: 2025-09-12 23:49:40.684 [INFO][4959] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to califf0e501a006 ContainerID="90787750c632ff5c48bbc384ea2ce9cad53be76803458fe60ed18a3b6c2f1992" Namespace="kube-system" Pod="coredns-7c65d6cfc9-67mtl" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--67mtl-eth0" Sep 12 23:49:40.722026 containerd[1532]: 2025-09-12 23:49:40.691 [INFO][4959] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="90787750c632ff5c48bbc384ea2ce9cad53be76803458fe60ed18a3b6c2f1992" Namespace="kube-system" Pod="coredns-7c65d6cfc9-67mtl" 
WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--67mtl-eth0"
Sep 12 23:49:40.722026 containerd[1532]: 2025-09-12 23:49:40.692 [INFO][4959] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="90787750c632ff5c48bbc384ea2ce9cad53be76803458fe60ed18a3b6c2f1992" Namespace="kube-system" Pod="coredns-7c65d6cfc9-67mtl" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--67mtl-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--7c65d6cfc9--67mtl-eth0", GenerateName:"coredns-7c65d6cfc9-", Namespace:"kube-system", SelfLink:"", UID:"1e734e6c-e281-45ad-9b05-72f8ed11eb0c", ResourceVersion:"794", Generation:0, CreationTimestamp:time.Date(2025, time.September, 12, 23, 49, 2, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7c65d6cfc9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"90787750c632ff5c48bbc384ea2ce9cad53be76803458fe60ed18a3b6c2f1992", Pod:"coredns-7c65d6cfc9-67mtl", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"califf0e501a006", MAC:"ee:eb:e6:2a:e5:ac", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}}
Sep 12 23:49:40.722026 containerd[1532]: 2025-09-12 23:49:40.712 [INFO][4959] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="90787750c632ff5c48bbc384ea2ce9cad53be76803458fe60ed18a3b6c2f1992" Namespace="kube-system" Pod="coredns-7c65d6cfc9-67mtl" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--67mtl-eth0"
Sep 12 23:49:40.726988 systemd[1]: Started cri-containerd-2a86aa3de6ba528f10579227fa5241b86596017862b8948fedbe7d3429296528.scope - libcontainer container 2a86aa3de6ba528f10579227fa5241b86596017862b8948fedbe7d3429296528.
Sep 12 23:49:40.759503 containerd[1532]: time="2025-09-12T23:49:40.759350870Z" level=info msg="connecting to shim 90787750c632ff5c48bbc384ea2ce9cad53be76803458fe60ed18a3b6c2f1992" address="unix:///run/containerd/s/11a55eb13b2e851ab1ce866947d74f752981a2f2e301a9fb022b83e33798c95b" namespace=k8s.io protocol=ttrpc version=3
Sep 12 23:49:40.788958 containerd[1532]: time="2025-09-12T23:49:40.788907847Z" level=info msg="StartContainer for \"2a86aa3de6ba528f10579227fa5241b86596017862b8948fedbe7d3429296528\" returns successfully"
Sep 12 23:49:40.804400 systemd[1]: Started cri-containerd-90787750c632ff5c48bbc384ea2ce9cad53be76803458fe60ed18a3b6c2f1992.scope - libcontainer container 90787750c632ff5c48bbc384ea2ce9cad53be76803458fe60ed18a3b6c2f1992.
Sep 12 23:49:40.823041 systemd-resolved[1348]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address
Sep 12 23:49:40.848154 containerd[1532]: time="2025-09-12T23:49:40.848106242Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-67mtl,Uid:1e734e6c-e281-45ad-9b05-72f8ed11eb0c,Namespace:kube-system,Attempt:0,} returns sandbox id \"90787750c632ff5c48bbc384ea2ce9cad53be76803458fe60ed18a3b6c2f1992\""
Sep 12 23:49:40.852553 containerd[1532]: time="2025-09-12T23:49:40.852404640Z" level=info msg="CreateContainer within sandbox \"90787750c632ff5c48bbc384ea2ce9cad53be76803458fe60ed18a3b6c2f1992\" for container &ContainerMetadata{Name:coredns,Attempt:0,}"
Sep 12 23:49:40.884997 containerd[1532]: time="2025-09-12T23:49:40.884347260Z" level=info msg="Container 2e3f78ea556433a93bb51c8b63ed51b150277fe692758e817968e6058fc0b288: CDI devices from CRI Config.CDIDevices: []"
Sep 12 23:49:40.888145 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount301253161.mount: Deactivated successfully.
Sep 12 23:49:40.894667 containerd[1532]: time="2025-09-12T23:49:40.894627166Z" level=info msg="CreateContainer within sandbox \"90787750c632ff5c48bbc384ea2ce9cad53be76803458fe60ed18a3b6c2f1992\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"2e3f78ea556433a93bb51c8b63ed51b150277fe692758e817968e6058fc0b288\""
Sep 12 23:49:40.895433 containerd[1532]: time="2025-09-12T23:49:40.895405620Z" level=info msg="StartContainer for \"2e3f78ea556433a93bb51c8b63ed51b150277fe692758e817968e6058fc0b288\""
Sep 12 23:49:40.896298 containerd[1532]: time="2025-09-12T23:49:40.896270836Z" level=info msg="connecting to shim 2e3f78ea556433a93bb51c8b63ed51b150277fe692758e817968e6058fc0b288" address="unix:///run/containerd/s/11a55eb13b2e851ab1ce866947d74f752981a2f2e301a9fb022b83e33798c95b" protocol=ttrpc version=3
Sep 12 23:49:40.921340 systemd[1]: Started cri-containerd-2e3f78ea556433a93bb51c8b63ed51b150277fe692758e817968e6058fc0b288.scope - libcontainer container 2e3f78ea556433a93bb51c8b63ed51b150277fe692758e817968e6058fc0b288.
Sep 12 23:49:40.932655 containerd[1532]: time="2025-09-12T23:49:40.932591336Z" level=info msg="TaskExit event in podsandbox handler container_id:\"785d9453e2b32516d5fcd5ce96c92ba5d6c1f4b54dba57e75b5c07d4ee863431\" id:\"31ea9f018894ebd567aa27885db8170948c2ab119129608721a4be1de50bfa40\" pid:5070 exit_status:1 exited_at:{seconds:1757720980 nanos:919560619}"
Sep 12 23:49:40.957773 containerd[1532]: time="2025-09-12T23:49:40.957716472Z" level=info msg="StartContainer for \"2e3f78ea556433a93bb51c8b63ed51b150277fe692758e817968e6058fc0b288\" returns successfully"
Sep 12 23:49:41.769313 containerd[1532]: time="2025-09-12T23:49:41.769258917Z" level=info msg="TaskExit event in podsandbox handler container_id:\"785d9453e2b32516d5fcd5ce96c92ba5d6c1f4b54dba57e75b5c07d4ee863431\" id:\"42c561d610e3bb8cb4b8ddbb2605b2b0dd38c8b7382141221f31fea4de66c8ed\" pid:5147 exit_status:1 exited_at:{seconds:1757720981 nanos:767497486}"
Sep 12 23:49:41.815990 kubelet[2637]: I0912 23:49:41.809014 2637 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Sep 12 23:49:41.844326 kubelet[2637]: I0912 23:49:41.844217 2637 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7c65d6cfc9-67mtl" podStartSLOduration=39.844197285 podStartE2EDuration="39.844197285s" podCreationTimestamp="2025-09-12 23:49:02 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-12 23:49:41.704088001 +0000 UTC m=+45.355619288" watchObservedRunningTime="2025-09-12 23:49:41.844197285 +0000 UTC m=+45.495728572"
Sep 12 23:49:42.084090 systemd[1]: Started sshd@8-10.0.0.100:22-10.0.0.1:48228.service - OpenSSH per-connection server daemon (10.0.0.1:48228).
Sep 12 23:49:42.097289 systemd-networkd[1431]: califf0e501a006: Gained IPv6LL
Sep 12 23:49:42.177631 sshd[5171]: Accepted publickey for core from 10.0.0.1 port 48228 ssh2: RSA SHA256:U495jLcrOdK3hoPgih3/zUS8L+hgQo+VhebSoZqpcKw
Sep 12 23:49:42.182078 sshd-session[5171]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 12 23:49:42.189762 systemd-logind[1503]: New session 9 of user core.
Sep 12 23:49:42.197401 systemd[1]: Started session-9.scope - Session 9 of User core.
Sep 12 23:49:42.457319 sshd[5173]: Connection closed by 10.0.0.1 port 48228
Sep 12 23:49:42.456678 sshd-session[5171]: pam_unix(sshd:session): session closed for user core
Sep 12 23:49:42.461343 systemd-logind[1503]: Session 9 logged out. Waiting for processes to exit.
Sep 12 23:49:42.461615 systemd[1]: sshd@8-10.0.0.100:22-10.0.0.1:48228.service: Deactivated successfully.
Sep 12 23:49:42.465787 systemd[1]: session-9.scope: Deactivated successfully.
Sep 12 23:49:42.469417 systemd-logind[1503]: Removed session 9.
Sep 12 23:49:42.608105 kubelet[2637]: I0912 23:49:42.608063 2637 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Sep 12 23:49:42.730925 containerd[1532]: time="2025-09-12T23:49:42.730385542Z" level=info msg="TaskExit event in podsandbox handler container_id:\"6fea98c82994e2b87908732c5e3efd6922aa7af9078155df606c6947d9bb9e8b\" id:\"73c450e6e27ab089b2703a019608e3018bac81f3bf9878e7d88791b3113fb082\" pid:5199 exited_at:{seconds:1757720982 nanos:729945295}"
Sep 12 23:49:42.827115 containerd[1532]: time="2025-09-12T23:49:42.827060497Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 12 23:49:42.828377 containerd[1532]: time="2025-09-12T23:49:42.827965673Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.3: active requests=0, bytes read=48134957"
Sep 12 23:49:42.830373 containerd[1532]: time="2025-09-12T23:49:42.830330154Z" level=info msg="ImageCreate event name:\"sha256:34117caf92350e1565610f2254377d7455b11e36666b5ce11b4a13670720432a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 12 23:49:42.833348 containerd[1532]: time="2025-09-12T23:49:42.833304286Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers@sha256:27c4187717f08f0a5727019d8beb7597665eb47e69eaa1d7d091a7e28913e577\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 12 23:49:42.833924 containerd[1532]: time="2025-09-12T23:49:42.833892656Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.3\" with image id \"sha256:34117caf92350e1565610f2254377d7455b11e36666b5ce11b4a13670720432a\", repo tag \"ghcr.io/flatcar/calico/kube-controllers:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/kube-controllers@sha256:27c4187717f08f0a5727019d8beb7597665eb47e69eaa1d7d091a7e28913e577\", size \"49504166\" in 2.213914116s"
Sep 12 23:49:42.834015 containerd[1532]: time="2025-09-12T23:49:42.833925976Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.3\" returns image reference \"sha256:34117caf92350e1565610f2254377d7455b11e36666b5ce11b4a13670720432a\""
Sep 12 23:49:42.835334 containerd[1532]: time="2025-09-12T23:49:42.835296280Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.3\""
Sep 12 23:49:42.843754 containerd[1532]: time="2025-09-12T23:49:42.843701746Z" level=info msg="CreateContainer within sandbox \"b46b4d5852ff70526f8f5482986c3378d433dfefe8de5967e2cdb3f3607b65d1\" for container &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,}"
Sep 12 23:49:42.856212 containerd[1532]: time="2025-09-12T23:49:42.856157562Z" level=info msg="Container 2732f04e091234a99bf43d9433ca04d5cb1bb3278774c571d6127f714744eff2: CDI devices from CRI Config.CDIDevices: []"
Sep 12 23:49:42.865832 containerd[1532]: time="2025-09-12T23:49:42.865607605Z" level=info msg="TaskExit event in podsandbox handler container_id:\"6fea98c82994e2b87908732c5e3efd6922aa7af9078155df606c6947d9bb9e8b\" id:\"d9950fb220bc8602ca02a662857d3648fcd2908ee46853da44f1bdb41aa82814\" pid:5227 exited_at:{seconds:1757720982 nanos:864526467}"
Sep 12 23:49:42.867005 containerd[1532]: time="2025-09-12T23:49:42.866953109Z" level=info msg="CreateContainer within sandbox \"b46b4d5852ff70526f8f5482986c3378d433dfefe8de5967e2cdb3f3607b65d1\" for &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,} returns container id \"2732f04e091234a99bf43d9433ca04d5cb1bb3278774c571d6127f714744eff2\""
Sep 12 23:49:42.868180 containerd[1532]: time="2025-09-12T23:49:42.868016607Z" level=info msg="StartContainer for \"2732f04e091234a99bf43d9433ca04d5cb1bb3278774c571d6127f714744eff2\""
Sep 12 23:49:42.869480 containerd[1532]: time="2025-09-12T23:49:42.869301949Z" level=info msg="connecting to shim 2732f04e091234a99bf43d9433ca04d5cb1bb3278774c571d6127f714744eff2" address="unix:///run/containerd/s/e9ab9701fcf3827254110b9cb6c1af0d4ef6a1e459190a0b5e8486e356c827bb" protocol=ttrpc version=3
Sep 12 23:49:42.902449 systemd[1]: Started cri-containerd-2732f04e091234a99bf43d9433ca04d5cb1bb3278774c571d6127f714744eff2.scope - libcontainer container 2732f04e091234a99bf43d9433ca04d5cb1bb3278774c571d6127f714744eff2.
Sep 12 23:49:42.946487 containerd[1532]: time="2025-09-12T23:49:42.946449486Z" level=info msg="StartContainer for \"2732f04e091234a99bf43d9433ca04d5cb1bb3278774c571d6127f714744eff2\" returns successfully"
Sep 12 23:49:43.709006 kubelet[2637]: I0912 23:49:43.708867 2637 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-kube-controllers-b974db775-2dqk7" podStartSLOduration=20.925598045 podStartE2EDuration="26.708850029s" podCreationTimestamp="2025-09-12 23:49:17 +0000 UTC" firstStartedPulling="2025-09-12 23:49:37.051846013 +0000 UTC m=+40.703377300" lastFinishedPulling="2025-09-12 23:49:42.835098037 +0000 UTC m=+46.486629284" observedRunningTime="2025-09-12 23:49:43.708376261 +0000 UTC m=+47.359907548" watchObservedRunningTime="2025-09-12 23:49:43.708850029 +0000 UTC m=+47.360381276"
Sep 12 23:49:44.542312 containerd[1532]: time="2025-09-12T23:49:44.541978361Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 12 23:49:44.545812 containerd[1532]: time="2025-09-12T23:49:44.545760543Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.3: active requests=0, bytes read=13761208"
Sep 12 23:49:44.549701 containerd[1532]: time="2025-09-12T23:49:44.549635568Z" level=info msg="ImageCreate event name:\"sha256:a319b5bdc1001e98875b68e2943279adb74bcb19d09f1db857bc27959a078a65\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 12 23:49:44.553062 containerd[1532]: time="2025-09-12T23:49:44.552991023Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar@sha256:731ab232ca708102ab332340b1274d5cd656aa896ecc5368ee95850b811df86f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 12 23:49:44.553714 containerd[1532]: time="2025-09-12T23:49:44.553682515Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.3\" with image id \"sha256:a319b5bdc1001e98875b68e2943279adb74bcb19d09f1db857bc27959a078a65\", repo tag \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/node-driver-registrar@sha256:731ab232ca708102ab332340b1274d5cd656aa896ecc5368ee95850b811df86f\", size \"15130401\" in 1.718351714s"
Sep 12 23:49:44.553829 containerd[1532]: time="2025-09-12T23:49:44.553812677Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.3\" returns image reference \"sha256:a319b5bdc1001e98875b68e2943279adb74bcb19d09f1db857bc27959a078a65\""
Sep 12 23:49:44.559261 containerd[1532]: time="2025-09-12T23:49:44.559206846Z" level=info msg="CreateContainer within sandbox \"cc40b451f367150b136d047265b4f3afe4033d59da07061f4478b60d24535f1d\" for container &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,}"
Sep 12 23:49:44.571794 containerd[1532]: time="2025-09-12T23:49:44.571736854Z" level=info msg="Container dfe0ffec12222379ddc3b2de3dd51e0d696f1de27a773e0cd5783c4bab942024: CDI devices from CRI Config.CDIDevices: []"
Sep 12 23:49:44.593414 containerd[1532]: time="2025-09-12T23:49:44.593281172Z" level=info msg="CreateContainer within sandbox \"cc40b451f367150b136d047265b4f3afe4033d59da07061f4478b60d24535f1d\" for &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,} returns container id \"dfe0ffec12222379ddc3b2de3dd51e0d696f1de27a773e0cd5783c4bab942024\""
Sep 12 23:49:44.594358 containerd[1532]: time="2025-09-12T23:49:44.594221428Z" level=info msg="StartContainer for \"dfe0ffec12222379ddc3b2de3dd51e0d696f1de27a773e0cd5783c4bab942024\""
Sep 12 23:49:44.597256 containerd[1532]: time="2025-09-12T23:49:44.597210437Z" level=info msg="connecting to shim dfe0ffec12222379ddc3b2de3dd51e0d696f1de27a773e0cd5783c4bab942024" address="unix:///run/containerd/s/5de46142ce9036f86a0b85357fd5c6e41ff9391464baa096205cb37f8b2ce780" protocol=ttrpc version=3
Sep 12 23:49:44.634366 systemd[1]: Started cri-containerd-dfe0ffec12222379ddc3b2de3dd51e0d696f1de27a773e0cd5783c4bab942024.scope - libcontainer container dfe0ffec12222379ddc3b2de3dd51e0d696f1de27a773e0cd5783c4bab942024.
Sep 12 23:49:44.685982 containerd[1532]: time="2025-09-12T23:49:44.685940230Z" level=info msg="StartContainer for \"dfe0ffec12222379ddc3b2de3dd51e0d696f1de27a773e0cd5783c4bab942024\" returns successfully"
Sep 12 23:49:44.759762 containerd[1532]: time="2025-09-12T23:49:44.759715934Z" level=info msg="TaskExit event in podsandbox handler container_id:\"2732f04e091234a99bf43d9433ca04d5cb1bb3278774c571d6127f714744eff2\" id:\"da7dca47c75653d9ee9d47104d60f6dbcbfe7f3f898fa0ccb42c1b61d2bb5396\" pid:5334 exited_at:{seconds:1757720984 nanos:759444210}"
Sep 12 23:49:44.777260 kubelet[2637]: I0912 23:49:44.776686 2637 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/csi-node-driver-rrls7" podStartSLOduration=21.230409125 podStartE2EDuration="28.776637615s" podCreationTimestamp="2025-09-12 23:49:16 +0000 UTC" firstStartedPulling="2025-09-12 23:49:37.008768007 +0000 UTC m=+40.660299294" lastFinishedPulling="2025-09-12 23:49:44.554996497 +0000 UTC m=+48.206527784" observedRunningTime="2025-09-12 23:49:44.717925801 +0000 UTC m=+48.369457088" watchObservedRunningTime="2025-09-12 23:49:44.776637615 +0000 UTC m=+48.428168902"
Sep 12 23:49:45.574090 kubelet[2637]: I0912 23:49:45.574031 2637 csi_plugin.go:100] kubernetes.io/csi: Trying to validate a new CSI Driver with name: csi.tigera.io endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock versions: 1.0.0
Sep 12 23:49:45.577709 kubelet[2637]: I0912 23:49:45.577680 2637 csi_plugin.go:113] kubernetes.io/csi: Register new plugin with name: csi.tigera.io at endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock
Sep 12 23:49:47.473053 systemd[1]: Started sshd@9-10.0.0.100:22-10.0.0.1:48230.service - OpenSSH per-connection server daemon (10.0.0.1:48230).
Sep 12 23:49:47.527035 sshd[5356]: Accepted publickey for core from 10.0.0.1 port 48230 ssh2: RSA SHA256:U495jLcrOdK3hoPgih3/zUS8L+hgQo+VhebSoZqpcKw
Sep 12 23:49:47.528931 sshd-session[5356]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 12 23:49:47.536113 systemd-logind[1503]: New session 10 of user core.
Sep 12 23:49:47.546419 systemd[1]: Started session-10.scope - Session 10 of User core.
Sep 12 23:49:47.715931 sshd[5358]: Connection closed by 10.0.0.1 port 48230
Sep 12 23:49:47.716406 sshd-session[5356]: pam_unix(sshd:session): session closed for user core
Sep 12 23:49:47.732946 systemd[1]: sshd@9-10.0.0.100:22-10.0.0.1:48230.service: Deactivated successfully.
Sep 12 23:49:47.736194 systemd[1]: session-10.scope: Deactivated successfully.
Sep 12 23:49:47.737773 systemd-logind[1503]: Session 10 logged out. Waiting for processes to exit.
Sep 12 23:49:47.741656 systemd[1]: Started sshd@10-10.0.0.100:22-10.0.0.1:48232.service - OpenSSH per-connection server daemon (10.0.0.1:48232).
Sep 12 23:49:47.742312 systemd-logind[1503]: Removed session 10.
Sep 12 23:49:47.800930 sshd[5373]: Accepted publickey for core from 10.0.0.1 port 48232 ssh2: RSA SHA256:U495jLcrOdK3hoPgih3/zUS8L+hgQo+VhebSoZqpcKw
Sep 12 23:49:47.803122 sshd-session[5373]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 12 23:49:47.808563 systemd-logind[1503]: New session 11 of user core.
Sep 12 23:49:47.826554 systemd[1]: Started session-11.scope - Session 11 of User core.
Sep 12 23:49:48.072292 sshd[5375]: Connection closed by 10.0.0.1 port 48232
Sep 12 23:49:48.071743 sshd-session[5373]: pam_unix(sshd:session): session closed for user core
Sep 12 23:49:48.083710 systemd[1]: sshd@10-10.0.0.100:22-10.0.0.1:48232.service: Deactivated successfully.
Sep 12 23:49:48.088066 systemd[1]: session-11.scope: Deactivated successfully.
Sep 12 23:49:48.091193 systemd-logind[1503]: Session 11 logged out. Waiting for processes to exit.
Sep 12 23:49:48.101446 systemd[1]: Started sshd@11-10.0.0.100:22-10.0.0.1:48240.service - OpenSSH per-connection server daemon (10.0.0.1:48240).
Sep 12 23:49:48.104464 systemd-logind[1503]: Removed session 11.
Sep 12 23:49:48.152680 sshd[5387]: Accepted publickey for core from 10.0.0.1 port 48240 ssh2: RSA SHA256:U495jLcrOdK3hoPgih3/zUS8L+hgQo+VhebSoZqpcKw
Sep 12 23:49:48.153915 sshd-session[5387]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 12 23:49:48.158634 systemd-logind[1503]: New session 12 of user core.
Sep 12 23:49:48.172399 systemd[1]: Started session-12.scope - Session 12 of User core.
Sep 12 23:49:48.245891 kubelet[2637]: I0912 23:49:48.245305 2637 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Sep 12 23:49:48.365149 sshd[5389]: Connection closed by 10.0.0.1 port 48240
Sep 12 23:49:48.365463 sshd-session[5387]: pam_unix(sshd:session): session closed for user core
Sep 12 23:49:48.369570 systemd-logind[1503]: Session 12 logged out. Waiting for processes to exit.
Sep 12 23:49:48.369721 systemd[1]: sshd@11-10.0.0.100:22-10.0.0.1:48240.service: Deactivated successfully.
Sep 12 23:49:48.372705 systemd[1]: session-12.scope: Deactivated successfully.
Sep 12 23:49:48.374060 systemd-logind[1503]: Removed session 12.
Sep 12 23:49:53.387649 systemd[1]: Started sshd@12-10.0.0.100:22-10.0.0.1:44254.service - OpenSSH per-connection server daemon (10.0.0.1:44254).
Sep 12 23:49:53.457576 sshd[5411]: Accepted publickey for core from 10.0.0.1 port 44254 ssh2: RSA SHA256:U495jLcrOdK3hoPgih3/zUS8L+hgQo+VhebSoZqpcKw
Sep 12 23:49:53.459379 sshd-session[5411]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 12 23:49:53.470324 systemd-logind[1503]: New session 13 of user core.
Sep 12 23:49:53.484497 systemd[1]: Started session-13.scope - Session 13 of User core.
Sep 12 23:49:53.668193 sshd[5413]: Connection closed by 10.0.0.1 port 44254
Sep 12 23:49:53.670088 sshd-session[5411]: pam_unix(sshd:session): session closed for user core
Sep 12 23:49:53.674400 systemd-logind[1503]: Session 13 logged out. Waiting for processes to exit.
Sep 12 23:49:53.674696 systemd[1]: sshd@12-10.0.0.100:22-10.0.0.1:44254.service: Deactivated successfully.
Sep 12 23:49:53.676714 systemd[1]: session-13.scope: Deactivated successfully.
Sep 12 23:49:53.680405 systemd-logind[1503]: Removed session 13.
Sep 12 23:49:58.683675 systemd[1]: Started sshd@13-10.0.0.100:22-10.0.0.1:44258.service - OpenSSH per-connection server daemon (10.0.0.1:44258).
Sep 12 23:49:58.758568 sshd[5434]: Accepted publickey for core from 10.0.0.1 port 44258 ssh2: RSA SHA256:U495jLcrOdK3hoPgih3/zUS8L+hgQo+VhebSoZqpcKw
Sep 12 23:49:58.761595 sshd-session[5434]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 12 23:49:58.771306 systemd-logind[1503]: New session 14 of user core.
Sep 12 23:49:58.783457 systemd[1]: Started session-14.scope - Session 14 of User core.
Sep 12 23:49:58.974678 sshd[5436]: Connection closed by 10.0.0.1 port 44258
Sep 12 23:49:58.974958 sshd-session[5434]: pam_unix(sshd:session): session closed for user core
Sep 12 23:49:58.978739 systemd-logind[1503]: Session 14 logged out. Waiting for processes to exit.
Sep 12 23:49:58.978824 systemd[1]: sshd@13-10.0.0.100:22-10.0.0.1:44258.service: Deactivated successfully.
Sep 12 23:49:58.981676 systemd[1]: session-14.scope: Deactivated successfully.
Sep 12 23:49:58.984621 systemd-logind[1503]: Removed session 14.
Sep 12 23:50:04.000272 systemd[1]: Started sshd@14-10.0.0.100:22-10.0.0.1:57172.service - OpenSSH per-connection server daemon (10.0.0.1:57172).
Sep 12 23:50:04.061802 sshd[5456]: Accepted publickey for core from 10.0.0.1 port 57172 ssh2: RSA SHA256:U495jLcrOdK3hoPgih3/zUS8L+hgQo+VhebSoZqpcKw
Sep 12 23:50:04.063906 sshd-session[5456]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 12 23:50:04.069043 systemd-logind[1503]: New session 15 of user core.
Sep 12 23:50:04.074368 systemd[1]: Started session-15.scope - Session 15 of User core.
Sep 12 23:50:04.247338 sshd[5458]: Connection closed by 10.0.0.1 port 57172
Sep 12 23:50:04.246838 sshd-session[5456]: pam_unix(sshd:session): session closed for user core
Sep 12 23:50:04.250934 systemd[1]: sshd@14-10.0.0.100:22-10.0.0.1:57172.service: Deactivated successfully.
Sep 12 23:50:04.255217 systemd[1]: session-15.scope: Deactivated successfully.
Sep 12 23:50:04.257401 systemd-logind[1503]: Session 15 logged out. Waiting for processes to exit.
Sep 12 23:50:04.258774 systemd-logind[1503]: Removed session 15.
Sep 12 23:50:07.795457 containerd[1532]: time="2025-09-12T23:50:07.795354386Z" level=info msg="TaskExit event in podsandbox handler container_id:\"785d9453e2b32516d5fcd5ce96c92ba5d6c1f4b54dba57e75b5c07d4ee863431\" id:\"e2616e6fe24e241cd346273a1995d70c0d87280964c6bbb23e59878edf1a7c17\" pid:5484 exited_at:{seconds:1757721007 nanos:794790951}"
Sep 12 23:50:09.018642 containerd[1532]: time="2025-09-12T23:50:09.018057903Z" level=info msg="TaskExit event in podsandbox handler container_id:\"785d9453e2b32516d5fcd5ce96c92ba5d6c1f4b54dba57e75b5c07d4ee863431\" id:\"65ae3756e2cf5e4fa4a6ae269c85c970b9da6bdde237f0137131e11845606e87\" pid:5508 exited_at:{seconds:1757721009 nanos:17785185}"
Sep 12 23:50:09.067890 containerd[1532]: time="2025-09-12T23:50:09.067834432Z" level=info msg="TaskExit event in podsandbox handler container_id:\"2732f04e091234a99bf43d9433ca04d5cb1bb3278774c571d6127f714744eff2\" id:\"ca52ef1d427a4e9ce79f8d6edac2216edd344af5cf4c6d845d9678534b4353cd\" pid:5533 exited_at:{seconds:1757721009 nanos:67489715}"
Sep 12 23:50:09.266489 systemd[1]: Started sshd@15-10.0.0.100:22-10.0.0.1:57186.service - OpenSSH per-connection server daemon (10.0.0.1:57186).
Sep 12 23:50:09.332981 sshd[5546]: Accepted publickey for core from 10.0.0.1 port 57186 ssh2: RSA SHA256:U495jLcrOdK3hoPgih3/zUS8L+hgQo+VhebSoZqpcKw
Sep 12 23:50:09.334952 sshd-session[5546]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 12 23:50:09.343686 systemd-logind[1503]: New session 16 of user core.
Sep 12 23:50:09.357430 systemd[1]: Started session-16.scope - Session 16 of User core.
Sep 12 23:50:09.549384 sshd[5548]: Connection closed by 10.0.0.1 port 57186
Sep 12 23:50:09.549956 sshd-session[5546]: pam_unix(sshd:session): session closed for user core
Sep 12 23:50:09.562534 systemd[1]: sshd@15-10.0.0.100:22-10.0.0.1:57186.service: Deactivated successfully.
Sep 12 23:50:09.564742 systemd[1]: session-16.scope: Deactivated successfully.
Sep 12 23:50:09.565679 systemd-logind[1503]: Session 16 logged out. Waiting for processes to exit.
Sep 12 23:50:09.568108 systemd[1]: Started sshd@16-10.0.0.100:22-10.0.0.1:57192.service - OpenSSH per-connection server daemon (10.0.0.1:57192).
Sep 12 23:50:09.571129 systemd-logind[1503]: Removed session 16.
Sep 12 23:50:09.638291 sshd[5561]: Accepted publickey for core from 10.0.0.1 port 57192 ssh2: RSA SHA256:U495jLcrOdK3hoPgih3/zUS8L+hgQo+VhebSoZqpcKw
Sep 12 23:50:09.639976 sshd-session[5561]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 12 23:50:09.645619 systemd-logind[1503]: New session 17 of user core.
Sep 12 23:50:09.654380 systemd[1]: Started session-17.scope - Session 17 of User core.
Sep 12 23:50:09.871501 sshd[5563]: Connection closed by 10.0.0.1 port 57192
Sep 12 23:50:09.871984 sshd-session[5561]: pam_unix(sshd:session): session closed for user core
Sep 12 23:50:09.882610 systemd[1]: sshd@16-10.0.0.100:22-10.0.0.1:57192.service: Deactivated successfully.
Sep 12 23:50:09.885210 systemd[1]: session-17.scope: Deactivated successfully.
Sep 12 23:50:09.886008 systemd-logind[1503]: Session 17 logged out. Waiting for processes to exit.
Sep 12 23:50:09.888901 systemd[1]: Started sshd@17-10.0.0.100:22-10.0.0.1:57196.service - OpenSSH per-connection server daemon (10.0.0.1:57196).
Sep 12 23:50:09.890110 systemd-logind[1503]: Removed session 17.
Sep 12 23:50:09.945126 sshd[5574]: Accepted publickey for core from 10.0.0.1 port 57196 ssh2: RSA SHA256:U495jLcrOdK3hoPgih3/zUS8L+hgQo+VhebSoZqpcKw
Sep 12 23:50:09.946299 sshd-session[5574]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 12 23:50:09.950822 systemd-logind[1503]: New session 18 of user core.
Sep 12 23:50:09.960347 systemd[1]: Started session-18.scope - Session 18 of User core.
Sep 12 23:50:11.739658 sshd[5576]: Connection closed by 10.0.0.1 port 57196
Sep 12 23:50:11.739953 sshd-session[5574]: pam_unix(sshd:session): session closed for user core
Sep 12 23:50:11.753005 systemd[1]: sshd@17-10.0.0.100:22-10.0.0.1:57196.service: Deactivated successfully.
Sep 12 23:50:11.757173 systemd[1]: session-18.scope: Deactivated successfully.
Sep 12 23:50:11.757654 systemd[1]: session-18.scope: Consumed 576ms CPU time, 74.3M memory peak.
Sep 12 23:50:11.759939 systemd-logind[1503]: Session 18 logged out. Waiting for processes to exit.
Sep 12 23:50:11.763825 systemd[1]: Started sshd@18-10.0.0.100:22-10.0.0.1:42366.service - OpenSSH per-connection server daemon (10.0.0.1:42366).
Sep 12 23:50:11.767442 systemd-logind[1503]: Removed session 18.
Sep 12 23:50:11.824009 sshd[5601]: Accepted publickey for core from 10.0.0.1 port 42366 ssh2: RSA SHA256:U495jLcrOdK3hoPgih3/zUS8L+hgQo+VhebSoZqpcKw
Sep 12 23:50:11.825748 sshd-session[5601]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 12 23:50:11.830995 systemd-logind[1503]: New session 19 of user core.
Sep 12 23:50:11.843386 systemd[1]: Started session-19.scope - Session 19 of User core.
Sep 12 23:50:12.180996 sshd[5603]: Connection closed by 10.0.0.1 port 42366
Sep 12 23:50:12.180036 sshd-session[5601]: pam_unix(sshd:session): session closed for user core
Sep 12 23:50:12.191825 systemd[1]: sshd@18-10.0.0.100:22-10.0.0.1:42366.service: Deactivated successfully.
Sep 12 23:50:12.193529 systemd[1]: session-19.scope: Deactivated successfully.
Sep 12 23:50:12.194810 systemd-logind[1503]: Session 19 logged out. Waiting for processes to exit.
Sep 12 23:50:12.198057 systemd[1]: Started sshd@19-10.0.0.100:22-10.0.0.1:42368.service - OpenSSH per-connection server daemon (10.0.0.1:42368).
Sep 12 23:50:12.200483 systemd-logind[1503]: Removed session 19.
Sep 12 23:50:12.256724 sshd[5614]: Accepted publickey for core from 10.0.0.1 port 42368 ssh2: RSA SHA256:U495jLcrOdK3hoPgih3/zUS8L+hgQo+VhebSoZqpcKw
Sep 12 23:50:12.259325 sshd-session[5614]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 12 23:50:12.268625 systemd-logind[1503]: New session 20 of user core.
Sep 12 23:50:12.278396 systemd[1]: Started session-20.scope - Session 20 of User core.
Sep 12 23:50:12.435756 sshd[5616]: Connection closed by 10.0.0.1 port 42368
Sep 12 23:50:12.436021 sshd-session[5614]: pam_unix(sshd:session): session closed for user core
Sep 12 23:50:12.439638 systemd[1]: sshd@19-10.0.0.100:22-10.0.0.1:42368.service: Deactivated successfully.
Sep 12 23:50:12.441495 systemd[1]: session-20.scope: Deactivated successfully.
Sep 12 23:50:12.442358 systemd-logind[1503]: Session 20 logged out. Waiting for processes to exit.
Sep 12 23:50:12.443533 systemd-logind[1503]: Removed session 20.
Sep 12 23:50:12.702211 containerd[1532]: time="2025-09-12T23:50:12.701965477Z" level=info msg="TaskExit event in podsandbox handler container_id:\"6fea98c82994e2b87908732c5e3efd6922aa7af9078155df606c6947d9bb9e8b\" id:\"3c849712fda32ef9200129f6b711341565c6a478bc594a7f455247f87d202040\" pid:5641 exited_at:{seconds:1757721012 nanos:701539560}"
Sep 12 23:50:17.448083 systemd[1]: Started sshd@20-10.0.0.100:22-10.0.0.1:42376.service - OpenSSH per-connection server daemon (10.0.0.1:42376).
Sep 12 23:50:17.500287 sshd[5662]: Accepted publickey for core from 10.0.0.1 port 42376 ssh2: RSA SHA256:U495jLcrOdK3hoPgih3/zUS8L+hgQo+VhebSoZqpcKw
Sep 12 23:50:17.501020 sshd-session[5662]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 12 23:50:17.507396 systemd-logind[1503]: New session 21 of user core.
Sep 12 23:50:17.519407 systemd[1]: Started session-21.scope - Session 21 of User core.
Sep 12 23:50:17.752537 sshd[5664]: Connection closed by 10.0.0.1 port 42376
Sep 12 23:50:17.752890 sshd-session[5662]: pam_unix(sshd:session): session closed for user core
Sep 12 23:50:17.757299 systemd[1]: sshd@20-10.0.0.100:22-10.0.0.1:42376.service: Deactivated successfully.
Sep 12 23:50:17.759026 systemd[1]: session-21.scope: Deactivated successfully.
Sep 12 23:50:17.760589 systemd-logind[1503]: Session 21 logged out. Waiting for processes to exit.
Sep 12 23:50:17.761982 systemd-logind[1503]: Removed session 21.
Sep 12 23:50:22.775877 systemd[1]: Started sshd@21-10.0.0.100:22-10.0.0.1:48538.service - OpenSSH per-connection server daemon (10.0.0.1:48538).
Sep 12 23:50:22.862873 sshd[5682]: Accepted publickey for core from 10.0.0.1 port 48538 ssh2: RSA SHA256:U495jLcrOdK3hoPgih3/zUS8L+hgQo+VhebSoZqpcKw
Sep 12 23:50:22.864477 sshd-session[5682]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 12 23:50:22.870089 systemd-logind[1503]: New session 22 of user core.
Sep 12 23:50:22.884360 systemd[1]: Started session-22.scope - Session 22 of User core.
Sep 12 23:50:23.064635 sshd[5684]: Connection closed by 10.0.0.1 port 48538
Sep 12 23:50:23.064934 sshd-session[5682]: pam_unix(sshd:session): session closed for user core
Sep 12 23:50:23.070469 systemd[1]: sshd@21-10.0.0.100:22-10.0.0.1:48538.service: Deactivated successfully.
Sep 12 23:50:23.072700 systemd[1]: session-22.scope: Deactivated successfully.
Sep 12 23:50:23.073657 systemd-logind[1503]: Session 22 logged out. Waiting for processes to exit.
Sep 12 23:50:23.075660 systemd-logind[1503]: Removed session 22.
Sep 12 23:50:28.085422 systemd[1]: Started sshd@22-10.0.0.100:22-10.0.0.1:48552.service - OpenSSH per-connection server daemon (10.0.0.1:48552).
Sep 12 23:50:28.140757 sshd[5698]: Accepted publickey for core from 10.0.0.1 port 48552 ssh2: RSA SHA256:U495jLcrOdK3hoPgih3/zUS8L+hgQo+VhebSoZqpcKw
Sep 12 23:50:28.142256 sshd-session[5698]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 12 23:50:28.153435 systemd-logind[1503]: New session 23 of user core.
Sep 12 23:50:28.161555 systemd[1]: Started session-23.scope - Session 23 of User core.
Sep 12 23:50:28.337745 sshd[5700]: Connection closed by 10.0.0.1 port 48552
Sep 12 23:50:28.338117 sshd-session[5698]: pam_unix(sshd:session): session closed for user core
Sep 12 23:50:28.342106 systemd[1]: sshd@22-10.0.0.100:22-10.0.0.1:48552.service: Deactivated successfully.
Sep 12 23:50:28.344048 systemd[1]: session-23.scope: Deactivated successfully.
Sep 12 23:50:28.346266 systemd-logind[1503]: Session 23 logged out. Waiting for processes to exit.
Sep 12 23:50:28.349138 systemd-logind[1503]: Removed session 23.