Sep 11 00:00:50.768195 kernel: Booting Linux on physical CPU 0x0000000000 [0x413fd0c1]
Sep 11 00:00:50.768216 kernel: Linux version 6.12.46-flatcar (build@pony-truck.infra.kinvolk.io) (aarch64-cros-linux-gnu-gcc (Gentoo Hardened 14.2.1_p20241221 p7) 14.2.1 20241221, GNU ld (Gentoo 2.44 p1) 2.44.0) #1 SMP PREEMPT Wed Sep 10 22:24:03 -00 2025
Sep 11 00:00:50.768226 kernel: KASLR enabled
Sep 11 00:00:50.768232 kernel: efi: EFI v2.7 by EDK II
Sep 11 00:00:50.768237 kernel: efi: SMBIOS 3.0=0xdced0000 MEMATTR=0xdb832018 ACPI 2.0=0xdbfd0018 RNG=0xdbfd0a18 MEMRESERVE=0xdb838218
Sep 11 00:00:50.768243 kernel: random: crng init done
Sep 11 00:00:50.768250 kernel: secureboot: Secure boot disabled
Sep 11 00:00:50.768255 kernel: ACPI: Early table checksum verification disabled
Sep 11 00:00:50.768261 kernel: ACPI: RSDP 0x00000000DBFD0018 000024 (v02 BOCHS )
Sep 11 00:00:50.768269 kernel: ACPI: XSDT 0x00000000DBFD0F18 000064 (v01 BOCHS BXPC 00000001 01000013)
Sep 11 00:00:50.768275 kernel: ACPI: FACP 0x00000000DBFD0B18 000114 (v06 BOCHS BXPC 00000001 BXPC 00000001)
Sep 11 00:00:50.768280 kernel: ACPI: DSDT 0x00000000DBF0E018 0014A2 (v02 BOCHS BXPC 00000001 BXPC 00000001)
Sep 11 00:00:50.768286 kernel: ACPI: APIC 0x00000000DBFD0C98 0001A8 (v04 BOCHS BXPC 00000001 BXPC 00000001)
Sep 11 00:00:50.768292 kernel: ACPI: PPTT 0x00000000DBFD0098 00009C (v02 BOCHS BXPC 00000001 BXPC 00000001)
Sep 11 00:00:50.768299 kernel: ACPI: GTDT 0x00000000DBFD0818 000060 (v02 BOCHS BXPC 00000001 BXPC 00000001)
Sep 11 00:00:50.768307 kernel: ACPI: MCFG 0x00000000DBFD0A98 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001)
Sep 11 00:00:50.768313 kernel: ACPI: SPCR 0x00000000DBFD0918 000050 (v02 BOCHS BXPC 00000001 BXPC 00000001)
Sep 11 00:00:50.768319 kernel: ACPI: DBG2 0x00000000DBFD0998 000057 (v00 BOCHS BXPC 00000001 BXPC 00000001)
Sep 11 00:00:50.768325 kernel: ACPI: IORT 0x00000000DBFD0198 000080 (v03 BOCHS BXPC 00000001 BXPC 00000001)
Sep 11 00:00:50.768331 kernel: ACPI: SPCR: console: pl011,mmio,0x9000000,9600
Sep 11 00:00:50.768337 kernel: ACPI: Use ACPI SPCR as default console: No
Sep 11 00:00:50.768385 kernel: NUMA: Faking a node at [mem 0x0000000040000000-0x00000000dcffffff]
Sep 11 00:00:50.768393 kernel: NODE_DATA(0) allocated [mem 0xdc965a00-0xdc96cfff]
Sep 11 00:00:50.768399 kernel: Zone ranges:
Sep 11 00:00:50.768405 kernel: DMA [mem 0x0000000040000000-0x00000000dcffffff]
Sep 11 00:00:50.768413 kernel: DMA32 empty
Sep 11 00:00:50.768420 kernel: Normal empty
Sep 11 00:00:50.768425 kernel: Device empty
Sep 11 00:00:50.768431 kernel: Movable zone start for each node
Sep 11 00:00:50.768437 kernel: Early memory node ranges
Sep 11 00:00:50.768443 kernel: node 0: [mem 0x0000000040000000-0x00000000db81ffff]
Sep 11 00:00:50.768450 kernel: node 0: [mem 0x00000000db820000-0x00000000db82ffff]
Sep 11 00:00:50.768456 kernel: node 0: [mem 0x00000000db830000-0x00000000dc09ffff]
Sep 11 00:00:50.768462 kernel: node 0: [mem 0x00000000dc0a0000-0x00000000dc2dffff]
Sep 11 00:00:50.768468 kernel: node 0: [mem 0x00000000dc2e0000-0x00000000dc36ffff]
Sep 11 00:00:50.768474 kernel: node 0: [mem 0x00000000dc370000-0x00000000dc45ffff]
Sep 11 00:00:50.768480 kernel: node 0: [mem 0x00000000dc460000-0x00000000dc52ffff]
Sep 11 00:00:50.768488 kernel: node 0: [mem 0x00000000dc530000-0x00000000dc5cffff]
Sep 11 00:00:50.768494 kernel: node 0: [mem 0x00000000dc5d0000-0x00000000dce1ffff]
Sep 11 00:00:50.768500 kernel: node 0: [mem 0x00000000dce20000-0x00000000dceaffff]
Sep 11 00:00:50.768508 kernel: node 0: [mem 0x00000000dceb0000-0x00000000dcebffff]
Sep 11 00:00:50.768515 kernel: node 0: [mem 0x00000000dcec0000-0x00000000dcfdffff]
Sep 11 00:00:50.768521 kernel: node 0: [mem 0x00000000dcfe0000-0x00000000dcffffff]
Sep 11 00:00:50.768529 kernel: Initmem setup node 0 [mem 0x0000000040000000-0x00000000dcffffff]
Sep 11 00:00:50.768536 kernel: On node 0, zone DMA: 12288 pages in unavailable ranges
Sep 11 00:00:50.768542 kernel: cma: Reserved 16 MiB at 0x00000000d8000000 on node -1
Sep 11 00:00:50.768549 kernel: psci: probing for conduit method from ACPI.
Sep 11 00:00:50.768555 kernel: psci: PSCIv1.1 detected in firmware.
Sep 11 00:00:50.768562 kernel: psci: Using standard PSCI v0.2 function IDs
Sep 11 00:00:50.768568 kernel: psci: Trusted OS migration not required
Sep 11 00:00:50.768574 kernel: psci: SMC Calling Convention v1.1
Sep 11 00:00:50.768581 kernel: smccc: KVM: hypervisor services detected (0x00000000 0x00000000 0x00000000 0x00000003)
Sep 11 00:00:50.768587 kernel: percpu: Embedded 33 pages/cpu s98200 r8192 d28776 u135168
Sep 11 00:00:50.768595 kernel: pcpu-alloc: s98200 r8192 d28776 u135168 alloc=33*4096
Sep 11 00:00:50.768602 kernel: pcpu-alloc: [0] 0 [0] 1 [0] 2 [0] 3
Sep 11 00:00:50.768608 kernel: Detected PIPT I-cache on CPU0
Sep 11 00:00:50.768614 kernel: CPU features: detected: GIC system register CPU interface
Sep 11 00:00:50.768621 kernel: CPU features: detected: Spectre-v4
Sep 11 00:00:50.768627 kernel: CPU features: detected: Spectre-BHB
Sep 11 00:00:50.768634 kernel: CPU features: kernel page table isolation forced ON by KASLR
Sep 11 00:00:50.768640 kernel: CPU features: detected: Kernel page table isolation (KPTI)
Sep 11 00:00:50.768646 kernel: CPU features: detected: ARM erratum 1418040
Sep 11 00:00:50.768653 kernel: CPU features: detected: SSBS not fully self-synchronizing
Sep 11 00:00:50.768659 kernel: alternatives: applying boot alternatives
Sep 11 00:00:50.768674 kernel: Kernel command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected acpi=force verity.usrhash=dd9c14cce645c634e06a91b09405eea80057f02909b9267c482dc457df1cddec
Sep 11 00:00:50.768685 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Sep 11 00:00:50.768691 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Sep 11 00:00:50.768698 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Sep 11 00:00:50.768704 kernel: Fallback order for Node 0: 0
Sep 11 00:00:50.768711 kernel: Built 1 zonelists, mobility grouping on. Total pages: 643072
Sep 11 00:00:50.768717 kernel: Policy zone: DMA
Sep 11 00:00:50.768723 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Sep 11 00:00:50.768730 kernel: software IO TLB: SWIOTLB bounce buffer size adjusted to 2MB
Sep 11 00:00:50.768736 kernel: software IO TLB: area num 4.
Sep 11 00:00:50.768743 kernel: software IO TLB: SWIOTLB bounce buffer size roundup to 4MB
Sep 11 00:00:50.768750 kernel: software IO TLB: mapped [mem 0x00000000d7c00000-0x00000000d8000000] (4MB)
Sep 11 00:00:50.768757 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1
Sep 11 00:00:50.768764 kernel: rcu: Preemptible hierarchical RCU implementation.
Sep 11 00:00:50.768771 kernel: rcu: RCU event tracing is enabled.
Sep 11 00:00:50.768778 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=4.
Sep 11 00:00:50.768785 kernel: Trampoline variant of Tasks RCU enabled.
Sep 11 00:00:50.768791 kernel: Tracing variant of Tasks RCU enabled.
Sep 11 00:00:50.768798 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Sep 11 00:00:50.768804 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4
Sep 11 00:00:50.768819 kernel: RCU Tasks: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Sep 11 00:00:50.768831 kernel: RCU Tasks Trace: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Sep 11 00:00:50.768837 kernel: NR_IRQS: 64, nr_irqs: 64, preallocated irqs: 0
Sep 11 00:00:50.768845 kernel: GICv3: 256 SPIs implemented
Sep 11 00:00:50.768852 kernel: GICv3: 0 Extended SPIs implemented
Sep 11 00:00:50.768858 kernel: Root IRQ handler: gic_handle_irq
Sep 11 00:00:50.768865 kernel: GICv3: GICv3 features: 16 PPIs, DirectLPI
Sep 11 00:00:50.768871 kernel: GICv3: GICD_CTRL.DS=1, SCR_EL3.FIQ=0
Sep 11 00:00:50.768878 kernel: GICv3: CPU0: found redistributor 0 region 0:0x00000000080a0000
Sep 11 00:00:50.768884 kernel: ITS [mem 0x08080000-0x0809ffff]
Sep 11 00:00:50.768891 kernel: ITS@0x0000000008080000: allocated 8192 Devices @40110000 (indirect, esz 8, psz 64K, shr 1)
Sep 11 00:00:50.768898 kernel: ITS@0x0000000008080000: allocated 8192 Interrupt Collections @40120000 (flat, esz 8, psz 64K, shr 1)
Sep 11 00:00:50.768904 kernel: GICv3: using LPI property table @0x0000000040130000
Sep 11 00:00:50.768911 kernel: GICv3: CPU0: using allocated LPI pending table @0x0000000040140000
Sep 11 00:00:50.768917 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Sep 11 00:00:50.768925 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Sep 11 00:00:50.768931 kernel: arch_timer: cp15 timer(s) running at 25.00MHz (virt).
Sep 11 00:00:50.768938 kernel: clocksource: arch_sys_counter: mask: 0xffffffffffffff max_cycles: 0x5c40939b5, max_idle_ns: 440795202646 ns
Sep 11 00:00:50.768945 kernel: sched_clock: 56 bits at 25MHz, resolution 40ns, wraps every 4398046511100ns
Sep 11 00:00:50.768951 kernel: arm-pv: using stolen time PV
Sep 11 00:00:50.768958 kernel: Console: colour dummy device 80x25
Sep 11 00:00:50.768964 kernel: ACPI: Core revision 20240827
Sep 11 00:00:50.768971 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 50.00 BogoMIPS (lpj=25000)
Sep 11 00:00:50.768978 kernel: pid_max: default: 32768 minimum: 301
Sep 11 00:00:50.768984 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,ima
Sep 11 00:00:50.768992 kernel: landlock: Up and running.
Sep 11 00:00:50.768999 kernel: SELinux: Initializing.
Sep 11 00:00:50.769005 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Sep 11 00:00:50.769012 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Sep 11 00:00:50.769019 kernel: rcu: Hierarchical SRCU implementation.
Sep 11 00:00:50.769026 kernel: rcu: Max phase no-delay instances is 400.
Sep 11 00:00:50.769032 kernel: Timer migration: 1 hierarchy levels; 8 children per group; 1 crossnode level
Sep 11 00:00:50.769039 kernel: Remapping and enabling EFI services.
Sep 11 00:00:50.769046 kernel: smp: Bringing up secondary CPUs ...
Sep 11 00:00:50.769057 kernel: Detected PIPT I-cache on CPU1
Sep 11 00:00:50.769064 kernel: GICv3: CPU1: found redistributor 1 region 0:0x00000000080c0000
Sep 11 00:00:50.769071 kernel: GICv3: CPU1: using allocated LPI pending table @0x0000000040150000
Sep 11 00:00:50.769080 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Sep 11 00:00:50.769087 kernel: CPU1: Booted secondary processor 0x0000000001 [0x413fd0c1]
Sep 11 00:00:50.769093 kernel: Detected PIPT I-cache on CPU2
Sep 11 00:00:50.769100 kernel: GICv3: CPU2: found redistributor 2 region 0:0x00000000080e0000
Sep 11 00:00:50.769108 kernel: GICv3: CPU2: using allocated LPI pending table @0x0000000040160000
Sep 11 00:00:50.769116 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Sep 11 00:00:50.769123 kernel: CPU2: Booted secondary processor 0x0000000002 [0x413fd0c1]
Sep 11 00:00:50.769130 kernel: Detected PIPT I-cache on CPU3
Sep 11 00:00:50.769137 kernel: GICv3: CPU3: found redistributor 3 region 0:0x0000000008100000
Sep 11 00:00:50.769144 kernel: GICv3: CPU3: using allocated LPI pending table @0x0000000040170000
Sep 11 00:00:50.769151 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Sep 11 00:00:50.769157 kernel: CPU3: Booted secondary processor 0x0000000003 [0x413fd0c1]
Sep 11 00:00:50.769164 kernel: smp: Brought up 1 node, 4 CPUs
Sep 11 00:00:50.769171 kernel: SMP: Total of 4 processors activated.
Sep 11 00:00:50.769179 kernel: CPU: All CPU(s) started at EL1
Sep 11 00:00:50.769186 kernel: CPU features: detected: 32-bit EL0 Support
Sep 11 00:00:50.769193 kernel: CPU features: detected: Data cache clean to the PoU not required for I/D coherence
Sep 11 00:00:50.769200 kernel: CPU features: detected: Common not Private translations
Sep 11 00:00:50.769207 kernel: CPU features: detected: CRC32 instructions
Sep 11 00:00:50.769214 kernel: CPU features: detected: Enhanced Virtualization Traps
Sep 11 00:00:50.769221 kernel: CPU features: detected: RCpc load-acquire (LDAPR)
Sep 11 00:00:50.769228 kernel: CPU features: detected: LSE atomic instructions
Sep 11 00:00:50.769235 kernel: CPU features: detected: Privileged Access Never
Sep 11 00:00:50.769243 kernel: CPU features: detected: RAS Extension Support
Sep 11 00:00:50.769250 kernel: CPU features: detected: Speculative Store Bypassing Safe (SSBS)
Sep 11 00:00:50.769257 kernel: alternatives: applying system-wide alternatives
Sep 11 00:00:50.769264 kernel: CPU features: detected: Hardware dirty bit management on CPU0-3
Sep 11 00:00:50.769272 kernel: Memory: 2424480K/2572288K available (11136K kernel code, 2436K rwdata, 9084K rodata, 38976K init, 1038K bss, 125472K reserved, 16384K cma-reserved)
Sep 11 00:00:50.769279 kernel: devtmpfs: initialized
Sep 11 00:00:50.769286 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Sep 11 00:00:50.769293 kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear)
Sep 11 00:00:50.769300 kernel: 2G module region forced by RANDOMIZE_MODULE_REGION_FULL
Sep 11 00:00:50.769308 kernel: 0 pages in range for non-PLT usage
Sep 11 00:00:50.769315 kernel: 508560 pages in range for PLT usage
Sep 11 00:00:50.769322 kernel: pinctrl core: initialized pinctrl subsystem
Sep 11 00:00:50.769329 kernel: SMBIOS 3.0.0 present.
Sep 11 00:00:50.769336 kernel: DMI: QEMU KVM Virtual Machine, BIOS unknown 02/02/2022
Sep 11 00:00:50.769361 kernel: DMI: Memory slots populated: 1/1
Sep 11 00:00:50.769369 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Sep 11 00:00:50.769376 kernel: DMA: preallocated 512 KiB GFP_KERNEL pool for atomic allocations
Sep 11 00:00:50.769383 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations
Sep 11 00:00:50.769392 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations
Sep 11 00:00:50.769399 kernel: audit: initializing netlink subsys (disabled)
Sep 11 00:00:50.769406 kernel: audit: type=2000 audit(0.021:1): state=initialized audit_enabled=0 res=1
Sep 11 00:00:50.769413 kernel: thermal_sys: Registered thermal governor 'step_wise'
Sep 11 00:00:50.769420 kernel: cpuidle: using governor menu
Sep 11 00:00:50.769427 kernel: hw-breakpoint: found 6 breakpoint and 4 watchpoint registers.
Sep 11 00:00:50.769434 kernel: ASID allocator initialised with 32768 entries
Sep 11 00:00:50.769441 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Sep 11 00:00:50.769448 kernel: Serial: AMBA PL011 UART driver
Sep 11 00:00:50.769457 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Sep 11 00:00:50.769464 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 1.00 GiB page
Sep 11 00:00:50.769471 kernel: HugeTLB: registered 32.0 MiB page size, pre-allocated 0 pages
Sep 11 00:00:50.769478 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 32.0 MiB page
Sep 11 00:00:50.769485 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Sep 11 00:00:50.769492 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 2.00 MiB page
Sep 11 00:00:50.769499 kernel: HugeTLB: registered 64.0 KiB page size, pre-allocated 0 pages
Sep 11 00:00:50.769506 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 64.0 KiB page
Sep 11 00:00:50.769513 kernel: ACPI: Added _OSI(Module Device)
Sep 11 00:00:50.769521 kernel: ACPI: Added _OSI(Processor Device)
Sep 11 00:00:50.769528 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Sep 11 00:00:50.769535 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Sep 11 00:00:50.769542 kernel: ACPI: Interpreter enabled
Sep 11 00:00:50.769549 kernel: ACPI: Using GIC for interrupt routing
Sep 11 00:00:50.769556 kernel: ACPI: MCFG table detected, 1 entries
Sep 11 00:00:50.769563 kernel: ACPI: CPU0 has been hot-added
Sep 11 00:00:50.769570 kernel: ACPI: CPU1 has been hot-added
Sep 11 00:00:50.769577 kernel: ACPI: CPU2 has been hot-added
Sep 11 00:00:50.769583 kernel: ACPI: CPU3 has been hot-added
Sep 11 00:00:50.769592 kernel: ARMH0011:00: ttyAMA0 at MMIO 0x9000000 (irq = 12, base_baud = 0) is a SBSA
Sep 11 00:00:50.769599 kernel: printk: legacy console [ttyAMA0] enabled
Sep 11 00:00:50.769606 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Sep 11 00:00:50.769750 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
Sep 11 00:00:50.769817 kernel: acpi PNP0A08:00: _OSC: platform does not support [LTR]
Sep 11 00:00:50.769879 kernel: acpi PNP0A08:00: _OSC: OS now controls [PCIeHotplug PME AER PCIeCapability]
Sep 11 00:00:50.769937 kernel: acpi PNP0A08:00: ECAM area [mem 0x4010000000-0x401fffffff] reserved by PNP0C02:00
Sep 11 00:00:50.770000 kernel: acpi PNP0A08:00: ECAM at [mem 0x4010000000-0x401fffffff] for [bus 00-ff]
Sep 11 00:00:50.770009 kernel: ACPI: Remapped I/O 0x000000003eff0000 to [io 0x0000-0xffff window]
Sep 11 00:00:50.770016 kernel: PCI host bridge to bus 0000:00
Sep 11 00:00:50.770081 kernel: pci_bus 0000:00: root bus resource [mem 0x10000000-0x3efeffff window]
Sep 11 00:00:50.770135 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0xffff window]
Sep 11 00:00:50.770187 kernel: pci_bus 0000:00: root bus resource [mem 0x8000000000-0xffffffffff window]
Sep 11 00:00:50.770239 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Sep 11 00:00:50.770318 kernel: pci 0000:00:00.0: [1b36:0008] type 00 class 0x060000 conventional PCI endpoint
Sep 11 00:00:50.770420 kernel: pci 0000:00:01.0: [1af4:1005] type 00 class 0x00ff00 conventional PCI endpoint
Sep 11 00:00:50.770497 kernel: pci 0000:00:01.0: BAR 0 [io 0x0000-0x001f]
Sep 11 00:00:50.770559 kernel: pci 0000:00:01.0: BAR 1 [mem 0x10000000-0x10000fff]
Sep 11 00:00:50.770619 kernel: pci 0000:00:01.0: BAR 4 [mem 0x8000000000-0x8000003fff 64bit pref]
Sep 11 00:00:50.770690 kernel: pci 0000:00:01.0: BAR 4 [mem 0x8000000000-0x8000003fff 64bit pref]: assigned
Sep 11 00:00:50.770754 kernel: pci 0000:00:01.0: BAR 1 [mem 0x10000000-0x10000fff]: assigned
Sep 11 00:00:50.770819 kernel: pci 0000:00:01.0: BAR 0 [io 0x1000-0x101f]: assigned
Sep 11 00:00:50.770873 kernel: pci_bus 0000:00: resource 4 [mem 0x10000000-0x3efeffff window]
Sep 11 00:00:50.770926 kernel: pci_bus 0000:00: resource 5 [io 0x0000-0xffff window]
Sep 11 00:00:50.770980 kernel: pci_bus 0000:00: resource 6 [mem 0x8000000000-0xffffffffff window]
Sep 11 00:00:50.770989 kernel: ACPI: PCI: Interrupt link GSI0 configured for IRQ 35
Sep 11 00:00:50.770996 kernel: ACPI: PCI: Interrupt link GSI1 configured for IRQ 36
Sep 11 00:00:50.771003 kernel: ACPI: PCI: Interrupt link GSI2 configured for IRQ 37
Sep 11 00:00:50.771012 kernel: ACPI: PCI: Interrupt link GSI3 configured for IRQ 38
Sep 11 00:00:50.771019 kernel: iommu: Default domain type: Translated
Sep 11 00:00:50.771026 kernel: iommu: DMA domain TLB invalidation policy: strict mode
Sep 11 00:00:50.771033 kernel: efivars: Registered efivars operations
Sep 11 00:00:50.771040 kernel: vgaarb: loaded
Sep 11 00:00:50.771047 kernel: clocksource: Switched to clocksource arch_sys_counter
Sep 11 00:00:50.771054 kernel: VFS: Disk quotas dquot_6.6.0
Sep 11 00:00:50.771061 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Sep 11 00:00:50.771068 kernel: pnp: PnP ACPI init
Sep 11 00:00:50.771141 kernel: system 00:00: [mem 0x4010000000-0x401fffffff window] could not be reserved
Sep 11 00:00:50.771151 kernel: pnp: PnP ACPI: found 1 devices
Sep 11 00:00:50.771158 kernel: NET: Registered PF_INET protocol family
Sep 11 00:00:50.771165 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
Sep 11 00:00:50.771172 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
Sep 11 00:00:50.771179 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Sep 11 00:00:50.771186 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
Sep 11 00:00:50.771193 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear)
Sep 11 00:00:50.771202 kernel: TCP: Hash tables configured (established 32768 bind 32768)
Sep 11 00:00:50.771209 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
Sep 11 00:00:50.771216 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
Sep 11 00:00:50.771223 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Sep 11 00:00:50.771230 kernel: PCI: CLS 0 bytes, default 64
Sep 11 00:00:50.771237 kernel: kvm [1]: HYP mode not available
Sep 11 00:00:50.771244 kernel: Initialise system trusted keyrings
Sep 11 00:00:50.771251 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
Sep 11 00:00:50.771258 kernel: Key type asymmetric registered
Sep 11 00:00:50.771266 kernel: Asymmetric key parser 'x509' registered
Sep 11 00:00:50.771273 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 249)
Sep 11 00:00:50.771280 kernel: io scheduler mq-deadline registered
Sep 11 00:00:50.771287 kernel: io scheduler kyber registered
Sep 11 00:00:50.771294 kernel: io scheduler bfq registered
Sep 11 00:00:50.771301 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXSYBUS:00/PNP0C0C:00/input/input0
Sep 11 00:00:50.771308 kernel: ACPI: button: Power Button [PWRB]
Sep 11 00:00:50.771315 kernel: ACPI: \_SB_.PCI0.GSI1: Enabled at IRQ 36
Sep 11 00:00:50.771404 kernel: virtio-pci 0000:00:01.0: enabling device (0005 -> 0007)
Sep 11 00:00:50.771417 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Sep 11 00:00:50.771424 kernel: thunder_xcv, ver 1.0
Sep 11 00:00:50.771431 kernel: thunder_bgx, ver 1.0
Sep 11 00:00:50.771438 kernel: nicpf, ver 1.0
Sep 11 00:00:50.771444 kernel: nicvf, ver 1.0
Sep 11 00:00:50.771514 kernel: rtc-efi rtc-efi.0: registered as rtc0
Sep 11 00:00:50.771571 kernel: rtc-efi rtc-efi.0: setting system clock to 2025-09-11T00:00:50 UTC (1757548850)
Sep 11 00:00:50.771580 kernel: hid: raw HID events driver (C) Jiri Kosina
Sep 11 00:00:50.771590 kernel: hw perfevents: enabled with armv8_pmuv3_0 PMU driver, 7 (0,8000003f) counters available
Sep 11 00:00:50.771597 kernel: watchdog: NMI not fully supported
Sep 11 00:00:50.771604 kernel: watchdog: Hard watchdog permanently disabled
Sep 11 00:00:50.771611 kernel: NET: Registered PF_INET6 protocol family
Sep 11 00:00:50.771618 kernel: Segment Routing with IPv6
Sep 11 00:00:50.771625 kernel: In-situ OAM (IOAM) with IPv6
Sep 11 00:00:50.771632 kernel: NET: Registered PF_PACKET protocol family
Sep 11 00:00:50.771638 kernel: Key type dns_resolver registered
Sep 11 00:00:50.771645 kernel: registered taskstats version 1
Sep 11 00:00:50.771652 kernel: Loading compiled-in X.509 certificates
Sep 11 00:00:50.771660 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.12.46-flatcar: 3c20aab1105575c84ea94c1a59a27813fcebdea7'
Sep 11 00:00:50.771675 kernel: Demotion targets for Node 0: null
Sep 11 00:00:50.771683 kernel: Key type .fscrypt registered
Sep 11 00:00:50.771689 kernel: Key type fscrypt-provisioning registered
Sep 11 00:00:50.771696 kernel: ima: No TPM chip found, activating TPM-bypass!
Sep 11 00:00:50.771703 kernel: ima: Allocated hash algorithm: sha1
Sep 11 00:00:50.771710 kernel: ima: No architecture policies found
Sep 11 00:00:50.771717 kernel: alg: No test for fips(ansi_cprng) (fips_ansi_cprng)
Sep 11 00:00:50.771726 kernel: clk: Disabling unused clocks
Sep 11 00:00:50.771733 kernel: PM: genpd: Disabling unused power domains
Sep 11 00:00:50.771740 kernel: Warning: unable to open an initial console.
Sep 11 00:00:50.771747 kernel: Freeing unused kernel memory: 38976K
Sep 11 00:00:50.771754 kernel: Run /init as init process
Sep 11 00:00:50.771761 kernel: with arguments:
Sep 11 00:00:50.771768 kernel: /init
Sep 11 00:00:50.771774 kernel: with environment:
Sep 11 00:00:50.771781 kernel: HOME=/
Sep 11 00:00:50.771789 kernel: TERM=linux
Sep 11 00:00:50.771796 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a
Sep 11 00:00:50.771804 systemd[1]: Successfully made /usr/ read-only.
Sep 11 00:00:50.771814 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE)
Sep 11 00:00:50.771822 systemd[1]: Detected virtualization kvm.
Sep 11 00:00:50.771829 systemd[1]: Detected architecture arm64.
Sep 11 00:00:50.771836 systemd[1]: Running in initrd.
Sep 11 00:00:50.771843 systemd[1]: No hostname configured, using default hostname.
Sep 11 00:00:50.771853 systemd[1]: Hostname set to .
Sep 11 00:00:50.771860 systemd[1]: Initializing machine ID from VM UUID.
Sep 11 00:00:50.771867 systemd[1]: Queued start job for default target initrd.target.
Sep 11 00:00:50.771875 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Sep 11 00:00:50.771882 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Sep 11 00:00:50.771891 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Sep 11 00:00:50.771898 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Sep 11 00:00:50.771906 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Sep 11 00:00:50.771916 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Sep 11 00:00:50.771924 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Sep 11 00:00:50.771932 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Sep 11 00:00:50.771939 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Sep 11 00:00:50.771947 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Sep 11 00:00:50.771954 systemd[1]: Reached target paths.target - Path Units.
Sep 11 00:00:50.771963 systemd[1]: Reached target slices.target - Slice Units.
Sep 11 00:00:50.771970 systemd[1]: Reached target swap.target - Swaps.
Sep 11 00:00:50.771978 systemd[1]: Reached target timers.target - Timer Units.
Sep 11 00:00:50.771985 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Sep 11 00:00:50.771993 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Sep 11 00:00:50.772001 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Sep 11 00:00:50.772008 systemd[1]: Listening on systemd-journald.socket - Journal Sockets.
Sep 11 00:00:50.772016 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Sep 11 00:00:50.772024 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Sep 11 00:00:50.772033 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Sep 11 00:00:50.772040 systemd[1]: Reached target sockets.target - Socket Units.
Sep 11 00:00:50.772051 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Sep 11 00:00:50.772058 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Sep 11 00:00:50.772067 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Sep 11 00:00:50.772076 systemd[1]: systemd-battery-check.service - Check battery level during early boot was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/class/power_supply).
Sep 11 00:00:50.772083 systemd[1]: Starting systemd-fsck-usr.service...
Sep 11 00:00:50.772091 systemd[1]: Starting systemd-journald.service - Journal Service...
Sep 11 00:00:50.772099 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Sep 11 00:00:50.772109 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Sep 11 00:00:50.772117 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Sep 11 00:00:50.772125 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Sep 11 00:00:50.772135 systemd[1]: Finished systemd-fsck-usr.service.
Sep 11 00:00:50.772144 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Sep 11 00:00:50.772177 systemd-journald[246]: Collecting audit messages is disabled.
Sep 11 00:00:50.772196 systemd-journald[246]: Journal started
Sep 11 00:00:50.772216 systemd-journald[246]: Runtime Journal (/run/log/journal/63268978866b4b428eea11dbe6fd2f9a) is 6M, max 48.5M, 42.4M free.
Sep 11 00:00:50.773497 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Sep 11 00:00:50.762203 systemd-modules-load[247]: Inserted module 'overlay'
Sep 11 00:00:50.776744 systemd[1]: Started systemd-journald.service - Journal Service.
Sep 11 00:00:50.776761 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Sep 11 00:00:50.779854 systemd-modules-load[247]: Inserted module 'br_netfilter'
Sep 11 00:00:50.780686 kernel: Bridge firewalling registered
Sep 11 00:00:50.780895 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Sep 11 00:00:50.782390 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Sep 11 00:00:50.783841 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Sep 11 00:00:50.791515 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Sep 11 00:00:50.793808 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Sep 11 00:00:50.795185 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Sep 11 00:00:50.796832 systemd-tmpfiles[266]: /usr/lib/tmpfiles.d/var.conf:14: Duplicate line for path "/var/log", ignoring.
Sep 11 00:00:50.800469 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Sep 11 00:00:50.807149 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Sep 11 00:00:50.809067 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Sep 11 00:00:50.811940 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Sep 11 00:00:50.813789 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Sep 11 00:00:50.816490 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Sep 11 00:00:50.835474 dracut-cmdline[291]: Using kernel command line parameters: rd.driver.pre=btrfs SYSTEMD_SULOGIN_FORCE=1 BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected acpi=force verity.usrhash=dd9c14cce645c634e06a91b09405eea80057f02909b9267c482dc457df1cddec
Sep 11 00:00:50.849317 systemd-resolved[290]: Positive Trust Anchors:
Sep 11 00:00:50.849334 systemd-resolved[290]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Sep 11 00:00:50.849382 systemd-resolved[290]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Sep 11 00:00:50.854182 systemd-resolved[290]: Defaulting to hostname 'linux'.
Sep 11 00:00:50.855390 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Sep 11 00:00:50.857717 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Sep 11 00:00:50.906376 kernel: SCSI subsystem initialized
Sep 11 00:00:50.910370 kernel: Loading iSCSI transport class v2.0-870.
Sep 11 00:00:50.918399 kernel: iscsi: registered transport (tcp)
Sep 11 00:00:50.930360 kernel: iscsi: registered transport (qla4xxx)
Sep 11 00:00:50.930387 kernel: QLogic iSCSI HBA Driver
Sep 11 00:00:50.947303 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Sep 11 00:00:50.967702 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Sep 11 00:00:50.969019 systemd[1]: Reached target network-pre.target - Preparation for Network.
Sep 11 00:00:51.016170 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
Sep 11 00:00:51.018889 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
Sep 11 00:00:51.080385 kernel: raid6: neonx8 gen() 15624 MB/s
Sep 11 00:00:51.097361 kernel: raid6: neonx4 gen() 15695 MB/s
Sep 11 00:00:51.114367 kernel: raid6: neonx2 gen() 13149 MB/s
Sep 11 00:00:51.131368 kernel: raid6: neonx1 gen() 10420 MB/s
Sep 11 00:00:51.148360 kernel: raid6: int64x8 gen() 6832 MB/s
Sep 11 00:00:51.165369 kernel: raid6: int64x4 gen() 7327 MB/s
Sep 11 00:00:51.182368 kernel: raid6: int64x2 gen() 6064 MB/s
Sep 11 00:00:51.199365 kernel: raid6: int64x1 gen() 5028 MB/s
Sep 11 00:00:51.199396 kernel: raid6: using algorithm neonx4 gen() 15695 MB/s
Sep 11 00:00:51.216379 kernel: raid6: .... xor() 12281 MB/s, rmw enabled
Sep 11 00:00:51.216406 kernel: raid6: using neon recovery algorithm
Sep 11 00:00:51.221476 kernel: xor: measuring software checksum speed
Sep 11 00:00:51.221493 kernel: 8regs : 21404 MB/sec
Sep 11 00:00:51.222542 kernel: 32regs : 21687 MB/sec
Sep 11 00:00:51.222555 kernel: arm64_neon : 26760 MB/sec
Sep 11 00:00:51.222564 kernel: xor: using function: arm64_neon (26760 MB/sec)
Sep 11 00:00:51.274371 kernel: Btrfs loaded, zoned=no, fsverity=no
Sep 11 00:00:51.281112 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
Sep 11 00:00:51.283439 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Sep 11 00:00:51.310951 systemd-udevd[500]: Using default interface naming scheme 'v255'.
Sep 11 00:00:51.315491 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Sep 11 00:00:51.318367 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
Sep 11 00:00:51.355091 dracut-pre-trigger[510]: rd.md=0: removing MD RAID activation
Sep 11 00:00:51.377968 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
Sep 11 00:00:51.380074 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Sep 11 00:00:51.429297 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Sep 11 00:00:51.431562 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
Sep 11 00:00:51.481379 kernel: virtio_blk virtio1: 1/0/0 default/read/poll queues
Sep 11 00:00:51.483848 kernel: virtio_blk virtio1: [vda] 19775488 512-byte logical blocks (10.1 GB/9.43 GiB)
Sep 11 00:00:51.490633 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Sep 11 00:00:51.490721 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Sep 11 00:00:51.496337 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Sep 11 00:00:51.497904 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Sep 11 00:00:51.501374 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
Sep 11 00:00:51.501402 kernel: GPT:9289727 != 19775487
Sep 11 00:00:51.501417 kernel: GPT:Alternate GPT header not at the end of the disk.
Sep 11 00:00:51.501426 kernel: GPT:9289727 != 19775487
Sep 11 00:00:51.501434 kernel: GPT: Use GNU Parted to correct GPT errors.
Sep 11 00:00:51.501443 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Sep 11 00:00:51.522254 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM.
Sep 11 00:00:51.529369 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Sep 11 00:00:51.535418 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
Sep 11 00:00:51.543487 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT.
Sep 11 00:00:51.554191 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132.
Sep 11 00:00:51.555243 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A.
Sep 11 00:00:51.563972 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
Sep 11 00:00:51.565018 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. Sep 11 00:00:51.566653 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Sep 11 00:00:51.568203 systemd[1]: Reached target remote-fs.target - Remote File Systems. Sep 11 00:00:51.570634 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... Sep 11 00:00:51.572213 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... Sep 11 00:00:51.596364 disk-uuid[592]: Primary Header is updated. Sep 11 00:00:51.596364 disk-uuid[592]: Secondary Entries is updated. Sep 11 00:00:51.596364 disk-uuid[592]: Secondary Header is updated. Sep 11 00:00:51.601431 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Sep 11 00:00:51.602103 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. Sep 11 00:00:52.611396 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Sep 11 00:00:52.612031 disk-uuid[597]: The operation has completed successfully. Sep 11 00:00:52.637252 systemd[1]: disk-uuid.service: Deactivated successfully. Sep 11 00:00:52.637388 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. Sep 11 00:00:52.662850 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr... Sep 11 00:00:52.683182 sh[611]: Success Sep 11 00:00:52.694784 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. Sep 11 00:00:52.694823 kernel: device-mapper: uevent: version 1.0.3 Sep 11 00:00:52.695646 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@lists.linux.dev Sep 11 00:00:52.702388 kernel: device-mapper: verity: sha256 using shash "sha256-ce" Sep 11 00:00:52.728105 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr. Sep 11 00:00:52.730417 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr... 
Sep 11 00:00:52.745373 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr. Sep 11 00:00:52.752553 kernel: BTRFS: device fsid 3b17f37f-d395-4116-a46d-e07f86112ade devid 1 transid 39 /dev/mapper/usr (253:0) scanned by mount (624) Sep 11 00:00:52.752586 kernel: BTRFS info (device dm-0): first mount of filesystem 3b17f37f-d395-4116-a46d-e07f86112ade Sep 11 00:00:52.752597 kernel: BTRFS info (device dm-0): using crc32c (crc32c-generic) checksum algorithm Sep 11 00:00:52.756655 kernel: BTRFS info (device dm-0): disabling log replay at mount time Sep 11 00:00:52.756693 kernel: BTRFS info (device dm-0): enabling free space tree Sep 11 00:00:52.757563 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr. Sep 11 00:00:52.758560 systemd[1]: Reached target initrd-usr-fs.target - Initrd /usr File System. Sep 11 00:00:52.759634 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. Sep 11 00:00:52.760327 systemd[1]: Starting ignition-setup.service - Ignition (setup)... Sep 11 00:00:52.762963 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... Sep 11 00:00:52.783441 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 (254:6) scanned by mount (656) Sep 11 00:00:52.783474 kernel: BTRFS info (device vda6): first mount of filesystem 538ffae8-60fb-4c82-9100-efc4d2404f73 Sep 11 00:00:52.784947 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm Sep 11 00:00:52.787363 kernel: BTRFS info (device vda6): turning on async discard Sep 11 00:00:52.787397 kernel: BTRFS info (device vda6): enabling free space tree Sep 11 00:00:52.791365 kernel: BTRFS info (device vda6): last unmount of filesystem 538ffae8-60fb-4c82-9100-efc4d2404f73 Sep 11 00:00:52.791768 systemd[1]: Finished ignition-setup.service - Ignition (setup). 
Sep 11 00:00:52.793645 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... Sep 11 00:00:52.863014 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Sep 11 00:00:52.865526 systemd[1]: Starting systemd-networkd.service - Network Configuration... Sep 11 00:00:52.900043 systemd-networkd[804]: lo: Link UP Sep 11 00:00:52.900053 systemd-networkd[804]: lo: Gained carrier Sep 11 00:00:52.900799 systemd-networkd[804]: Enumeration completed Sep 11 00:00:52.901170 systemd-networkd[804]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Sep 11 00:00:52.901174 systemd-networkd[804]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Sep 11 00:00:52.901587 systemd[1]: Started systemd-networkd.service - Network Configuration. Sep 11 00:00:52.901928 systemd-networkd[804]: eth0: Link UP Sep 11 00:00:52.907314 ignition[698]: Ignition 2.21.0 Sep 11 00:00:52.902018 systemd-networkd[804]: eth0: Gained carrier Sep 11 00:00:52.907320 ignition[698]: Stage: fetch-offline Sep 11 00:00:52.902027 systemd-networkd[804]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Sep 11 00:00:52.907370 ignition[698]: no configs at "/usr/lib/ignition/base.d" Sep 11 00:00:52.902624 systemd[1]: Reached target network.target - Network. 
Sep 11 00:00:52.907378 ignition[698]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Sep 11 00:00:52.907533 ignition[698]: parsed url from cmdline: "" Sep 11 00:00:52.907536 ignition[698]: no config URL provided Sep 11 00:00:52.907540 ignition[698]: reading system config file "/usr/lib/ignition/user.ign" Sep 11 00:00:52.907546 ignition[698]: no config at "/usr/lib/ignition/user.ign" Sep 11 00:00:52.907563 ignition[698]: op(1): [started] loading QEMU firmware config module Sep 11 00:00:52.907572 ignition[698]: op(1): executing: "modprobe" "qemu_fw_cfg" Sep 11 00:00:52.918073 ignition[698]: op(1): [finished] loading QEMU firmware config module Sep 11 00:00:52.919390 systemd-networkd[804]: eth0: DHCPv4 address 10.0.0.103/16, gateway 10.0.0.1 acquired from 10.0.0.1 Sep 11 00:00:52.967588 ignition[698]: parsing config with SHA512: 640af9c5973de39773757c338241625cbff1467f4c2c65f831cabe99a29b1aebeaacb001fbfe2386c3f85a019acae8c1072d6038764f21f0da32fe0ce4e63f76 Sep 11 00:00:52.972488 unknown[698]: fetched base config from "system" Sep 11 00:00:52.972500 unknown[698]: fetched user config from "qemu" Sep 11 00:00:52.972872 ignition[698]: fetch-offline: fetch-offline passed Sep 11 00:00:52.972921 ignition[698]: Ignition finished successfully Sep 11 00:00:52.977270 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). Sep 11 00:00:52.978760 systemd[1]: ignition-fetch.service - Ignition (fetch) was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json). Sep 11 00:00:52.979511 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... 
Sep 11 00:00:53.008592 ignition[812]: Ignition 2.21.0 Sep 11 00:00:53.008607 ignition[812]: Stage: kargs Sep 11 00:00:53.008746 ignition[812]: no configs at "/usr/lib/ignition/base.d" Sep 11 00:00:53.008756 ignition[812]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Sep 11 00:00:53.010604 ignition[812]: kargs: kargs passed Sep 11 00:00:53.012953 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). Sep 11 00:00:53.010679 ignition[812]: Ignition finished successfully Sep 11 00:00:53.015048 systemd[1]: Starting ignition-disks.service - Ignition (disks)... Sep 11 00:00:53.037470 ignition[820]: Ignition 2.21.0 Sep 11 00:00:53.037485 ignition[820]: Stage: disks Sep 11 00:00:53.038461 ignition[820]: no configs at "/usr/lib/ignition/base.d" Sep 11 00:00:53.038472 ignition[820]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Sep 11 00:00:53.039602 ignition[820]: disks: disks passed Sep 11 00:00:53.039665 ignition[820]: Ignition finished successfully Sep 11 00:00:53.041245 systemd[1]: Finished ignition-disks.service - Ignition (disks). Sep 11 00:00:53.042271 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. Sep 11 00:00:53.044146 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Sep 11 00:00:53.046296 systemd[1]: Reached target local-fs.target - Local File Systems. Sep 11 00:00:53.048282 systemd[1]: Reached target sysinit.target - System Initialization. Sep 11 00:00:53.050191 systemd[1]: Reached target basic.target - Basic System. Sep 11 00:00:53.052902 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... Sep 11 00:00:53.078522 systemd-fsck[831]: ROOT: clean, 15/553520 files, 52789/553472 blocks Sep 11 00:00:53.083205 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. Sep 11 00:00:53.085146 systemd[1]: Mounting sysroot.mount - /sysroot... 
Sep 11 00:00:53.152391 kernel: EXT4-fs (vda9): mounted filesystem fcae628f-5f9a-4539-a638-93fb1399b5d7 r/w with ordered data mode. Quota mode: none. Sep 11 00:00:53.152976 systemd[1]: Mounted sysroot.mount - /sysroot. Sep 11 00:00:53.154044 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. Sep 11 00:00:53.156689 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Sep 11 00:00:53.158671 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... Sep 11 00:00:53.159460 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met. Sep 11 00:00:53.159499 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Sep 11 00:00:53.159521 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. Sep 11 00:00:53.178638 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. Sep 11 00:00:53.180437 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... Sep 11 00:00:53.188399 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 (254:6) scanned by mount (839) Sep 11 00:00:53.188442 kernel: BTRFS info (device vda6): first mount of filesystem 538ffae8-60fb-4c82-9100-efc4d2404f73 Sep 11 00:00:53.188453 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm Sep 11 00:00:53.191787 kernel: BTRFS info (device vda6): turning on async discard Sep 11 00:00:53.191850 kernel: BTRFS info (device vda6): enabling free space tree Sep 11 00:00:53.193063 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. 
Sep 11 00:00:53.215122 initrd-setup-root[863]: cut: /sysroot/etc/passwd: No such file or directory Sep 11 00:00:53.219392 initrd-setup-root[870]: cut: /sysroot/etc/group: No such file or directory Sep 11 00:00:53.223208 initrd-setup-root[877]: cut: /sysroot/etc/shadow: No such file or directory Sep 11 00:00:53.226910 initrd-setup-root[884]: cut: /sysroot/etc/gshadow: No such file or directory Sep 11 00:00:53.292673 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. Sep 11 00:00:53.296436 systemd[1]: Starting ignition-mount.service - Ignition (mount)... Sep 11 00:00:53.297849 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... Sep 11 00:00:53.315372 kernel: BTRFS info (device vda6): last unmount of filesystem 538ffae8-60fb-4c82-9100-efc4d2404f73 Sep 11 00:00:53.326387 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. Sep 11 00:00:53.336861 ignition[953]: INFO : Ignition 2.21.0 Sep 11 00:00:53.336861 ignition[953]: INFO : Stage: mount Sep 11 00:00:53.338544 ignition[953]: INFO : no configs at "/usr/lib/ignition/base.d" Sep 11 00:00:53.338544 ignition[953]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Sep 11 00:00:53.338544 ignition[953]: INFO : mount: mount passed Sep 11 00:00:53.338544 ignition[953]: INFO : Ignition finished successfully Sep 11 00:00:53.340695 systemd[1]: Finished ignition-mount.service - Ignition (mount). Sep 11 00:00:53.342371 systemd[1]: Starting ignition-files.service - Ignition (files)... Sep 11 00:00:53.758389 systemd[1]: sysroot-oem.mount: Deactivated successfully. Sep 11 00:00:53.759878 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... 
Sep 11 00:00:53.788888 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 (254:6) scanned by mount (965) Sep 11 00:00:53.788923 kernel: BTRFS info (device vda6): first mount of filesystem 538ffae8-60fb-4c82-9100-efc4d2404f73 Sep 11 00:00:53.788933 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm Sep 11 00:00:53.791846 kernel: BTRFS info (device vda6): turning on async discard Sep 11 00:00:53.791887 kernel: BTRFS info (device vda6): enabling free space tree Sep 11 00:00:53.793241 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Sep 11 00:00:53.824355 ignition[982]: INFO : Ignition 2.21.0 Sep 11 00:00:53.824355 ignition[982]: INFO : Stage: files Sep 11 00:00:53.825821 ignition[982]: INFO : no configs at "/usr/lib/ignition/base.d" Sep 11 00:00:53.825821 ignition[982]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Sep 11 00:00:53.825821 ignition[982]: DEBUG : files: compiled without relabeling support, skipping Sep 11 00:00:53.828633 ignition[982]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Sep 11 00:00:53.828633 ignition[982]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Sep 11 00:00:53.828633 ignition[982]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Sep 11 00:00:53.828633 ignition[982]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Sep 11 00:00:53.828633 ignition[982]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Sep 11 00:00:53.828139 unknown[982]: wrote ssh authorized keys file for user: core Sep 11 00:00:53.835159 ignition[982]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.3-linux-arm64.tar.gz" Sep 11 00:00:53.835159 ignition[982]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.3-linux-arm64.tar.gz: attempt #1
Sep 11 00:00:53.902133 ignition[982]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK Sep 11 00:00:53.947529 systemd-networkd[804]: eth0: Gained IPv6LL Sep 11 00:00:54.300458 ignition[982]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.3-linux-arm64.tar.gz" Sep 11 00:00:54.301956 ignition[982]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh" Sep 11 00:00:54.301956 ignition[982]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh" Sep 11 00:00:54.301956 ignition[982]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml" Sep 11 00:00:54.301956 ignition[982]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml" Sep 11 00:00:54.301956 ignition[982]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Sep 11 00:00:54.301956 ignition[982]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Sep 11 00:00:54.301956 ignition[982]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Sep 11 00:00:54.301956 ignition[982]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" Sep 11 00:00:54.313628 ignition[982]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf" Sep 11 00:00:54.313628 ignition[982]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf" Sep 11 00:00:54.313628 ignition[982]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.33.0-arm64.raw"
Sep 11 00:00:54.313628 ignition[982]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.33.0-arm64.raw" Sep 11 00:00:54.313628 ignition[982]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.33.0-arm64.raw" Sep 11 00:00:54.313628 ignition[982]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://extensions.flatcar.org/extensions/kubernetes-v1.33.0-arm64.raw: attempt #1 Sep 11 00:00:54.708979 ignition[982]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK Sep 11 00:00:55.257683 ignition[982]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.33.0-arm64.raw" Sep 11 00:00:55.257683 ignition[982]: INFO : files: op(b): [started] processing unit "prepare-helm.service" Sep 11 00:00:55.261338 ignition[982]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Sep 11 00:00:55.265686 ignition[982]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Sep 11 00:00:55.265686 ignition[982]: INFO : files: op(b): [finished] processing unit "prepare-helm.service" Sep 11 00:00:55.265686 ignition[982]: INFO : files: op(d): [started] processing unit "coreos-metadata.service" Sep 11 00:00:55.265686 ignition[982]: INFO : files: op(d): op(e): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" Sep 11 00:00:55.273265 ignition[982]: INFO : files: op(d): op(e): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
Sep 11 00:00:55.273265 ignition[982]: INFO : files: op(d): [finished] processing unit "coreos-metadata.service" Sep 11 00:00:55.273265 ignition[982]: INFO : files: op(f): [started] setting preset to disabled for "coreos-metadata.service" Sep 11 00:00:55.289157 ignition[982]: INFO : files: op(f): op(10): [started] removing enablement symlink(s) for "coreos-metadata.service" Sep 11 00:00:55.293288 ignition[982]: INFO : files: op(f): op(10): [finished] removing enablement symlink(s) for "coreos-metadata.service" Sep 11 00:00:55.295988 ignition[982]: INFO : files: op(f): [finished] setting preset to disabled for "coreos-metadata.service" Sep 11 00:00:55.295988 ignition[982]: INFO : files: op(11): [started] setting preset to enabled for "prepare-helm.service" Sep 11 00:00:55.295988 ignition[982]: INFO : files: op(11): [finished] setting preset to enabled for "prepare-helm.service" Sep 11 00:00:55.295988 ignition[982]: INFO : files: createResultFile: createFiles: op(12): [started] writing file "/sysroot/etc/.ignition-result.json" Sep 11 00:00:55.295988 ignition[982]: INFO : files: createResultFile: createFiles: op(12): [finished] writing file "/sysroot/etc/.ignition-result.json" Sep 11 00:00:55.295988 ignition[982]: INFO : files: files passed Sep 11 00:00:55.295988 ignition[982]: INFO : Ignition finished successfully Sep 11 00:00:55.297525 systemd[1]: Finished ignition-files.service - Ignition (files). Sep 11 00:00:55.306108 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... Sep 11 00:00:55.322997 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... Sep 11 00:00:55.326309 systemd[1]: ignition-quench.service: Deactivated successfully. Sep 11 00:00:55.326535 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
Sep 11 00:00:55.331765 initrd-setup-root-after-ignition[1011]: grep: /sysroot/oem/oem-release: No such file or directory Sep 11 00:00:55.334473 initrd-setup-root-after-ignition[1013]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Sep 11 00:00:55.334473 initrd-setup-root-after-ignition[1013]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory Sep 11 00:00:55.337325 initrd-setup-root-after-ignition[1017]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Sep 11 00:00:55.337127 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. Sep 11 00:00:55.341115 systemd[1]: Reached target ignition-complete.target - Ignition Complete. Sep 11 00:00:55.343508 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... Sep 11 00:00:55.383632 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Sep 11 00:00:55.383780 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. Sep 11 00:00:55.385552 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. Sep 11 00:00:55.387264 systemd[1]: Reached target initrd.target - Initrd Default Target. Sep 11 00:00:55.388189 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Sep 11 00:00:55.388998 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... Sep 11 00:00:55.412729 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Sep 11 00:00:55.415013 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... Sep 11 00:00:55.440596 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. Sep 11 00:00:55.441717 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. Sep 11 00:00:55.443358 systemd[1]: Stopped target timers.target - Timer Units. 
Sep 11 00:00:55.444966 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Sep 11 00:00:55.445091 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Sep 11 00:00:55.447238 systemd[1]: Stopped target initrd.target - Initrd Default Target. Sep 11 00:00:55.449118 systemd[1]: Stopped target basic.target - Basic System. Sep 11 00:00:55.451256 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. Sep 11 00:00:55.453544 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. Sep 11 00:00:55.456313 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. Sep 11 00:00:55.457947 systemd[1]: Stopped target initrd-usr-fs.target - Initrd /usr File System. Sep 11 00:00:55.459680 systemd[1]: Stopped target remote-fs.target - Remote File Systems. Sep 11 00:00:55.461268 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Sep 11 00:00:55.463864 systemd[1]: Stopped target sysinit.target - System Initialization. Sep 11 00:00:55.465329 systemd[1]: Stopped target local-fs.target - Local File Systems. Sep 11 00:00:55.466775 systemd[1]: Stopped target swap.target - Swaps. Sep 11 00:00:55.468091 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Sep 11 00:00:55.468223 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Sep 11 00:00:55.470140 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. Sep 11 00:00:55.471626 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Sep 11 00:00:55.473214 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. Sep 11 00:00:55.476424 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Sep 11 00:00:55.477376 systemd[1]: dracut-initqueue.service: Deactivated successfully. Sep 11 00:00:55.477492 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. 
Sep 11 00:00:55.479993 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Sep 11 00:00:55.480106 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Sep 11 00:00:55.481795 systemd[1]: Stopped target paths.target - Path Units. Sep 11 00:00:55.483144 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Sep 11 00:00:55.486450 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Sep 11 00:00:55.487564 systemd[1]: Stopped target slices.target - Slice Units. Sep 11 00:00:55.489334 systemd[1]: Stopped target sockets.target - Socket Units. Sep 11 00:00:55.490834 systemd[1]: iscsid.socket: Deactivated successfully. Sep 11 00:00:55.490919 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Sep 11 00:00:55.493219 systemd[1]: iscsiuio.socket: Deactivated successfully. Sep 11 00:00:55.493293 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Sep 11 00:00:55.497532 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Sep 11 00:00:55.497660 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Sep 11 00:00:55.502113 systemd[1]: ignition-files.service: Deactivated successfully. Sep 11 00:00:55.502796 systemd[1]: Stopped ignition-files.service - Ignition (files). Sep 11 00:00:55.505163 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... Sep 11 00:00:55.506483 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Sep 11 00:00:55.506604 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Sep 11 00:00:55.521027 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Sep 11 00:00:55.521821 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Sep 11 00:00:55.521940 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. 
Sep 11 00:00:55.523479 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Sep 11 00:00:55.523574 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Sep 11 00:00:55.532220 systemd[1]: initrd-cleanup.service: Deactivated successfully. Sep 11 00:00:55.532312 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. Sep 11 00:00:55.536251 ignition[1037]: INFO : Ignition 2.21.0 Sep 11 00:00:55.536251 ignition[1037]: INFO : Stage: umount Sep 11 00:00:55.536251 ignition[1037]: INFO : no configs at "/usr/lib/ignition/base.d" Sep 11 00:00:55.536251 ignition[1037]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Sep 11 00:00:55.536251 ignition[1037]: INFO : umount: umount passed Sep 11 00:00:55.536251 ignition[1037]: INFO : Ignition finished successfully Sep 11 00:00:55.537318 systemd[1]: sysroot-boot.mount: Deactivated successfully. Sep 11 00:00:55.538551 systemd[1]: ignition-mount.service: Deactivated successfully. Sep 11 00:00:55.538739 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Sep 11 00:00:55.541402 systemd[1]: Stopped target network.target - Network. Sep 11 00:00:55.542587 systemd[1]: ignition-disks.service: Deactivated successfully. Sep 11 00:00:55.542675 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Sep 11 00:00:55.544148 systemd[1]: ignition-kargs.service: Deactivated successfully. Sep 11 00:00:55.544301 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Sep 11 00:00:55.545778 systemd[1]: ignition-setup.service: Deactivated successfully. Sep 11 00:00:55.545834 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Sep 11 00:00:55.547645 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Sep 11 00:00:55.547704 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Sep 11 00:00:55.550027 systemd[1]: Stopping systemd-networkd.service - Network Configuration... 
Sep 11 00:00:55.551659 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Sep 11 00:00:55.562147 systemd[1]: systemd-resolved.service: Deactivated successfully. Sep 11 00:00:55.562289 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Sep 11 00:00:55.565852 systemd[1]: run-credentials-systemd\x2dresolved.service.mount: Deactivated successfully. Sep 11 00:00:55.566085 systemd[1]: systemd-networkd.service: Deactivated successfully. Sep 11 00:00:55.566180 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Sep 11 00:00:55.570112 systemd[1]: run-credentials-systemd\x2dnetworkd.service.mount: Deactivated successfully. Sep 11 00:00:55.570694 systemd[1]: Stopped target network-pre.target - Preparation for Network. Sep 11 00:00:55.572393 systemd[1]: systemd-networkd.socket: Deactivated successfully. Sep 11 00:00:55.572432 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Sep 11 00:00:55.575133 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Sep 11 00:00:55.576125 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Sep 11 00:00:55.576207 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Sep 11 00:00:55.578302 systemd[1]: systemd-sysctl.service: Deactivated successfully. Sep 11 00:00:55.578340 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Sep 11 00:00:55.580572 systemd[1]: systemd-modules-load.service: Deactivated successfully. Sep 11 00:00:55.580614 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Sep 11 00:00:55.582311 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Sep 11 00:00:55.583016 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. Sep 11 00:00:55.585132 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... 
Sep 11 00:00:55.587890 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully.
Sep 11 00:00:55.587963 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup.service.mount: Deactivated successfully.
Sep 11 00:00:55.601129 systemd[1]: network-cleanup.service: Deactivated successfully.
Sep 11 00:00:55.601270 systemd[1]: Stopped network-cleanup.service - Network Cleanup.
Sep 11 00:00:55.605035 systemd[1]: systemd-udevd.service: Deactivated successfully.
Sep 11 00:00:55.606182 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files.
Sep 11 00:00:55.608700 systemd[1]: sysroot-boot.service: Deactivated successfully.
Sep 11 00:00:55.608790 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot.
Sep 11 00:00:55.610162 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Sep 11 00:00:55.610217 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket.
Sep 11 00:00:55.611285 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Sep 11 00:00:55.611318 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket.
Sep 11 00:00:55.612879 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Sep 11 00:00:55.612932 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook.
Sep 11 00:00:55.615169 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Sep 11 00:00:55.615218 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook.
Sep 11 00:00:55.616811 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Sep 11 00:00:55.616859 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Sep 11 00:00:55.618980 systemd[1]: initrd-setup-root.service: Deactivated successfully.
Sep 11 00:00:55.619025 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup.
Sep 11 00:00:55.621176 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database...
Sep 11 00:00:55.622490 systemd[1]: systemd-network-generator.service: Deactivated successfully.
Sep 11 00:00:55.622542 systemd[1]: Stopped systemd-network-generator.service - Generate network units from Kernel command line.
Sep 11 00:00:55.625168 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Sep 11 00:00:55.625205 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Sep 11 00:00:55.627653 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Sep 11 00:00:55.627694 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Sep 11 00:00:55.631279 systemd[1]: run-credentials-systemd\x2dnetwork\x2dgenerator.service.mount: Deactivated successfully.
Sep 11 00:00:55.631331 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup\x2ddev.service.mount: Deactivated successfully.
Sep 11 00:00:55.631406 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully.
Sep 11 00:00:55.640603 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Sep 11 00:00:55.640728 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database.
Sep 11 00:00:55.642742 systemd[1]: Reached target initrd-switch-root.target - Switch Root.
Sep 11 00:00:55.644951 systemd[1]: Starting initrd-switch-root.service - Switch Root...
Sep 11 00:00:55.668115 systemd[1]: Switching root.
Sep 11 00:00:55.686216 systemd-journald[246]: Journal stopped
Sep 11 00:00:56.419534 systemd-journald[246]: Received SIGTERM from PID 1 (systemd).
Sep 11 00:00:56.419583 kernel: SELinux: policy capability network_peer_controls=1
Sep 11 00:00:56.419600 kernel: SELinux: policy capability open_perms=1
Sep 11 00:00:56.419612 kernel: SELinux: policy capability extended_socket_class=1
Sep 11 00:00:56.419622 kernel: SELinux: policy capability always_check_network=0
Sep 11 00:00:56.419632 kernel: SELinux: policy capability cgroup_seclabel=1
Sep 11 00:00:56.419655 kernel: SELinux: policy capability nnp_nosuid_transition=1
Sep 11 00:00:56.419665 kernel: SELinux: policy capability genfs_seclabel_symlinks=0
Sep 11 00:00:56.419675 kernel: SELinux: policy capability ioctl_skip_cloexec=0
Sep 11 00:00:56.419685 kernel: SELinux: policy capability userspace_initial_context=0
Sep 11 00:00:56.419695 kernel: audit: type=1403 audit(1757548855.844:2): auid=4294967295 ses=4294967295 lsm=selinux res=1
Sep 11 00:00:56.419709 systemd[1]: Successfully loaded SELinux policy in 47.027ms.
Sep 11 00:00:56.419726 systemd[1]: Relabeled /dev/, /dev/shm/, /run/ in 9.715ms.
Sep 11 00:00:56.419737 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE)
Sep 11 00:00:56.419748 systemd[1]: Detected virtualization kvm.
Sep 11 00:00:56.419758 systemd[1]: Detected architecture arm64.
Sep 11 00:00:56.419767 systemd[1]: Detected first boot.
Sep 11 00:00:56.419777 systemd[1]: Initializing machine ID from VM UUID.
Sep 11 00:00:56.419789 zram_generator::config[1083]: No configuration found.
Sep 11 00:00:56.419800 kernel: NET: Registered PF_VSOCK protocol family
Sep 11 00:00:56.419810 systemd[1]: Populated /etc with preset unit settings.
Sep 11 00:00:56.419820 systemd[1]: run-credentials-systemd\x2djournald.service.mount: Deactivated successfully.
Sep 11 00:00:56.419830 systemd[1]: initrd-switch-root.service: Deactivated successfully.
Sep 11 00:00:56.419840 systemd[1]: Stopped initrd-switch-root.service - Switch Root.
Sep 11 00:00:56.419850 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1.
Sep 11 00:00:56.419860 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config.
Sep 11 00:00:56.419872 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run.
Sep 11 00:00:56.419883 systemd[1]: Created slice system-getty.slice - Slice /system/getty.
Sep 11 00:00:56.419893 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe.
Sep 11 00:00:56.419902 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty.
Sep 11 00:00:56.419912 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit.
Sep 11 00:00:56.419923 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck.
Sep 11 00:00:56.419933 systemd[1]: Created slice user.slice - User and Session Slice.
Sep 11 00:00:56.419943 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Sep 11 00:00:56.419953 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Sep 11 00:00:56.419965 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch.
Sep 11 00:00:56.419975 systemd[1]: Set up automount boot.automount - Boot partition Automount Point.
Sep 11 00:00:56.419985 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point.
Sep 11 00:00:56.419995 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Sep 11 00:00:56.420005 systemd[1]: Expecting device dev-ttyAMA0.device - /dev/ttyAMA0...
Sep 11 00:00:56.420015 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Sep 11 00:00:56.420026 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Sep 11 00:00:56.420037 systemd[1]: Stopped target initrd-switch-root.target - Switch Root.
Sep 11 00:00:56.420048 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems.
Sep 11 00:00:56.420058 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System.
Sep 11 00:00:56.420068 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes.
Sep 11 00:00:56.420078 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Sep 11 00:00:56.420091 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Sep 11 00:00:56.420101 systemd[1]: Reached target slices.target - Slice Units.
Sep 11 00:00:56.420111 systemd[1]: Reached target swap.target - Swaps.
Sep 11 00:00:56.420121 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes.
Sep 11 00:00:56.420136 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket.
Sep 11 00:00:56.420146 systemd[1]: Listening on systemd-creds.socket - Credential Encryption/Decryption.
Sep 11 00:00:56.420156 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Sep 11 00:00:56.420166 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Sep 11 00:00:56.420177 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Sep 11 00:00:56.420187 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket.
Sep 11 00:00:56.420197 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System...
Sep 11 00:00:56.420206 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System...
Sep 11 00:00:56.420216 systemd[1]: Mounting media.mount - External Media Directory...
Sep 11 00:00:56.420227 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System...
Sep 11 00:00:56.420237 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System...
Sep 11 00:00:56.420247 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp...
Sep 11 00:00:56.420257 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
Sep 11 00:00:56.420267 systemd[1]: Reached target machines.target - Containers.
Sep 11 00:00:56.420277 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files...
Sep 11 00:00:56.420287 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Sep 11 00:00:56.420297 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Sep 11 00:00:56.420307 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs...
Sep 11 00:00:56.420319 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Sep 11 00:00:56.420329 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Sep 11 00:00:56.420339 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Sep 11 00:00:56.420375 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse...
Sep 11 00:00:56.420387 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Sep 11 00:00:56.420397 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
Sep 11 00:00:56.420407 systemd[1]: systemd-fsck-root.service: Deactivated successfully.
Sep 11 00:00:56.420418 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device.
Sep 11 00:00:56.420430 systemd[1]: systemd-fsck-usr.service: Deactivated successfully.
Sep 11 00:00:56.420440 systemd[1]: Stopped systemd-fsck-usr.service.
Sep 11 00:00:56.420451 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Sep 11 00:00:56.420460 kernel: loop: module loaded
Sep 11 00:00:56.420469 systemd[1]: Starting systemd-journald.service - Journal Service...
Sep 11 00:00:56.420479 kernel: fuse: init (API version 7.41)
Sep 11 00:00:56.420489 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Sep 11 00:00:56.420499 kernel: ACPI: bus type drm_connector registered
Sep 11 00:00:56.420509 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Sep 11 00:00:56.420521 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems...
Sep 11 00:00:56.420531 systemd[1]: Starting systemd-udev-load-credentials.service - Load udev Rules from Credentials...
Sep 11 00:00:56.420541 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Sep 11 00:00:56.420551 systemd[1]: verity-setup.service: Deactivated successfully.
Sep 11 00:00:56.420561 systemd[1]: Stopped verity-setup.service.
Sep 11 00:00:56.420574 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System.
Sep 11 00:00:56.420584 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System.
Sep 11 00:00:56.420594 systemd[1]: Mounted media.mount - External Media Directory.
Sep 11 00:00:56.420635 systemd-journald[1155]: Collecting audit messages is disabled.
Sep 11 00:00:56.420665 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System.
Sep 11 00:00:56.420677 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System.
Sep 11 00:00:56.420688 systemd-journald[1155]: Journal started
Sep 11 00:00:56.420712 systemd-journald[1155]: Runtime Journal (/run/log/journal/63268978866b4b428eea11dbe6fd2f9a) is 6M, max 48.5M, 42.4M free.
Sep 11 00:00:56.213674 systemd[1]: Queued start job for default target multi-user.target.
Sep 11 00:00:56.231455 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6.
Sep 11 00:00:56.231857 systemd[1]: systemd-journald.service: Deactivated successfully.
Sep 11 00:00:56.423370 systemd[1]: Started systemd-journald.service - Journal Service.
Sep 11 00:00:56.423930 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp.
Sep 11 00:00:56.425017 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files.
Sep 11 00:00:56.426437 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Sep 11 00:00:56.427773 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Sep 11 00:00:56.427944 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs.
Sep 11 00:00:56.429410 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Sep 11 00:00:56.429572 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Sep 11 00:00:56.430743 systemd[1]: modprobe@drm.service: Deactivated successfully.
Sep 11 00:00:56.430905 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Sep 11 00:00:56.432173 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Sep 11 00:00:56.432336 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Sep 11 00:00:56.433486 systemd[1]: modprobe@fuse.service: Deactivated successfully.
Sep 11 00:00:56.433634 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse.
Sep 11 00:00:56.434686 systemd[1]: modprobe@loop.service: Deactivated successfully.
Sep 11 00:00:56.434850 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Sep 11 00:00:56.436155 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Sep 11 00:00:56.437398 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Sep 11 00:00:56.438758 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems.
Sep 11 00:00:56.439999 systemd[1]: Finished systemd-udev-load-credentials.service - Load udev Rules from Credentials.
Sep 11 00:00:56.453461 systemd[1]: Reached target network-pre.target - Preparation for Network.
Sep 11 00:00:56.456015 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System...
Sep 11 00:00:56.458082 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System...
Sep 11 00:00:56.459114 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
Sep 11 00:00:56.459160 systemd[1]: Reached target local-fs.target - Local File Systems.
Sep 11 00:00:56.461044 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management.
Sep 11 00:00:56.470498 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache...
Sep 11 00:00:56.471461 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Sep 11 00:00:56.472686 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database...
Sep 11 00:00:56.474642 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage...
Sep 11 00:00:56.475695 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Sep 11 00:00:56.478486 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed...
Sep 11 00:00:56.481425 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Sep 11 00:00:56.482639 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Sep 11 00:00:56.485508 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/...
Sep 11 00:00:56.486072 systemd-journald[1155]: Time spent on flushing to /var/log/journal/63268978866b4b428eea11dbe6fd2f9a is 29.625ms for 887 entries.
Sep 11 00:00:56.486072 systemd-journald[1155]: System Journal (/var/log/journal/63268978866b4b428eea11dbe6fd2f9a) is 8M, max 195.6M, 187.6M free.
Sep 11 00:00:56.530568 systemd-journald[1155]: Received client request to flush runtime journal.
Sep 11 00:00:56.530619 kernel: loop0: detected capacity change from 0 to 107312
Sep 11 00:00:56.530637 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher
Sep 11 00:00:56.489308 systemd[1]: Starting systemd-sysusers.service - Create System Users...
Sep 11 00:00:56.493239 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Sep 11 00:00:56.496717 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System.
Sep 11 00:00:56.499631 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System.
Sep 11 00:00:56.501252 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed.
Sep 11 00:00:56.508838 systemd[1]: Reached target first-boot-complete.target - First Boot Complete.
Sep 11 00:00:56.512535 systemd[1]: Starting systemd-machine-id-commit.service - Save Transient machine-id to Disk...
Sep 11 00:00:56.515391 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Sep 11 00:00:56.532800 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage.
Sep 11 00:00:56.539396 kernel: loop1: detected capacity change from 0 to 211168
Sep 11 00:00:56.547471 systemd[1]: Finished systemd-sysusers.service - Create System Users.
Sep 11 00:00:56.550582 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Sep 11 00:00:56.552966 systemd[1]: Finished systemd-machine-id-commit.service - Save Transient machine-id to Disk.
Sep 11 00:00:56.583369 kernel: loop2: detected capacity change from 0 to 138376
Sep 11 00:00:56.584208 systemd-tmpfiles[1215]: ACLs are not supported, ignoring.
Sep 11 00:00:56.584230 systemd-tmpfiles[1215]: ACLs are not supported, ignoring.
Sep 11 00:00:56.591859 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Sep 11 00:00:56.625365 kernel: loop3: detected capacity change from 0 to 107312
Sep 11 00:00:56.633377 kernel: loop4: detected capacity change from 0 to 211168
Sep 11 00:00:56.647782 kernel: loop5: detected capacity change from 0 to 138376
Sep 11 00:00:56.658709 (sd-merge)[1223]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes'.
Sep 11 00:00:56.659097 (sd-merge)[1223]: Merged extensions into '/usr'.
Sep 11 00:00:56.664972 systemd[1]: Reload requested from client PID 1199 ('systemd-sysext') (unit systemd-sysext.service)...
Sep 11 00:00:56.665120 systemd[1]: Reloading...
Sep 11 00:00:56.731388 zram_generator::config[1249]: No configuration found.
Sep 11 00:00:56.828261 ldconfig[1194]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start.
Sep 11 00:00:56.832379 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Sep 11 00:00:56.897389 systemd[1]: etc-machine\x2did.mount: Deactivated successfully.
Sep 11 00:00:56.897958 systemd[1]: Reloading finished in 232 ms.
Sep 11 00:00:56.935395 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache.
Sep 11 00:00:56.936656 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/.
Sep 11 00:00:56.951973 systemd[1]: Starting ensure-sysext.service...
Sep 11 00:00:56.953912 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Sep 11 00:00:56.962249 systemd[1]: Reload requested from client PID 1283 ('systemctl') (unit ensure-sysext.service)...
Sep 11 00:00:56.962270 systemd[1]: Reloading...
Sep 11 00:00:56.974307 systemd-tmpfiles[1284]: /usr/lib/tmpfiles.d/nfs-utils.conf:6: Duplicate line for path "/var/lib/nfs/sm", ignoring.
Sep 11 00:00:56.974359 systemd-tmpfiles[1284]: /usr/lib/tmpfiles.d/nfs-utils.conf:7: Duplicate line for path "/var/lib/nfs/sm.bak", ignoring.
Sep 11 00:00:56.974618 systemd-tmpfiles[1284]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring.
Sep 11 00:00:56.974819 systemd-tmpfiles[1284]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring.
Sep 11 00:00:56.975537 systemd-tmpfiles[1284]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring.
Sep 11 00:00:56.975804 systemd-tmpfiles[1284]: ACLs are not supported, ignoring.
Sep 11 00:00:56.975857 systemd-tmpfiles[1284]: ACLs are not supported, ignoring.
Sep 11 00:00:56.978459 systemd-tmpfiles[1284]: Detected autofs mount point /boot during canonicalization of boot.
Sep 11 00:00:56.978471 systemd-tmpfiles[1284]: Skipping /boot
Sep 11 00:00:56.989564 systemd-tmpfiles[1284]: Detected autofs mount point /boot during canonicalization of boot.
Sep 11 00:00:56.989582 systemd-tmpfiles[1284]: Skipping /boot
Sep 11 00:00:57.011432 zram_generator::config[1311]: No configuration found.
Sep 11 00:00:57.078847 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Sep 11 00:00:57.142256 systemd[1]: Reloading finished in 179 ms.
Sep 11 00:00:57.152168 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database.
Sep 11 00:00:57.157830 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Sep 11 00:00:57.166458 systemd[1]: Starting audit-rules.service - Load Audit Rules...
Sep 11 00:00:57.168672 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs...
Sep 11 00:00:57.170635 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog...
Sep 11 00:00:57.173388 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Sep 11 00:00:57.175740 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Sep 11 00:00:57.179494 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP...
Sep 11 00:00:57.187574 systemd[1]: Starting systemd-userdbd.service - User Database Manager...
Sep 11 00:00:57.190253 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Sep 11 00:00:57.191956 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Sep 11 00:00:57.196615 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Sep 11 00:00:57.198755 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Sep 11 00:00:57.200037 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Sep 11 00:00:57.200170 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Sep 11 00:00:57.202840 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Sep 11 00:00:57.202986 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Sep 11 00:00:57.203065 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Sep 11 00:00:57.206687 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Sep 11 00:00:57.213275 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Sep 11 00:00:57.214444 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Sep 11 00:00:57.214562 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Sep 11 00:00:57.215206 systemd-udevd[1352]: Using default interface naming scheme 'v255'.
Sep 11 00:00:57.215754 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog.
Sep 11 00:00:57.219165 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Sep 11 00:00:57.219376 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Sep 11 00:00:57.220976 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Sep 11 00:00:57.221137 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Sep 11 00:00:57.226796 systemd[1]: modprobe@loop.service: Deactivated successfully.
Sep 11 00:00:57.231518 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Sep 11 00:00:57.235842 systemd[1]: Finished ensure-sysext.service.
Sep 11 00:00:57.237321 systemd[1]: modprobe@drm.service: Deactivated successfully.
Sep 11 00:00:57.238651 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Sep 11 00:00:57.240317 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Sep 11 00:00:57.249405 augenrules[1397]: No rules
Sep 11 00:00:57.251430 systemd[1]: audit-rules.service: Deactivated successfully.
Sep 11 00:00:57.252703 systemd[1]: Finished audit-rules.service - Load Audit Rules.
Sep 11 00:00:57.254408 systemd[1]: Started systemd-userdbd.service - User Database Manager.
Sep 11 00:00:57.261060 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP.
Sep 11 00:00:57.263496 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs.
Sep 11 00:00:57.273117 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Sep 11 00:00:57.274906 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Sep 11 00:00:57.274980 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Sep 11 00:00:57.276909 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization...
Sep 11 00:00:57.279388 systemd[1]: Starting systemd-update-done.service - Update is Completed...
Sep 11 00:00:57.280814 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
Sep 11 00:00:57.304943 systemd[1]: Finished systemd-update-done.service - Update is Completed.
Sep 11 00:00:57.313905 systemd[1]: Condition check resulted in dev-ttyAMA0.device - /dev/ttyAMA0 being skipped.
Sep 11 00:00:57.357124 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
Sep 11 00:00:57.360923 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM...
Sep 11 00:00:57.394375 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM.
Sep 11 00:00:57.424144 systemd-resolved[1350]: Positive Trust Anchors:
Sep 11 00:00:57.424163 systemd-resolved[1350]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Sep 11 00:00:57.424195 systemd-resolved[1350]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Sep 11 00:00:57.430222 systemd-networkd[1423]: lo: Link UP
Sep 11 00:00:57.430233 systemd-networkd[1423]: lo: Gained carrier
Sep 11 00:00:57.430698 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization.
Sep 11 00:00:57.431464 systemd-networkd[1423]: Enumeration completed
Sep 11 00:00:57.432088 systemd-networkd[1423]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Sep 11 00:00:57.432098 systemd-networkd[1423]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Sep 11 00:00:57.433582 systemd[1]: Started systemd-networkd.service - Network Configuration.
Sep 11 00:00:57.433760 systemd-networkd[1423]: eth0: Link UP
Sep 11 00:00:57.433879 systemd-networkd[1423]: eth0: Gained carrier
Sep 11 00:00:57.433898 systemd-networkd[1423]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Sep 11 00:00:57.434654 systemd[1]: Reached target time-set.target - System Time Set.
Sep 11 00:00:57.437014 systemd[1]: Starting systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd...
Sep 11 00:00:57.437970 systemd-resolved[1350]: Defaulting to hostname 'linux'.
Sep 11 00:00:57.439433 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured...
Sep 11 00:00:57.441518 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Sep 11 00:00:57.442638 systemd[1]: Reached target network.target - Network.
Sep 11 00:00:57.443414 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Sep 11 00:00:57.444380 systemd[1]: Reached target sysinit.target - System Initialization.
Sep 11 00:00:57.445260 systemd[1]: Started motdgen.path - Watch for update engine configuration changes.
Sep 11 00:00:57.447565 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data.
Sep 11 00:00:57.448476 systemd-networkd[1423]: eth0: DHCPv4 address 10.0.0.103/16, gateway 10.0.0.1 acquired from 10.0.0.1
Sep 11 00:00:57.448695 systemd[1]: Started logrotate.timer - Daily rotation of log files.
Sep 11 00:00:57.449624 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information..
Sep 11 00:00:57.450014 systemd-timesyncd[1424]: Network configuration changed, trying to establish connection.
Sep 11 00:00:57.450720 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories.
Sep 11 00:00:57.451767 systemd-timesyncd[1424]: Contacted time server 10.0.0.1:123 (10.0.0.1).
Sep 11 00:00:57.451808 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate).
Sep 11 00:00:57.451836 systemd[1]: Reached target paths.target - Path Units.
Sep 11 00:00:57.451859 systemd-timesyncd[1424]: Initial clock synchronization to Thu 2025-09-11 00:00:57.171993 UTC.
Sep 11 00:00:57.452631 systemd[1]: Reached target timers.target - Timer Units.
Sep 11 00:00:57.454170 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket.
Sep 11 00:00:57.456906 systemd[1]: Starting docker.socket - Docker Socket for the API...
Sep 11 00:00:57.460516 systemd[1]: Listening on sshd-unix-local.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_UNIX Local).
Sep 11 00:00:57.461660 systemd[1]: Listening on sshd-vsock.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_VSOCK).
Sep 11 00:00:57.462683 systemd[1]: Reached target ssh-access.target - SSH Access Available.
Sep 11 00:00:57.472005 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket.
Sep 11 00:00:57.474125 systemd[1]: Listening on systemd-hostnamed.socket - Hostname Service Socket.
Sep 11 00:00:57.475866 systemd[1]: Listening on docker.socket - Docker Socket for the API.
Sep 11 00:00:57.476893 systemd[1]: Reached target sockets.target - Socket Units.
Sep 11 00:00:57.477726 systemd[1]: Reached target basic.target - Basic System.
Sep 11 00:00:57.478450 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met.
Sep 11 00:00:57.478477 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met.
Sep 11 00:00:57.482073 systemd[1]: Starting containerd.service - containerd container runtime...
Sep 11 00:00:57.486174 systemd[1]: Starting dbus.service - D-Bus System Message Bus...
Sep 11 00:00:57.487928 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown...
Sep 11 00:00:57.499499 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit...
Sep 11 00:00:57.502595 systemd[1]: Starting extend-filesystems.service - Extend Filesystems...
Sep 11 00:00:57.503535 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment).
Sep 11 00:00:57.505153 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd...
Sep 11 00:00:57.507580 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin...
Sep 11 00:00:57.509780 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline...
Sep 11 00:00:57.515419 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys...
Sep 11 00:00:57.519006 systemd[1]: Starting systemd-logind.service - User Login Management...
Sep 11 00:00:57.521315 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0).
Sep 11 00:00:57.521806 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details.
Sep 11 00:00:57.523984 systemd[1]: Starting update-engine.service - Update Engine...
Sep 11 00:00:57.526579 jq[1460]: false
Sep 11 00:00:57.527308 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition...
Sep 11 00:00:57.531381 systemd[1]: Finished systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd.
Sep 11 00:00:57.534788 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown.
Sep 11 00:00:57.536465 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'.
Sep 11 00:00:57.536651 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped.
Sep 11 00:00:57.540688 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully.
Sep 11 00:00:57.541802 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline.
Sep 11 00:00:57.554185 jq[1474]: true
Sep 11 00:00:57.555183 extend-filesystems[1461]: Found /dev/vda6
Sep 11 00:00:57.559973 extend-filesystems[1461]: Found /dev/vda9
Sep 11 00:00:57.561563 extend-filesystems[1461]: Checking size of /dev/vda9
Sep 11 00:00:57.563815 update_engine[1469]: I20250911 00:00:57.563586 1469 main.cc:92] Flatcar Update Engine starting
Sep 11 00:00:57.564014 (ntainerd)[1489]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR
Sep 11 00:00:57.566665 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Sep 11 00:00:57.568928 systemd[1]: motdgen.service: Deactivated successfully.
Sep 11 00:00:57.570395 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd.
Sep 11 00:00:57.586542 tar[1481]: linux-arm64/LICENSE
Sep 11 00:00:57.586542 tar[1481]: linux-arm64/helm
Sep 11 00:00:57.592896 extend-filesystems[1461]: Resized partition /dev/vda9
Sep 11 00:00:57.594179 dbus-daemon[1456]: [system] SELinux support is enabled
Sep 11 00:00:57.594758 systemd[1]: Started dbus.service - D-Bus System Message Bus.
Sep 11 00:00:57.597538 extend-filesystems[1503]: resize2fs 1.47.2 (1-Jan-2025)
Sep 11 00:00:57.597665 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml).
Sep 11 00:00:57.600677 update_engine[1469]: I20250911 00:00:57.599256 1469 update_check_scheduler.cc:74] Next update check in 4m31s
Sep 11 00:00:57.600708 jq[1496]: true
Sep 11 00:00:57.597689 systemd[1]: Reached target system-config.target - Load system-provided cloud configs.
Sep 11 00:00:57.602157 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url).
Sep 11 00:00:57.602186 systemd[1]: Reached target user-config.target - Load user-provided cloud configs.
Sep 11 00:00:57.605839 systemd[1]: Started update-engine.service - Update Engine.
Sep 11 00:00:57.611508 kernel: EXT4-fs (vda9): resizing filesystem from 553472 to 1864699 blocks
Sep 11 00:00:57.614906 systemd[1]: Started locksmithd.service - Cluster reboot manager.
Sep 11 00:00:57.654670 kernel: EXT4-fs (vda9): resized filesystem to 1864699
Sep 11 00:00:57.669178 extend-filesystems[1503]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required
Sep 11 00:00:57.669178 extend-filesystems[1503]: old_desc_blocks = 1, new_desc_blocks = 1
Sep 11 00:00:57.669178 extend-filesystems[1503]: The filesystem on /dev/vda9 is now 1864699 (4k) blocks long.
Sep 11 00:00:57.676315 extend-filesystems[1461]: Resized filesystem in /dev/vda9
Sep 11 00:00:57.670097 systemd[1]: extend-filesystems.service: Deactivated successfully.
Sep 11 00:00:57.671423 systemd[1]: Finished extend-filesystems.service - Extend Filesystems.
Sep 11 00:00:57.672866 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Sep 11 00:00:57.683012 systemd-logind[1467]: Watching system buttons on /dev/input/event0 (Power Button)
Sep 11 00:00:57.683660 systemd-logind[1467]: New seat seat0.
Sep 11 00:00:57.685264 systemd[1]: Started systemd-logind.service - User Login Management.
Sep 11 00:00:57.696918 bash[1524]: Updated "/home/core/.ssh/authorized_keys"
Sep 11 00:00:57.697815 locksmithd[1507]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot"
Sep 11 00:00:57.701753 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition.
Sep 11 00:00:57.706120 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met.
Sep 11 00:00:57.792385 containerd[1489]: time="2025-09-11T00:00:57Z" level=warning msg="Ignoring unknown key in TOML" column=1 error="strict mode: fields in the document are missing in the target struct" file=/usr/share/containerd/config.toml key=subreaper row=8
Sep 11 00:00:57.793238 containerd[1489]: time="2025-09-11T00:00:57.793197800Z" level=info msg="starting containerd" revision=06b99ca80cdbfbc6cc8bd567021738c9af2b36ce version=v2.0.4
Sep 11 00:00:57.803550 containerd[1489]: time="2025-09-11T00:00:57.803511680Z" level=warning msg="Configuration migrated from version 2, use `containerd config migrate` to avoid migration" t="9.88µs"
Sep 11 00:00:57.803664 containerd[1489]: time="2025-09-11T00:00:57.803636280Z" level=info msg="loading plugin" id=io.containerd.image-verifier.v1.bindir type=io.containerd.image-verifier.v1
Sep 11 00:00:57.803751 containerd[1489]: time="2025-09-11T00:00:57.803734760Z" level=info msg="loading plugin" id=io.containerd.internal.v1.opt type=io.containerd.internal.v1
Sep 11 00:00:57.803963 containerd[1489]: time="2025-09-11T00:00:57.803941520Z" level=info msg="loading plugin" id=io.containerd.warning.v1.deprecations type=io.containerd.warning.v1
Sep 11 00:00:57.804029 containerd[1489]: time="2025-09-11T00:00:57.804015840Z" level=info msg="loading plugin" id=io.containerd.content.v1.content type=io.containerd.content.v1
Sep 11 00:00:57.804111 containerd[1489]: time="2025-09-11T00:00:57.804097360Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1
Sep 11 00:00:57.804233 containerd[1489]: time="2025-09-11T00:00:57.804210520Z" level=info msg="skip loading plugin" error="no scratch file generator: skip plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1
Sep 11 00:00:57.804288 containerd[1489]: time="2025-09-11T00:00:57.804274400Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1
Sep 11 00:00:57.804616 containerd[1489]: time="2025-09-11T00:00:57.804588960Z" level=info msg="skip loading plugin" error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1
Sep 11 00:00:57.804701 containerd[1489]: time="2025-09-11T00:00:57.804685720Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1
Sep 11 00:00:57.804754 containerd[1489]: time="2025-09-11T00:00:57.804740480Z" level=info msg="skip loading plugin" error="devmapper not configured: skip plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1
Sep 11 00:00:57.804816 containerd[1489]: time="2025-09-11T00:00:57.804803520Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.native type=io.containerd.snapshotter.v1
Sep 11 00:00:57.804946 containerd[1489]: time="2025-09-11T00:00:57.804928400Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.overlayfs type=io.containerd.snapshotter.v1
Sep 11 00:00:57.805215 containerd[1489]: time="2025-09-11T00:00:57.805190600Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1
Sep 11 00:00:57.805310 containerd[1489]: time="2025-09-11T00:00:57.805294080Z" level=info msg="skip loading plugin" error="lstat /var/lib/containerd/io.containerd.snapshotter.v1.zfs: no such file or directory: skip plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1
Sep 11 00:00:57.805383 containerd[1489]: time="2025-09-11T00:00:57.805369480Z" level=info msg="loading plugin" id=io.containerd.event.v1.exchange type=io.containerd.event.v1
Sep 11 00:00:57.805505 containerd[1489]: time="2025-09-11T00:00:57.805451480Z" level=info msg="loading plugin" id=io.containerd.monitor.task.v1.cgroups type=io.containerd.monitor.task.v1
Sep 11 00:00:57.805827 containerd[1489]: time="2025-09-11T00:00:57.805810000Z" level=info msg="loading plugin" id=io.containerd.metadata.v1.bolt type=io.containerd.metadata.v1
Sep 11 00:00:57.805958 containerd[1489]: time="2025-09-11T00:00:57.805940040Z" level=info msg="metadata content store policy set" policy=shared
Sep 11 00:00:57.809827 containerd[1489]: time="2025-09-11T00:00:57.809797840Z" level=info msg="loading plugin" id=io.containerd.gc.v1.scheduler type=io.containerd.gc.v1
Sep 11 00:00:57.809971 containerd[1489]: time="2025-09-11T00:00:57.809954080Z" level=info msg="loading plugin" id=io.containerd.differ.v1.walking type=io.containerd.differ.v1
Sep 11 00:00:57.810376 containerd[1489]: time="2025-09-11T00:00:57.810057400Z" level=info msg="loading plugin" id=io.containerd.lease.v1.manager type=io.containerd.lease.v1
Sep 11 00:00:57.810376 containerd[1489]: time="2025-09-11T00:00:57.810077920Z" level=info msg="loading plugin" id=io.containerd.service.v1.containers-service type=io.containerd.service.v1
Sep 11 00:00:57.810376 containerd[1489]: time="2025-09-11T00:00:57.810093120Z" level=info msg="loading plugin" id=io.containerd.service.v1.content-service type=io.containerd.service.v1
Sep 11 00:00:57.810376 containerd[1489]: time="2025-09-11T00:00:57.810112880Z" level=info msg="loading plugin" id=io.containerd.service.v1.diff-service type=io.containerd.service.v1
Sep 11 00:00:57.810376 containerd[1489]: time="2025-09-11T00:00:57.810139760Z" level=info msg="loading plugin" id=io.containerd.service.v1.images-service type=io.containerd.service.v1
Sep 11 00:00:57.810376 containerd[1489]: time="2025-09-11T00:00:57.810152640Z" level=info msg="loading plugin" id=io.containerd.service.v1.introspection-service type=io.containerd.service.v1
Sep 11 00:00:57.810376 containerd[1489]: time="2025-09-11T00:00:57.810164640Z" level=info msg="loading plugin" id=io.containerd.service.v1.namespaces-service type=io.containerd.service.v1
Sep 11 00:00:57.810376 containerd[1489]: time="2025-09-11T00:00:57.810174480Z" level=info msg="loading plugin" id=io.containerd.service.v1.snapshots-service type=io.containerd.service.v1
Sep 11 00:00:57.810376 containerd[1489]: time="2025-09-11T00:00:57.810183480Z" level=info msg="loading plugin" id=io.containerd.shim.v1.manager type=io.containerd.shim.v1
Sep 11 00:00:57.810376 containerd[1489]: time="2025-09-11T00:00:57.810204800Z" level=info msg="loading plugin" id=io.containerd.runtime.v2.task type=io.containerd.runtime.v2
Sep 11 00:00:57.810376 containerd[1489]: time="2025-09-11T00:00:57.810323120Z" level=info msg="loading plugin" id=io.containerd.service.v1.tasks-service type=io.containerd.service.v1
Sep 11 00:00:57.810622 containerd[1489]: time="2025-09-11T00:00:57.810601600Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.containers type=io.containerd.grpc.v1
Sep 11 00:00:57.810698 containerd[1489]: time="2025-09-11T00:00:57.810682800Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.content type=io.containerd.grpc.v1
Sep 11 00:00:57.810751 containerd[1489]: time="2025-09-11T00:00:57.810738240Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.diff type=io.containerd.grpc.v1
Sep 11 00:00:57.810803 containerd[1489]: time="2025-09-11T00:00:57.810790200Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.events type=io.containerd.grpc.v1
Sep 11 00:00:57.810864 containerd[1489]: time="2025-09-11T00:00:57.810849240Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.images type=io.containerd.grpc.v1
Sep 11 00:00:57.810915 containerd[1489]: time="2025-09-11T00:00:57.810903920Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.introspection type=io.containerd.grpc.v1
Sep 11 00:00:57.810971 containerd[1489]: time="2025-09-11T00:00:57.810958880Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.leases type=io.containerd.grpc.v1
Sep 11 00:00:57.811027 containerd[1489]: time="2025-09-11T00:00:57.811014400Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.namespaces type=io.containerd.grpc.v1
Sep 11 00:00:57.811082 containerd[1489]: time="2025-09-11T00:00:57.811068480Z" level=info msg="loading plugin" id=io.containerd.sandbox.store.v1.local type=io.containerd.sandbox.store.v1
Sep 11 00:00:57.811136 containerd[1489]: time="2025-09-11T00:00:57.811123760Z" level=info msg="loading plugin" id=io.containerd.cri.v1.images type=io.containerd.cri.v1
Sep 11 00:00:57.811400 containerd[1489]: time="2025-09-11T00:00:57.811381880Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\" for snapshotter \"overlayfs\""
Sep 11 00:00:57.811465 containerd[1489]: time="2025-09-11T00:00:57.811453800Z" level=info msg="Start snapshots syncer"
Sep 11 00:00:57.811541 containerd[1489]: time="2025-09-11T00:00:57.811525640Z" level=info msg="loading plugin" id=io.containerd.cri.v1.runtime type=io.containerd.cri.v1
Sep 11 00:00:57.812860 containerd[1489]: time="2025-09-11T00:00:57.812812640Z" level=info msg="starting cri plugin" config="{\"containerd\":{\"defaultRuntimeName\":\"runc\",\"runtimes\":{\"runc\":{\"runtimeType\":\"io.containerd.runc.v2\",\"runtimePath\":\"\",\"PodAnnotations\":null,\"ContainerAnnotations\":null,\"options\":{\"BinaryName\":\"\",\"CriuImagePath\":\"\",\"CriuWorkPath\":\"\",\"IoGid\":0,\"IoUid\":0,\"NoNewKeyring\":false,\"Root\":\"\",\"ShimCgroup\":\"\",\"SystemdCgroup\":true},\"privileged_without_host_devices\":false,\"privileged_without_host_devices_all_devices_allowed\":false,\"baseRuntimeSpec\":\"\",\"cniConfDir\":\"\",\"cniMaxConfNum\":0,\"snapshotter\":\"\",\"sandboxer\":\"podsandbox\",\"io_type\":\"\"}},\"ignoreBlockIONotEnabledErrors\":false,\"ignoreRdtNotEnabledErrors\":false},\"cni\":{\"binDir\":\"/opt/cni/bin\",\"confDir\":\"/etc/cni/net.d\",\"maxConfNum\":1,\"setupSerially\":false,\"confTemplate\":\"\",\"ipPref\":\"\",\"useInternalLoopback\":false},\"enableSelinux\":true,\"selinuxCategoryRange\":1024,\"maxContainerLogSize\":16384,\"disableApparmor\":false,\"restrictOOMScoreAdj\":false,\"disableProcMount\":false,\"unsetSeccompProfile\":\"\",\"tolerateMissingHugetlbController\":true,\"disableHugetlbController\":true,\"device_ownership_from_security_context\":false,\"ignoreImageDefinedVolumes\":false,\"netnsMountsUnderStateDir\":false,\"enableUnprivilegedPorts\":true,\"enableUnprivilegedICMP\":true,\"enableCDI\":true,\"cdiSpecDirs\":[\"/etc/cdi\",\"/var/run/cdi\"],\"drainExecSyncIOTimeout\":\"0s\",\"ignoreDeprecationWarnings\":null,\"containerdRootDir\":\"/var/lib/containerd\",\"containerdEndpoint\":\"/run/containerd/containerd.sock\",\"rootDir\":\"/var/lib/containerd/io.containerd.grpc.v1.cri\",\"stateDir\":\"/run/containerd/io.containerd.grpc.v1.cri\"}"
Sep 11 00:00:57.813061 containerd[1489]: time="2025-09-11T00:00:57.813040000Z" level=info msg="loading plugin" id=io.containerd.podsandbox.controller.v1.podsandbox type=io.containerd.podsandbox.controller.v1
Sep 11 00:00:57.813197 containerd[1489]: time="2025-09-11T00:00:57.813179880Z" level=info msg="loading plugin" id=io.containerd.sandbox.controller.v1.shim type=io.containerd.sandbox.controller.v1
Sep 11 00:00:57.813409 containerd[1489]: time="2025-09-11T00:00:57.813386560Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandbox-controllers type=io.containerd.grpc.v1
Sep 11 00:00:57.813488 containerd[1489]: time="2025-09-11T00:00:57.813474480Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandboxes type=io.containerd.grpc.v1
Sep 11 00:00:57.813542 containerd[1489]: time="2025-09-11T00:00:57.813529520Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.snapshots type=io.containerd.grpc.v1
Sep 11 00:00:57.813593 containerd[1489]: time="2025-09-11T00:00:57.813581480Z" level=info msg="loading plugin" id=io.containerd.streaming.v1.manager type=io.containerd.streaming.v1
Sep 11 00:00:57.813885 containerd[1489]: time="2025-09-11T00:00:57.813863600Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.streaming type=io.containerd.grpc.v1
Sep 11 00:00:57.813955 containerd[1489]: time="2025-09-11T00:00:57.813941000Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.tasks type=io.containerd.grpc.v1
Sep 11 00:00:57.814007 containerd[1489]: time="2025-09-11T00:00:57.813994640Z" level=info msg="loading plugin" id=io.containerd.transfer.v1.local type=io.containerd.transfer.v1
Sep 11 00:00:57.814086 containerd[1489]: time="2025-09-11T00:00:57.814071920Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.transfer type=io.containerd.grpc.v1
Sep 11 00:00:57.814141 containerd[1489]: time="2025-09-11T00:00:57.814128240Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.version type=io.containerd.grpc.v1
Sep 11 00:00:57.814197 containerd[1489]: time="2025-09-11T00:00:57.814184240Z" level=info msg="loading plugin" id=io.containerd.monitor.container.v1.restart type=io.containerd.monitor.container.v1
Sep 11 00:00:57.814307 containerd[1489]: time="2025-09-11T00:00:57.814290880Z" level=info msg="loading plugin" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1
Sep 11 00:00:57.814397 containerd[1489]: time="2025-09-11T00:00:57.814380800Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1
Sep 11 00:00:57.814448 containerd[1489]: time="2025-09-11T00:00:57.814436280Z" level=info msg="loading plugin" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1
Sep 11 00:00:57.814498 containerd[1489]: time="2025-09-11T00:00:57.814483600Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1
Sep 11 00:00:57.814544 containerd[1489]: time="2025-09-11T00:00:57.814532360Z" level=info msg="loading plugin" id=io.containerd.ttrpc.v1.otelttrpc type=io.containerd.ttrpc.v1
Sep 11 00:00:57.814601 containerd[1489]: time="2025-09-11T00:00:57.814586840Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.healthcheck type=io.containerd.grpc.v1
Sep 11 00:00:57.814665 containerd[1489]: time="2025-09-11T00:00:57.814649480Z" level=info msg="loading plugin" id=io.containerd.nri.v1.nri type=io.containerd.nri.v1
Sep 11 00:00:57.814804 containerd[1489]: time="2025-09-11T00:00:57.814790360Z" level=info msg="runtime interface created"
Sep 11 00:00:57.814860 containerd[1489]: time="2025-09-11T00:00:57.814837720Z" level=info msg="created NRI interface"
Sep 11 00:00:57.814910 containerd[1489]: time="2025-09-11T00:00:57.814898840Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.cri type=io.containerd.grpc.v1
Sep 11 00:00:57.814961 containerd[1489]: time="2025-09-11T00:00:57.814950240Z" level=info msg="Connect containerd service"
Sep 11 00:00:57.815037 containerd[1489]: time="2025-09-11T00:00:57.815023120Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this"
Sep 11 00:00:57.815951 containerd[1489]: time="2025-09-11T00:00:57.815919240Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
Sep 11 00:00:57.896927 containerd[1489]: time="2025-09-11T00:00:57.896749200Z" level=info msg="Start subscribing containerd event"
Sep 11 00:00:57.896927 containerd[1489]: time="2025-09-11T00:00:57.896829560Z" level=info msg="Start recovering state"
Sep 11 00:00:57.896927 containerd[1489]: time="2025-09-11T00:00:57.896918040Z" level=info msg="Start event monitor"
Sep 11 00:00:57.897045 containerd[1489]: time="2025-09-11T00:00:57.896933880Z" level=info msg="Start cni network conf syncer for default"
Sep 11 00:00:57.897045 containerd[1489]: time="2025-09-11T00:00:57.896941760Z" level=info msg="Start streaming server"
Sep 11 00:00:57.897045 containerd[1489]: time="2025-09-11T00:00:57.896951400Z" level=info msg="Registered namespace \"k8s.io\" with NRI"
Sep 11 00:00:57.897045 containerd[1489]: time="2025-09-11T00:00:57.896958560Z" level=info msg="runtime interface starting up..."
Sep 11 00:00:57.897045 containerd[1489]: time="2025-09-11T00:00:57.896963800Z" level=info msg="starting plugins..."
Sep 11 00:00:57.897045 containerd[1489]: time="2025-09-11T00:00:57.896975160Z" level=info msg="Synchronizing NRI (plugin) with current runtime state"
Sep 11 00:00:57.897426 containerd[1489]: time="2025-09-11T00:00:57.897398840Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc
Sep 11 00:00:57.897531 containerd[1489]: time="2025-09-11T00:00:57.897519200Z" level=info msg=serving... address=/run/containerd/containerd.sock
Sep 11 00:00:57.897750 containerd[1489]: time="2025-09-11T00:00:57.897724120Z" level=info msg="containerd successfully booted in 0.105714s"
Sep 11 00:00:57.897839 systemd[1]: Started containerd.service - containerd container runtime.
Sep 11 00:00:58.036121 tar[1481]: linux-arm64/README.md
Sep 11 00:00:58.052552 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin.
Sep 11 00:00:58.779064 sshd_keygen[1482]: ssh-keygen: generating new host keys: RSA ECDSA ED25519
Sep 11 00:00:58.798419 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys.
Sep 11 00:00:58.800848 systemd[1]: Starting issuegen.service - Generate /run/issue...
Sep 11 00:00:58.817703 systemd[1]: issuegen.service: Deactivated successfully.
Sep 11 00:00:58.819387 systemd[1]: Finished issuegen.service - Generate /run/issue.
Sep 11 00:00:58.821762 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions...
Sep 11 00:00:58.842911 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions.
Sep 11 00:00:58.845461 systemd[1]: Started getty@tty1.service - Getty on tty1.
Sep 11 00:00:58.847227 systemd[1]: Started serial-getty@ttyAMA0.service - Serial Getty on ttyAMA0.
Sep 11 00:00:58.848415 systemd[1]: Reached target getty.target - Login Prompts.
Sep 11 00:00:59.259495 systemd-networkd[1423]: eth0: Gained IPv6LL
Sep 11 00:00:59.261885 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured.
Sep 11 00:00:59.263337 systemd[1]: Reached target network-online.target - Network is Online.
Sep 11 00:00:59.265482 systemd[1]: Starting coreos-metadata.service - QEMU metadata agent...
Sep 11 00:00:59.267369 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Sep 11 00:00:59.269196 systemd[1]: Starting nvidia.service - NVIDIA Configure Service...
Sep 11 00:00:59.290445 systemd[1]: coreos-metadata.service: Deactivated successfully.
Sep 11 00:00:59.290681 systemd[1]: Finished coreos-metadata.service - QEMU metadata agent.
Sep 11 00:00:59.292104 systemd[1]: Finished nvidia.service - NVIDIA Configure Service.
Sep 11 00:00:59.293912 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met.
Sep 11 00:00:59.835132 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Sep 11 00:00:59.836739 systemd[1]: Reached target multi-user.target - Multi-User System.
Sep 11 00:00:59.838991 (kubelet)[1598]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Sep 11 00:00:59.841513 systemd[1]: Startup finished in 2.009s (kernel) + 5.246s (initrd) + 4.045s (userspace) = 11.301s.
Sep 11 00:01:00.191156 kubelet[1598]: E0911 00:01:00.191040 1598 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Sep 11 00:01:00.193424 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Sep 11 00:01:00.193552 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Sep 11 00:01:00.193871 systemd[1]: kubelet.service: Consumed 768ms CPU time, 258.9M memory peak.
Sep 11 00:01:03.738710 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd.
Sep 11 00:01:03.739796 systemd[1]: Started sshd@0-10.0.0.103:22-10.0.0.1:33892.service - OpenSSH per-connection server daemon (10.0.0.1:33892).
Sep 11 00:01:03.808810 sshd[1612]: Accepted publickey for core from 10.0.0.1 port 33892 ssh2: RSA SHA256:lsmhoLsJ6VkHSnmB7JrdlCWHjclEQMgNfFd+nspwIAE
Sep 11 00:01:03.810656 sshd-session[1612]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 11 00:01:03.816471 systemd[1]: Created slice user-500.slice - User Slice of UID 500.
Sep 11 00:01:03.817662 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500...
Sep 11 00:01:03.822939 systemd-logind[1467]: New session 1 of user core.
Sep 11 00:01:03.841321 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500.
Sep 11 00:01:03.844109 systemd[1]: Starting user@500.service - User Manager for UID 500...
Sep 11 00:01:03.860598 (systemd)[1616]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0)
Sep 11 00:01:03.862820 systemd-logind[1467]: New session c1 of user core.
Sep 11 00:01:03.980579 systemd[1616]: Queued start job for default target default.target.
Sep 11 00:01:04.001293 systemd[1616]: Created slice app.slice - User Application Slice.
Sep 11 00:01:04.001323 systemd[1616]: Reached target paths.target - Paths.
Sep 11 00:01:04.001376 systemd[1616]: Reached target timers.target - Timers.
Sep 11 00:01:04.002511 systemd[1616]: Starting dbus.socket - D-Bus User Message Bus Socket...
Sep 11 00:01:04.010807 systemd[1616]: Listening on dbus.socket - D-Bus User Message Bus Socket.
Sep 11 00:01:04.010860 systemd[1616]: Reached target sockets.target - Sockets.
Sep 11 00:01:04.010895 systemd[1616]: Reached target basic.target - Basic System.
Sep 11 00:01:04.010927 systemd[1616]: Reached target default.target - Main User Target.
Sep 11 00:01:04.010952 systemd[1616]: Startup finished in 142ms.
Sep 11 00:01:04.011118 systemd[1]: Started user@500.service - User Manager for UID 500.
Sep 11 00:01:04.012402 systemd[1]: Started session-1.scope - Session 1 of User core.
Sep 11 00:01:04.076646 systemd[1]: Started sshd@1-10.0.0.103:22-10.0.0.1:33902.service - OpenSSH per-connection server daemon (10.0.0.1:33902).
Sep 11 00:01:04.146134 sshd[1627]: Accepted publickey for core from 10.0.0.1 port 33902 ssh2: RSA SHA256:lsmhoLsJ6VkHSnmB7JrdlCWHjclEQMgNfFd+nspwIAE
Sep 11 00:01:04.147599 sshd-session[1627]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 11 00:01:04.152505 systemd-logind[1467]: New session 2 of user core.
Sep 11 00:01:04.159554 systemd[1]: Started session-2.scope - Session 2 of User core.
Sep 11 00:01:04.210334 sshd[1629]: Connection closed by 10.0.0.1 port 33902
Sep 11 00:01:04.210653 sshd-session[1627]: pam_unix(sshd:session): session closed for user core
Sep 11 00:01:04.220241 systemd[1]: sshd@1-10.0.0.103:22-10.0.0.1:33902.service: Deactivated successfully.
Sep 11 00:01:04.221724 systemd[1]: session-2.scope: Deactivated successfully.
Sep 11 00:01:04.223376 systemd-logind[1467]: Session 2 logged out. Waiting for processes to exit.
Sep 11 00:01:04.224576 systemd[1]: Started sshd@2-10.0.0.103:22-10.0.0.1:33916.service - OpenSSH per-connection server daemon (10.0.0.1:33916).
Sep 11 00:01:04.225456 systemd-logind[1467]: Removed session 2.
Sep 11 00:01:04.284030 sshd[1635]: Accepted publickey for core from 10.0.0.1 port 33916 ssh2: RSA SHA256:lsmhoLsJ6VkHSnmB7JrdlCWHjclEQMgNfFd+nspwIAE
Sep 11 00:01:04.285206 sshd-session[1635]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 11 00:01:04.289336 systemd-logind[1467]: New session 3 of user core.
Sep 11 00:01:04.298500 systemd[1]: Started session-3.scope - Session 3 of User core.
Sep 11 00:01:04.346027 sshd[1637]: Connection closed by 10.0.0.1 port 33916
Sep 11 00:01:04.346324 sshd-session[1635]: pam_unix(sshd:session): session closed for user core
Sep 11 00:01:04.355250 systemd[1]: sshd@2-10.0.0.103:22-10.0.0.1:33916.service: Deactivated successfully.
Sep 11 00:01:04.356769 systemd[1]: session-3.scope: Deactivated successfully.
Sep 11 00:01:04.357400 systemd-logind[1467]: Session 3 logged out. Waiting for processes to exit.
Sep 11 00:01:04.360082 systemd[1]: Started sshd@3-10.0.0.103:22-10.0.0.1:33926.service - OpenSSH per-connection server daemon (10.0.0.1:33926).
Sep 11 00:01:04.360602 systemd-logind[1467]: Removed session 3.
Sep 11 00:01:04.417721 sshd[1643]: Accepted publickey for core from 10.0.0.1 port 33926 ssh2: RSA SHA256:lsmhoLsJ6VkHSnmB7JrdlCWHjclEQMgNfFd+nspwIAE
Sep 11 00:01:04.419022 sshd-session[1643]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 11 00:01:04.422706 systemd-logind[1467]: New session 4 of user core.
Sep 11 00:01:04.433530 systemd[1]: Started session-4.scope - Session 4 of User core.
Sep 11 00:01:04.485307 sshd[1645]: Connection closed by 10.0.0.1 port 33926
Sep 11 00:01:04.485734 sshd-session[1643]: pam_unix(sshd:session): session closed for user core
Sep 11 00:01:04.497198 systemd[1]: sshd@3-10.0.0.103:22-10.0.0.1:33926.service: Deactivated successfully.
Sep 11 00:01:04.499582 systemd[1]: session-4.scope: Deactivated successfully.
Sep 11 00:01:04.500179 systemd-logind[1467]: Session 4 logged out. Waiting for processes to exit.
Sep 11 00:01:04.502375 systemd[1]: Started sshd@4-10.0.0.103:22-10.0.0.1:33940.service - OpenSSH per-connection server daemon (10.0.0.1:33940).
Sep 11 00:01:04.502966 systemd-logind[1467]: Removed session 4.
Sep 11 00:01:04.548183 sshd[1651]: Accepted publickey for core from 10.0.0.1 port 33940 ssh2: RSA SHA256:lsmhoLsJ6VkHSnmB7JrdlCWHjclEQMgNfFd+nspwIAE
Sep 11 00:01:04.549850 sshd-session[1651]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 11 00:01:04.553392 systemd-logind[1467]: New session 5 of user core.
Sep 11 00:01:04.568506 systemd[1]: Started session-5.scope - Session 5 of User core.
Sep 11 00:01:04.622771 sudo[1654]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1
Sep 11 00:01:04.623054 sudo[1654]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Sep 11 00:01:04.637932 sudo[1654]: pam_unix(sudo:session): session closed for user root
Sep 11 00:01:04.639316 sshd[1653]: Connection closed by 10.0.0.1 port 33940
Sep 11 00:01:04.639758 sshd-session[1651]: pam_unix(sshd:session): session closed for user core
Sep 11 00:01:04.650293 systemd[1]: sshd@4-10.0.0.103:22-10.0.0.1:33940.service: Deactivated successfully.
Sep 11 00:01:04.651709 systemd[1]: session-5.scope: Deactivated successfully.
Sep 11 00:01:04.652406 systemd-logind[1467]: Session 5 logged out. Waiting for processes to exit.
Sep 11 00:01:04.654477 systemd[1]: Started sshd@5-10.0.0.103:22-10.0.0.1:33942.service - OpenSSH per-connection server daemon (10.0.0.1:33942).
Sep 11 00:01:04.655838 systemd-logind[1467]: Removed session 5.
Sep 11 00:01:04.719440 sshd[1660]: Accepted publickey for core from 10.0.0.1 port 33942 ssh2: RSA SHA256:lsmhoLsJ6VkHSnmB7JrdlCWHjclEQMgNfFd+nspwIAE
Sep 11 00:01:04.720830 sshd-session[1660]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 11 00:01:04.724600 systemd-logind[1467]: New session 6 of user core.
Sep 11 00:01:04.736546 systemd[1]: Started session-6.scope - Session 6 of User core.
Sep 11 00:01:04.785893 sudo[1664]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules
Sep 11 00:01:04.786149 sudo[1664]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Sep 11 00:01:04.860465 sudo[1664]: pam_unix(sudo:session): session closed for user root
Sep 11 00:01:04.865417 sudo[1663]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules
Sep 11 00:01:04.865663 sudo[1663]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Sep 11 00:01:04.874797 systemd[1]: Starting audit-rules.service - Load Audit Rules...
Sep 11 00:01:04.910758 augenrules[1686]: No rules
Sep 11 00:01:04.911872 systemd[1]: audit-rules.service: Deactivated successfully.
Sep 11 00:01:04.912063 systemd[1]: Finished audit-rules.service - Load Audit Rules.
Sep 11 00:01:04.913515 sudo[1663]: pam_unix(sudo:session): session closed for user root
Sep 11 00:01:04.914777 sshd[1662]: Connection closed by 10.0.0.1 port 33942
Sep 11 00:01:04.915082 sshd-session[1660]: pam_unix(sshd:session): session closed for user core
Sep 11 00:01:04.926315 systemd[1]: sshd@5-10.0.0.103:22-10.0.0.1:33942.service: Deactivated successfully.
Sep 11 00:01:04.927931 systemd[1]: session-6.scope: Deactivated successfully.
Sep 11 00:01:04.928718 systemd-logind[1467]: Session 6 logged out. Waiting for processes to exit.
Sep 11 00:01:04.931736 systemd[1]: Started sshd@6-10.0.0.103:22-10.0.0.1:33948.service - OpenSSH per-connection server daemon (10.0.0.1:33948).
Sep 11 00:01:04.932470 systemd-logind[1467]: Removed session 6.
Sep 11 00:01:04.989704 sshd[1695]: Accepted publickey for core from 10.0.0.1 port 33948 ssh2: RSA SHA256:lsmhoLsJ6VkHSnmB7JrdlCWHjclEQMgNfFd+nspwIAE
Sep 11 00:01:04.990918 sshd-session[1695]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 11 00:01:04.995390 systemd-logind[1467]: New session 7 of user core.
Sep 11 00:01:05.007497 systemd[1]: Started session-7.scope - Session 7 of User core.
Sep 11 00:01:05.058600 sudo[1698]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh
Sep 11 00:01:05.059227 sudo[1698]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Sep 11 00:01:05.351622 systemd[1]: Starting docker.service - Docker Application Container Engine...
Sep 11 00:01:05.369671 (dockerd)[1719]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU
Sep 11 00:01:05.580891 dockerd[1719]: time="2025-09-11T00:01:05.580824287Z" level=info msg="Starting up"
Sep 11 00:01:05.582123 dockerd[1719]: time="2025-09-11T00:01:05.582098654Z" level=info msg="OTEL tracing is not configured, using no-op tracer provider"
Sep 11 00:01:05.625200 dockerd[1719]: time="2025-09-11T00:01:05.624946140Z" level=info msg="Loading containers: start."
Sep 11 00:01:05.635365 kernel: Initializing XFRM netlink socket
Sep 11 00:01:05.818851 systemd-networkd[1423]: docker0: Link UP
Sep 11 00:01:05.821683 dockerd[1719]: time="2025-09-11T00:01:05.821623408Z" level=info msg="Loading containers: done."
Sep 11 00:01:05.833836 dockerd[1719]: time="2025-09-11T00:01:05.833783510Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2
Sep 11 00:01:05.833973 dockerd[1719]: time="2025-09-11T00:01:05.833877445Z" level=info msg="Docker daemon" commit=bbd0a17ccc67e48d4a69393287b7fcc4f0578683 containerd-snapshotter=false storage-driver=overlay2 version=28.0.1
Sep 11 00:01:05.833998 dockerd[1719]: time="2025-09-11T00:01:05.833974180Z" level=info msg="Initializing buildkit"
Sep 11 00:01:05.855302 dockerd[1719]: time="2025-09-11T00:01:05.855262407Z" level=info msg="Completed buildkit initialization"
Sep 11 00:01:05.861788 dockerd[1719]: time="2025-09-11T00:01:05.861735199Z" level=info msg="Daemon has completed initialization"
Sep 11 00:01:05.862755 dockerd[1719]: time="2025-09-11T00:01:05.861829725Z" level=info msg="API listen on /run/docker.sock"
Sep 11 00:01:05.862391 systemd[1]: Started docker.service - Docker Application Container Engine.
Sep 11 00:01:06.432877 containerd[1489]: time="2025-09-11T00:01:06.432788139Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.33.5\""
Sep 11 00:01:06.992071 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2181580159.mount: Deactivated successfully.
Sep 11 00:01:08.020833 containerd[1489]: time="2025-09-11T00:01:08.020778223Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.33.5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 11 00:01:08.021493 containerd[1489]: time="2025-09-11T00:01:08.021437829Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.33.5: active requests=0, bytes read=27390230"
Sep 11 00:01:08.022082 containerd[1489]: time="2025-09-11T00:01:08.022054236Z" level=info msg="ImageCreate event name:\"sha256:6a7fd297b49102b08dc3d8d4fd7f1538bcf21d3131eae8bf62ba26ce3283237f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 11 00:01:08.024350 containerd[1489]: time="2025-09-11T00:01:08.024310047Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:1b9c6c00bc1fe86860e72efb8e4148f9e436a132eba4ca636ca4f48d61d6dfb4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 11 00:01:08.025359 containerd[1489]: time="2025-09-11T00:01:08.025298406Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.33.5\" with image id \"sha256:6a7fd297b49102b08dc3d8d4fd7f1538bcf21d3131eae8bf62ba26ce3283237f\", repo tag \"registry.k8s.io/kube-apiserver:v1.33.5\", repo digest \"registry.k8s.io/kube-apiserver@sha256:1b9c6c00bc1fe86860e72efb8e4148f9e436a132eba4ca636ca4f48d61d6dfb4\", size \"27386827\" in 1.592470282s"
Sep 11 00:01:08.025359 containerd[1489]: time="2025-09-11T00:01:08.025339703Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.33.5\" returns image reference \"sha256:6a7fd297b49102b08dc3d8d4fd7f1538bcf21d3131eae8bf62ba26ce3283237f\""
Sep 11 00:01:08.026460 containerd[1489]: time="2025-09-11T00:01:08.026431067Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.33.5\""
Sep 11 00:01:09.079085 containerd[1489]: time="2025-09-11T00:01:09.079011206Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.33.5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 11 00:01:09.080256 containerd[1489]: time="2025-09-11T00:01:09.080212316Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.33.5: active requests=0, bytes read=23547919"
Sep 11 00:01:09.081658 containerd[1489]: time="2025-09-11T00:01:09.081627968Z" level=info msg="ImageCreate event name:\"sha256:2dd4c25a937008b7b8a6cdca70d816403b5078b51550926721b7a7762139cd23\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 11 00:01:09.083950 containerd[1489]: time="2025-09-11T00:01:09.083915477Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:1082a6ab67fb46397314dd36b36cb197ba4a4c5365033e9ad22bc7edaaaabd5c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 11 00:01:09.084959 containerd[1489]: time="2025-09-11T00:01:09.084931288Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.33.5\" with image id \"sha256:2dd4c25a937008b7b8a6cdca70d816403b5078b51550926721b7a7762139cd23\", repo tag \"registry.k8s.io/kube-controller-manager:v1.33.5\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:1082a6ab67fb46397314dd36b36cb197ba4a4c5365033e9ad22bc7edaaaabd5c\", size \"25135832\" in 1.058469893s"
Sep 11 00:01:09.085010 containerd[1489]: time="2025-09-11T00:01:09.084964022Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.33.5\" returns image reference \"sha256:2dd4c25a937008b7b8a6cdca70d816403b5078b51550926721b7a7762139cd23\""
Sep 11 00:01:09.085727 containerd[1489]: time="2025-09-11T00:01:09.085704821Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.33.5\""
Sep 11 00:01:10.078548 containerd[1489]: time="2025-09-11T00:01:10.078501337Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.33.5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 11 00:01:10.079131 containerd[1489]: time="2025-09-11T00:01:10.079032019Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.33.5: active requests=0, bytes read=18295979"
Sep 11 00:01:10.080117 containerd[1489]: time="2025-09-11T00:01:10.080071896Z" level=info msg="ImageCreate event name:\"sha256:5e600beaed8620718e0650dd2721266869ce1d737488c004a869333273e6ec15\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 11 00:01:10.082947 containerd[1489]: time="2025-09-11T00:01:10.082884618Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:3e7b57c9d9f06b77f0064e5be7f3df61e0151101160acd5fdecce911df28a189\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 11 00:01:10.084159 containerd[1489]: time="2025-09-11T00:01:10.084117448Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.33.5\" with image id \"sha256:5e600beaed8620718e0650dd2721266869ce1d737488c004a869333273e6ec15\", repo tag \"registry.k8s.io/kube-scheduler:v1.33.5\", repo digest \"registry.k8s.io/kube-scheduler@sha256:3e7b57c9d9f06b77f0064e5be7f3df61e0151101160acd5fdecce911df28a189\", size \"19883910\" in 998.378057ms"
Sep 11 00:01:10.084159 containerd[1489]: time="2025-09-11T00:01:10.084159193Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.33.5\" returns image reference \"sha256:5e600beaed8620718e0650dd2721266869ce1d737488c004a869333273e6ec15\""
Sep 11 00:01:10.084835 containerd[1489]: time="2025-09-11T00:01:10.084760693Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.33.5\""
Sep 11 00:01:10.443964 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1.
Sep 11 00:01:10.445768 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Sep 11 00:01:10.623069 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Sep 11 00:01:10.626888 (kubelet)[2005]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Sep 11 00:01:10.673838 kubelet[2005]: E0911 00:01:10.673775 2005 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Sep 11 00:01:10.677120 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Sep 11 00:01:10.677272 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Sep 11 00:01:10.677579 systemd[1]: kubelet.service: Consumed 150ms CPU time, 107M memory peak.
Sep 11 00:01:11.102066 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3817902280.mount: Deactivated successfully.
Sep 11 00:01:11.462921 containerd[1489]: time="2025-09-11T00:01:11.462788185Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.33.5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 11 00:01:11.463976 containerd[1489]: time="2025-09-11T00:01:11.463926220Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.33.5: active requests=0, bytes read=28240108"
Sep 11 00:01:11.464948 containerd[1489]: time="2025-09-11T00:01:11.464913786Z" level=info msg="ImageCreate event name:\"sha256:021a8d45ab0c346664e47d95595ff5180ce90a22a681ea27904c65ae90788e70\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 11 00:01:11.467040 containerd[1489]: time="2025-09-11T00:01:11.466992478Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:71445ec84ad98bd52a7784865a9d31b1b50b56092d3f7699edc39eefd71befe1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 11 00:01:11.467486 containerd[1489]: time="2025-09-11T00:01:11.467458036Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.33.5\" with image id \"sha256:021a8d45ab0c346664e47d95595ff5180ce90a22a681ea27904c65ae90788e70\", repo tag \"registry.k8s.io/kube-proxy:v1.33.5\", repo digest \"registry.k8s.io/kube-proxy@sha256:71445ec84ad98bd52a7784865a9d31b1b50b56092d3f7699edc39eefd71befe1\", size \"28239125\" in 1.382652333s"
Sep 11 00:01:11.467486 containerd[1489]: time="2025-09-11T00:01:11.467483677Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.33.5\" returns image reference \"sha256:021a8d45ab0c346664e47d95595ff5180ce90a22a681ea27904c65ae90788e70\""
Sep 11 00:01:11.467931 containerd[1489]: time="2025-09-11T00:01:11.467877639Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.0\""
Sep 11 00:01:12.061311 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3447835088.mount: Deactivated successfully.
Sep 11 00:01:12.746520 containerd[1489]: time="2025-09-11T00:01:12.746462219Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.12.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 11 00:01:12.748032 containerd[1489]: time="2025-09-11T00:01:12.747990905Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.12.0: active requests=0, bytes read=19152119"
Sep 11 00:01:12.749036 containerd[1489]: time="2025-09-11T00:01:12.748995587Z" level=info msg="ImageCreate event name:\"sha256:f72407be9e08c3a1b29a88318cbfee87b9f2da489f84015a5090b1e386e4dbc1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 11 00:01:12.752436 containerd[1489]: time="2025-09-11T00:01:12.752400200Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:40384aa1f5ea6bfdc77997d243aec73da05f27aed0c5e9d65bfa98933c519d97\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 11 00:01:12.754312 containerd[1489]: time="2025-09-11T00:01:12.754255518Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.12.0\" with image id \"sha256:f72407be9e08c3a1b29a88318cbfee87b9f2da489f84015a5090b1e386e4dbc1\", repo tag \"registry.k8s.io/coredns/coredns:v1.12.0\", repo digest \"registry.k8s.io/coredns/coredns@sha256:40384aa1f5ea6bfdc77997d243aec73da05f27aed0c5e9d65bfa98933c519d97\", size \"19148915\" in 1.286338874s"
Sep 11 00:01:12.754377 containerd[1489]: time="2025-09-11T00:01:12.754310779Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.0\" returns image reference \"sha256:f72407be9e08c3a1b29a88318cbfee87b9f2da489f84015a5090b1e386e4dbc1\""
Sep 11 00:01:12.754831 containerd[1489]: time="2025-09-11T00:01:12.754805501Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\""
Sep 11 00:01:13.177600 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4113253892.mount: Deactivated successfully.
Sep 11 00:01:13.182222 containerd[1489]: time="2025-09-11T00:01:13.182173880Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Sep 11 00:01:13.183351 containerd[1489]: time="2025-09-11T00:01:13.183276859Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=268705"
Sep 11 00:01:13.184306 containerd[1489]: time="2025-09-11T00:01:13.184249775Z" level=info msg="ImageCreate event name:\"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Sep 11 00:01:13.186492 containerd[1489]: time="2025-09-11T00:01:13.186443114Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Sep 11 00:01:13.187858 containerd[1489]: time="2025-09-11T00:01:13.187816375Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"267933\" in 432.974863ms"
Sep 11 00:01:13.187895 containerd[1489]: time="2025-09-11T00:01:13.187855470Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\""
Sep 11 00:01:13.188337 containerd[1489]: time="2025-09-11T00:01:13.188307968Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.21-0\""
Sep 11 00:01:13.589256 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2803373156.mount: Deactivated successfully.
Sep 11 00:01:15.324211 containerd[1489]: time="2025-09-11T00:01:15.324145940Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.21-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 11 00:01:15.325223 containerd[1489]: time="2025-09-11T00:01:15.325015273Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.21-0: active requests=0, bytes read=69465859"
Sep 11 00:01:15.326053 containerd[1489]: time="2025-09-11T00:01:15.326017247Z" level=info msg="ImageCreate event name:\"sha256:31747a36ce712f0bf61b50a0c06e99768522025e7b8daedd6dc63d1ae84837b5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 11 00:01:15.328726 containerd[1489]: time="2025-09-11T00:01:15.328688937Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:d58c035df557080a27387d687092e3fc2b64c6d0e3162dc51453a115f847d121\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 11 00:01:15.330013 containerd[1489]: time="2025-09-11T00:01:15.329967070Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.21-0\" with image id \"sha256:31747a36ce712f0bf61b50a0c06e99768522025e7b8daedd6dc63d1ae84837b5\", repo tag \"registry.k8s.io/etcd:3.5.21-0\", repo digest \"registry.k8s.io/etcd@sha256:d58c035df557080a27387d687092e3fc2b64c6d0e3162dc51453a115f847d121\", size \"70026017\" in 2.141620673s"
Sep 11 00:01:15.330013 containerd[1489]: time="2025-09-11T00:01:15.330005571Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.21-0\" returns image reference \"sha256:31747a36ce712f0bf61b50a0c06e99768522025e7b8daedd6dc63d1ae84837b5\""
Sep 11 00:01:18.727112 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Sep 11 00:01:18.727264 systemd[1]: kubelet.service: Consumed 150ms CPU time, 107M memory peak.
Sep 11 00:01:18.729293 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Sep 11 00:01:18.750645 systemd[1]: Reload requested from client PID 2162 ('systemctl') (unit session-7.scope)...
Sep 11 00:01:18.750664 systemd[1]: Reloading...
Sep 11 00:01:18.827376 zram_generator::config[2206]: No configuration found.
Sep 11 00:01:18.940735 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Sep 11 00:01:19.029455 systemd[1]: Reloading finished in 278 ms.
Sep 11 00:01:19.092942 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM
Sep 11 00:01:19.093027 systemd[1]: kubelet.service: Failed with result 'signal'.
Sep 11 00:01:19.093327 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Sep 11 00:01:19.093390 systemd[1]: kubelet.service: Consumed 93ms CPU time, 95.1M memory peak.
Sep 11 00:01:19.095032 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Sep 11 00:01:19.239092 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Sep 11 00:01:19.255703 (kubelet)[2251]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS
Sep 11 00:01:19.287956 kubelet[2251]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Sep 11 00:01:19.287956 kubelet[2251]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI.
Sep 11 00:01:19.287956 kubelet[2251]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Sep 11 00:01:19.288271 kubelet[2251]: I0911 00:01:19.287935 2251 server.go:212] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
Sep 11 00:01:19.734854 kubelet[2251]: I0911 00:01:19.734730 2251 server.go:530] "Kubelet version" kubeletVersion="v1.33.0"
Sep 11 00:01:19.734854 kubelet[2251]: I0911 00:01:19.734769 2251 server.go:532] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Sep 11 00:01:19.735100 kubelet[2251]: I0911 00:01:19.735011 2251 server.go:956] "Client rotation is on, will bootstrap in background"
Sep 11 00:01:19.758331 kubelet[2251]: I0911 00:01:19.758290 2251 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Sep 11 00:01:19.759164 kubelet[2251]: E0911 00:01:19.759120 2251 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://10.0.0.103:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.103:6443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError"
Sep 11 00:01:19.769613 kubelet[2251]: I0911 00:01:19.769559 2251 server.go:1446] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd"
Sep 11 00:01:19.772472 kubelet[2251]: I0911 00:01:19.772441 2251 server.go:782] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /"
Sep 11 00:01:19.774431 kubelet[2251]: I0911 00:01:19.774374 2251 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Sep 11 00:01:19.774599 kubelet[2251]: I0911 00:01:19.774427 2251 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2}
Sep 11 00:01:19.774707 kubelet[2251]: I0911 00:01:19.774693 2251 topology_manager.go:138] "Creating topology manager with none policy"
Sep 11 00:01:19.774707 kubelet[2251]: I0911 00:01:19.774706 2251 container_manager_linux.go:303] "Creating device plugin manager"
Sep 11 00:01:19.775468 kubelet[2251]: I0911 00:01:19.775434 2251 state_mem.go:36] "Initialized new in-memory state store"
Sep 11 00:01:19.777910 kubelet[2251]: I0911 00:01:19.777876 2251 kubelet.go:480] "Attempting to sync node with API server"
Sep 11 00:01:19.777910 kubelet[2251]: I0911 00:01:19.777904 2251 kubelet.go:375] "Adding static pod path" path="/etc/kubernetes/manifests"
Sep 11 00:01:19.778009 kubelet[2251]: I0911 00:01:19.777931 2251 kubelet.go:386] "Adding apiserver pod source"
Sep 11 00:01:19.778009 kubelet[2251]: I0911 00:01:19.777944 2251 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Sep 11 00:01:19.779176 kubelet[2251]: I0911 00:01:19.779125 2251 kuberuntime_manager.go:279] "Container runtime initialized" containerRuntime="containerd" version="v2.0.4" apiVersion="v1"
Sep 11 00:01:19.779883 kubelet[2251]: I0911 00:01:19.779843 2251 kubelet.go:935] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled"
Sep 11 00:01:19.780369 kubelet[2251]: W0911 00:01:19.780007 2251 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating.
Sep 11 00:01:19.780795 kubelet[2251]: E0911 00:01:19.780767 2251 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.0.0.103:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.103:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
Sep 11 00:01:19.781361 kubelet[2251]: E0911 00:01:19.781291 2251 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: Get \"https://10.0.0.103:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.103:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
Sep 11 00:01:19.782416 kubelet[2251]: I0911 00:01:19.782396 2251 watchdog_linux.go:99] "Systemd watchdog is not enabled"
Sep 11 00:01:19.782495 kubelet[2251]: I0911 00:01:19.782447 2251 server.go:1289] "Started kubelet"
Sep 11 00:01:19.782787 kubelet[2251]: I0911 00:01:19.782757 2251 server.go:180] "Starting to listen" address="0.0.0.0" port=10250
Sep 11 00:01:19.785194 kubelet[2251]: I0911 00:01:19.784366 2251 server.go:317] "Adding debug handlers to kubelet server"
Sep 11 00:01:19.785194 kubelet[2251]: I0911 00:01:19.785169 2251 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
Sep 11 00:01:19.785603 kubelet[2251]: I0911 00:01:19.785576 2251 server.go:255] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Sep 11 00:01:19.786038 kubelet[2251]: E0911 00:01:19.784919 2251 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.103:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.103:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.18641164eed104ac default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2025-09-11 00:01:19.782413484 +0000 UTC m=+0.523290799,LastTimestamp:2025-09-11 00:01:19.782413484 +0000 UTC m=+0.523290799,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}"
Sep 11 00:01:19.787123 kubelet[2251]: I0911 00:01:19.786908 2251 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Sep 11 00:01:19.787458 kubelet[2251]: I0911 00:01:19.787438 2251 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key"
Sep 11 00:01:19.788512 kubelet[2251]: E0911 00:01:19.788488 2251 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found"
Sep 11 00:01:19.788789 kubelet[2251]: I0911 00:01:19.788778 2251 volume_manager.go:297] "Starting Kubelet Volume Manager"
Sep 11 00:01:19.789041 kubelet[2251]: I0911 00:01:19.789020 2251 desired_state_of_world_populator.go:150] "Desired state populator starts to run"
Sep 11 00:01:19.790033 kubelet[2251]: I0911 00:01:19.789361 2251 reconciler.go:26] "Reconciler: start to sync state"
Sep 11 00:01:19.790033 kubelet[2251]: E0911 00:01:19.789707 2251 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.103:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.103:6443: connect: connection refused" interval="200ms"
Sep 11 00:01:19.790033 kubelet[2251]: I0911 00:01:19.789808 2251 factory.go:223] Registration of the systemd container factory successfully
Sep 11 00:01:19.790189 kubelet[2251]: E0911 00:01:19.789815 2251 kubelet.go:1600] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem"
Sep 11 00:01:19.790189 kubelet[2251]: I0911 00:01:19.790100 2251 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory
Sep 11 00:01:19.790454 kubelet[2251]: E0911 00:01:19.790420 2251 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://10.0.0.103:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.103:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
Sep 11 00:01:19.793689 kubelet[2251]: I0911 00:01:19.793135 2251 factory.go:223] Registration of the containerd container factory successfully
Sep 11 00:01:19.806361 kubelet[2251]: I0911 00:01:19.806180 2251 cpu_manager.go:221] "Starting CPU manager" policy="none"
Sep 11 00:01:19.806361 kubelet[2251]: I0911 00:01:19.806246 2251 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s"
Sep 11 00:01:19.806361 kubelet[2251]: I0911 00:01:19.806266 2251 state_mem.go:36] "Initialized new in-memory state store"
Sep 11 00:01:19.813137 kubelet[2251]: I0911 00:01:19.813085 2251 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv4"
Sep 11 00:01:19.814293 kubelet[2251]: I0911 00:01:19.814256 2251 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv6"
Sep 11 00:01:19.814293 kubelet[2251]: I0911 00:01:19.814285 2251 status_manager.go:230] "Starting to sync pod status with apiserver"
Sep 11 00:01:19.814424 kubelet[2251]: I0911 00:01:19.814308 2251 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started."
Sep 11 00:01:19.814424 kubelet[2251]: I0911 00:01:19.814314 2251 kubelet.go:2436] "Starting kubelet main sync loop" Sep 11 00:01:19.814424 kubelet[2251]: E0911 00:01:19.814373 2251 kubelet.go:2460] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Sep 11 00:01:19.815437 kubelet[2251]: E0911 00:01:19.815403 2251 reflector.go:200] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://10.0.0.103:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.103:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Sep 11 00:01:19.889205 kubelet[2251]: E0911 00:01:19.889125 2251 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 11 00:01:19.915475 kubelet[2251]: E0911 00:01:19.915390 2251 kubelet.go:2460] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Sep 11 00:01:19.989793 kubelet[2251]: E0911 00:01:19.989664 2251 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 11 00:01:19.991306 kubelet[2251]: E0911 00:01:19.991279 2251 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.103:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.103:6443: connect: connection refused" interval="400ms" Sep 11 00:01:20.026498 kubelet[2251]: I0911 00:01:20.026457 2251 policy_none.go:49] "None policy: Start" Sep 11 00:01:20.026498 kubelet[2251]: I0911 00:01:20.026486 2251 memory_manager.go:186] "Starting memorymanager" policy="None" Sep 11 00:01:20.026589 kubelet[2251]: I0911 00:01:20.026516 2251 state_mem.go:35] "Initializing new in-memory state store" Sep 11 00:01:20.033189 systemd[1]: Created slice 
kubepods.slice - libcontainer container kubepods.slice. Sep 11 00:01:20.058223 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Sep 11 00:01:20.061657 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. Sep 11 00:01:20.074109 kubelet[2251]: E0911 00:01:20.074058 2251 manager.go:517] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" Sep 11 00:01:20.074307 kubelet[2251]: I0911 00:01:20.074272 2251 eviction_manager.go:189] "Eviction manager: starting control loop" Sep 11 00:01:20.074372 kubelet[2251]: I0911 00:01:20.074309 2251 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Sep 11 00:01:20.074590 kubelet[2251]: I0911 00:01:20.074558 2251 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Sep 11 00:01:20.075484 kubelet[2251]: E0911 00:01:20.075460 2251 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" Sep 11 00:01:20.075538 kubelet[2251]: E0911 00:01:20.075499 2251 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" Sep 11 00:01:20.125418 systemd[1]: Created slice kubepods-burstable-podb678d5c6713e936e66aa5bb73166297e.slice - libcontainer container kubepods-burstable-podb678d5c6713e936e66aa5bb73166297e.slice. Sep 11 00:01:20.157663 kubelet[2251]: E0911 00:01:20.157631 2251 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Sep 11 00:01:20.161596 systemd[1]: Created slice kubepods-burstable-pod7b968cf906b2d9d713a362c43868bef2.slice - libcontainer container kubepods-burstable-pod7b968cf906b2d9d713a362c43868bef2.slice. 
Sep 11 00:01:20.175401 kubelet[2251]: I0911 00:01:20.175381 2251 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Sep 11 00:01:20.175835 kubelet[2251]: E0911 00:01:20.175810 2251 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.103:6443/api/v1/nodes\": dial tcp 10.0.0.103:6443: connect: connection refused" node="localhost" Sep 11 00:01:20.179546 kubelet[2251]: E0911 00:01:20.179395 2251 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Sep 11 00:01:20.182115 systemd[1]: Created slice kubepods-burstable-podc68c2bf2bd1d9265b712bc34a533fcd2.slice - libcontainer container kubepods-burstable-podc68c2bf2bd1d9265b712bc34a533fcd2.slice. Sep 11 00:01:20.183724 kubelet[2251]: E0911 00:01:20.183578 2251 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Sep 11 00:01:20.191865 kubelet[2251]: I0911 00:01:20.191832 2251 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/b678d5c6713e936e66aa5bb73166297e-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"b678d5c6713e936e66aa5bb73166297e\") " pod="kube-system/kube-controller-manager-localhost" Sep 11 00:01:20.191948 kubelet[2251]: I0911 00:01:20.191890 2251 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/b678d5c6713e936e66aa5bb73166297e-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"b678d5c6713e936e66aa5bb73166297e\") " pod="kube-system/kube-controller-manager-localhost" Sep 11 00:01:20.191989 kubelet[2251]: I0911 00:01:20.191964 2251 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/c68c2bf2bd1d9265b712bc34a533fcd2-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"c68c2bf2bd1d9265b712bc34a533fcd2\") " pod="kube-system/kube-apiserver-localhost" Sep 11 00:01:20.192020 kubelet[2251]: I0911 00:01:20.191999 2251 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/c68c2bf2bd1d9265b712bc34a533fcd2-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"c68c2bf2bd1d9265b712bc34a533fcd2\") " pod="kube-system/kube-apiserver-localhost" Sep 11 00:01:20.192040 kubelet[2251]: I0911 00:01:20.192021 2251 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/b678d5c6713e936e66aa5bb73166297e-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"b678d5c6713e936e66aa5bb73166297e\") " pod="kube-system/kube-controller-manager-localhost" Sep 11 00:01:20.192040 kubelet[2251]: I0911 00:01:20.192036 2251 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/b678d5c6713e936e66aa5bb73166297e-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"b678d5c6713e936e66aa5bb73166297e\") " pod="kube-system/kube-controller-manager-localhost" Sep 11 00:01:20.192077 kubelet[2251]: I0911 00:01:20.192053 2251 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/b678d5c6713e936e66aa5bb73166297e-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"b678d5c6713e936e66aa5bb73166297e\") " pod="kube-system/kube-controller-manager-localhost" Sep 11 00:01:20.192099 kubelet[2251]: I0911 00:01:20.192079 2251 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/7b968cf906b2d9d713a362c43868bef2-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"7b968cf906b2d9d713a362c43868bef2\") " pod="kube-system/kube-scheduler-localhost" Sep 11 00:01:20.192120 kubelet[2251]: I0911 00:01:20.192107 2251 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/c68c2bf2bd1d9265b712bc34a533fcd2-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"c68c2bf2bd1d9265b712bc34a533fcd2\") " pod="kube-system/kube-apiserver-localhost" Sep 11 00:01:20.378015 kubelet[2251]: I0911 00:01:20.377868 2251 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Sep 11 00:01:20.379847 kubelet[2251]: E0911 00:01:20.378453 2251 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.103:6443/api/v1/nodes\": dial tcp 10.0.0.103:6443: connect: connection refused" node="localhost" Sep 11 00:01:20.392033 kubelet[2251]: E0911 00:01:20.391977 2251 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.103:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.103:6443: connect: connection refused" interval="800ms" Sep 11 00:01:20.458495 kubelet[2251]: E0911 00:01:20.458421 2251 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 11 00:01:20.459165 containerd[1489]: time="2025-09-11T00:01:20.459087329Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:b678d5c6713e936e66aa5bb73166297e,Namespace:kube-system,Attempt:0,}" Sep 11 00:01:20.477021 containerd[1489]: time="2025-09-11T00:01:20.476974221Z" level=info msg="connecting to shim 
fdcb73c6f10e6f33f0d8c92328d7f0fb3e223b9f96f3e4778267d04bc7022297" address="unix:///run/containerd/s/a49935fa6299228089d0a8b20b62f0101e4b2c85e5e465616ba98c27e53f15f2" namespace=k8s.io protocol=ttrpc version=3 Sep 11 00:01:20.482432 kubelet[2251]: E0911 00:01:20.482327 2251 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 11 00:01:20.483994 containerd[1489]: time="2025-09-11T00:01:20.482905389Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:7b968cf906b2d9d713a362c43868bef2,Namespace:kube-system,Attempt:0,}" Sep 11 00:01:20.484072 kubelet[2251]: E0911 00:01:20.483976 2251 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 11 00:01:20.484391 containerd[1489]: time="2025-09-11T00:01:20.484361491Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:c68c2bf2bd1d9265b712bc34a533fcd2,Namespace:kube-system,Attempt:0,}" Sep 11 00:01:20.507568 systemd[1]: Started cri-containerd-fdcb73c6f10e6f33f0d8c92328d7f0fb3e223b9f96f3e4778267d04bc7022297.scope - libcontainer container fdcb73c6f10e6f33f0d8c92328d7f0fb3e223b9f96f3e4778267d04bc7022297. 
Sep 11 00:01:20.521916 containerd[1489]: time="2025-09-11T00:01:20.521858441Z" level=info msg="connecting to shim ef370437e3b66c88006826c70101e3c8f345029e3c14eee2147c6ea9d190d57f" address="unix:///run/containerd/s/5df6d370b0614a2caea139081e55ed1cdf0a75731786af587c2f0ab10a7bba14" namespace=k8s.io protocol=ttrpc version=3 Sep 11 00:01:20.524386 containerd[1489]: time="2025-09-11T00:01:20.523676432Z" level=info msg="connecting to shim 2b84714954170eaf5aca10a658d24fe535334cc0810e0060f85e6f413e48cb60" address="unix:///run/containerd/s/3ee67378e5e367d5a7364bbc1ffb17ba92664580bfd3df44f5d0e7a9417e11ba" namespace=k8s.io protocol=ttrpc version=3 Sep 11 00:01:20.554525 systemd[1]: Started cri-containerd-2b84714954170eaf5aca10a658d24fe535334cc0810e0060f85e6f413e48cb60.scope - libcontainer container 2b84714954170eaf5aca10a658d24fe535334cc0810e0060f85e6f413e48cb60. Sep 11 00:01:20.556036 systemd[1]: Started cri-containerd-ef370437e3b66c88006826c70101e3c8f345029e3c14eee2147c6ea9d190d57f.scope - libcontainer container ef370437e3b66c88006826c70101e3c8f345029e3c14eee2147c6ea9d190d57f. 
Sep 11 00:01:20.563759 containerd[1489]: time="2025-09-11T00:01:20.563713075Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:b678d5c6713e936e66aa5bb73166297e,Namespace:kube-system,Attempt:0,} returns sandbox id \"fdcb73c6f10e6f33f0d8c92328d7f0fb3e223b9f96f3e4778267d04bc7022297\"" Sep 11 00:01:20.565125 kubelet[2251]: E0911 00:01:20.565099 2251 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 11 00:01:20.570132 containerd[1489]: time="2025-09-11T00:01:20.570088860Z" level=info msg="CreateContainer within sandbox \"fdcb73c6f10e6f33f0d8c92328d7f0fb3e223b9f96f3e4778267d04bc7022297\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Sep 11 00:01:20.579212 containerd[1489]: time="2025-09-11T00:01:20.579152423Z" level=info msg="Container 9dc97318d4042763757ac5ee022ae9ec08437c640fde1757d20a631138542b1b: CDI devices from CRI Config.CDIDevices: []" Sep 11 00:01:20.588602 containerd[1489]: time="2025-09-11T00:01:20.588240301Z" level=info msg="CreateContainer within sandbox \"fdcb73c6f10e6f33f0d8c92328d7f0fb3e223b9f96f3e4778267d04bc7022297\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"9dc97318d4042763757ac5ee022ae9ec08437c640fde1757d20a631138542b1b\"" Sep 11 00:01:20.589577 containerd[1489]: time="2025-09-11T00:01:20.589492900Z" level=info msg="StartContainer for \"9dc97318d4042763757ac5ee022ae9ec08437c640fde1757d20a631138542b1b\"" Sep 11 00:01:20.593161 containerd[1489]: time="2025-09-11T00:01:20.593108559Z" level=info msg="connecting to shim 9dc97318d4042763757ac5ee022ae9ec08437c640fde1757d20a631138542b1b" address="unix:///run/containerd/s/a49935fa6299228089d0a8b20b62f0101e4b2c85e5e465616ba98c27e53f15f2" protocol=ttrpc version=3 Sep 11 00:01:20.601264 kubelet[2251]: E0911 00:01:20.601161 2251 reflector.go:200] "Failed to watch" 
err="failed to list *v1.Node: Get \"https://10.0.0.103:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.103:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Sep 11 00:01:20.602883 containerd[1489]: time="2025-09-11T00:01:20.602742306Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:7b968cf906b2d9d713a362c43868bef2,Namespace:kube-system,Attempt:0,} returns sandbox id \"ef370437e3b66c88006826c70101e3c8f345029e3c14eee2147c6ea9d190d57f\"" Sep 11 00:01:20.603791 kubelet[2251]: E0911 00:01:20.603765 2251 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 11 00:01:20.603910 containerd[1489]: time="2025-09-11T00:01:20.603780742Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:c68c2bf2bd1d9265b712bc34a533fcd2,Namespace:kube-system,Attempt:0,} returns sandbox id \"2b84714954170eaf5aca10a658d24fe535334cc0810e0060f85e6f413e48cb60\"" Sep 11 00:01:20.604275 kubelet[2251]: E0911 00:01:20.604250 2251 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 11 00:01:20.607692 containerd[1489]: time="2025-09-11T00:01:20.607635478Z" level=info msg="CreateContainer within sandbox \"ef370437e3b66c88006826c70101e3c8f345029e3c14eee2147c6ea9d190d57f\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Sep 11 00:01:20.609227 containerd[1489]: time="2025-09-11T00:01:20.609192353Z" level=info msg="CreateContainer within sandbox \"2b84714954170eaf5aca10a658d24fe535334cc0810e0060f85e6f413e48cb60\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Sep 11 00:01:20.616043 containerd[1489]: 
time="2025-09-11T00:01:20.615996104Z" level=info msg="Container dfd78934bd7de007f9e47a09d90b9ef1f65656a887d1f35c166d934cb406e741: CDI devices from CRI Config.CDIDevices: []" Sep 11 00:01:20.620572 containerd[1489]: time="2025-09-11T00:01:20.620495286Z" level=info msg="Container 548f63224d7f69232ea77f2ad00537f3d08041d52500290a98c148ade0406a80: CDI devices from CRI Config.CDIDevices: []" Sep 11 00:01:20.621581 systemd[1]: Started cri-containerd-9dc97318d4042763757ac5ee022ae9ec08437c640fde1757d20a631138542b1b.scope - libcontainer container 9dc97318d4042763757ac5ee022ae9ec08437c640fde1757d20a631138542b1b. Sep 11 00:01:20.626491 containerd[1489]: time="2025-09-11T00:01:20.626413199Z" level=info msg="CreateContainer within sandbox \"ef370437e3b66c88006826c70101e3c8f345029e3c14eee2147c6ea9d190d57f\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"dfd78934bd7de007f9e47a09d90b9ef1f65656a887d1f35c166d934cb406e741\"" Sep 11 00:01:20.627409 containerd[1489]: time="2025-09-11T00:01:20.627380846Z" level=info msg="StartContainer for \"dfd78934bd7de007f9e47a09d90b9ef1f65656a887d1f35c166d934cb406e741\"" Sep 11 00:01:20.628763 containerd[1489]: time="2025-09-11T00:01:20.628678720Z" level=info msg="connecting to shim dfd78934bd7de007f9e47a09d90b9ef1f65656a887d1f35c166d934cb406e741" address="unix:///run/containerd/s/5df6d370b0614a2caea139081e55ed1cdf0a75731786af587c2f0ab10a7bba14" protocol=ttrpc version=3 Sep 11 00:01:20.629522 containerd[1489]: time="2025-09-11T00:01:20.628776459Z" level=info msg="CreateContainer within sandbox \"2b84714954170eaf5aca10a658d24fe535334cc0810e0060f85e6f413e48cb60\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"548f63224d7f69232ea77f2ad00537f3d08041d52500290a98c148ade0406a80\"" Sep 11 00:01:20.630257 containerd[1489]: time="2025-09-11T00:01:20.630223737Z" level=info msg="StartContainer for \"548f63224d7f69232ea77f2ad00537f3d08041d52500290a98c148ade0406a80\"" Sep 11 00:01:20.631322 
containerd[1489]: time="2025-09-11T00:01:20.631295231Z" level=info msg="connecting to shim 548f63224d7f69232ea77f2ad00537f3d08041d52500290a98c148ade0406a80" address="unix:///run/containerd/s/3ee67378e5e367d5a7364bbc1ffb17ba92664580bfd3df44f5d0e7a9417e11ba" protocol=ttrpc version=3 Sep 11 00:01:20.652592 systemd[1]: Started cri-containerd-548f63224d7f69232ea77f2ad00537f3d08041d52500290a98c148ade0406a80.scope - libcontainer container 548f63224d7f69232ea77f2ad00537f3d08041d52500290a98c148ade0406a80. Sep 11 00:01:20.654363 systemd[1]: Started cri-containerd-dfd78934bd7de007f9e47a09d90b9ef1f65656a887d1f35c166d934cb406e741.scope - libcontainer container dfd78934bd7de007f9e47a09d90b9ef1f65656a887d1f35c166d934cb406e741. Sep 11 00:01:20.669564 containerd[1489]: time="2025-09-11T00:01:20.669522348Z" level=info msg="StartContainer for \"9dc97318d4042763757ac5ee022ae9ec08437c640fde1757d20a631138542b1b\" returns successfully" Sep 11 00:01:20.708392 containerd[1489]: time="2025-09-11T00:01:20.708336977Z" level=info msg="StartContainer for \"548f63224d7f69232ea77f2ad00537f3d08041d52500290a98c148ade0406a80\" returns successfully" Sep 11 00:01:20.708661 containerd[1489]: time="2025-09-11T00:01:20.708549263Z" level=info msg="StartContainer for \"dfd78934bd7de007f9e47a09d90b9ef1f65656a887d1f35c166d934cb406e741\" returns successfully" Sep 11 00:01:20.780636 kubelet[2251]: I0911 00:01:20.780601 2251 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Sep 11 00:01:20.781061 kubelet[2251]: E0911 00:01:20.780995 2251 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.103:6443/api/v1/nodes\": dial tcp 10.0.0.103:6443: connect: connection refused" node="localhost" Sep 11 00:01:20.821876 kubelet[2251]: E0911 00:01:20.821786 2251 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Sep 11 00:01:20.823610 kubelet[2251]: E0911 
00:01:20.823417 2251 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 11 00:01:20.824514 kubelet[2251]: E0911 00:01:20.824210 2251 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Sep 11 00:01:20.824514 kubelet[2251]: E0911 00:01:20.824323 2251 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 11 00:01:20.826460 kubelet[2251]: E0911 00:01:20.826440 2251 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Sep 11 00:01:20.826589 kubelet[2251]: E0911 00:01:20.826558 2251 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 11 00:01:21.583363 kubelet[2251]: I0911 00:01:21.583323 2251 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Sep 11 00:01:21.829040 kubelet[2251]: E0911 00:01:21.828911 2251 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Sep 11 00:01:21.829040 kubelet[2251]: E0911 00:01:21.829015 2251 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Sep 11 00:01:21.829197 kubelet[2251]: E0911 00:01:21.829057 2251 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 11 00:01:21.829197 kubelet[2251]: E0911 00:01:21.829131 2251 dns.go:153] "Nameserver limits 
exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 11 00:01:22.316124 kubelet[2251]: E0911 00:01:22.316084 2251 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"localhost\" not found" node="localhost" Sep 11 00:01:22.398445 kubelet[2251]: I0911 00:01:22.398408 2251 kubelet_node_status.go:78] "Successfully registered node" node="localhost" Sep 11 00:01:22.398445 kubelet[2251]: E0911 00:01:22.398450 2251 kubelet_node_status.go:548] "Error updating node status, will retry" err="error getting node \"localhost\": node \"localhost\" not found" Sep 11 00:01:22.409952 kubelet[2251]: E0911 00:01:22.409920 2251 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 11 00:01:22.510363 kubelet[2251]: E0911 00:01:22.510312 2251 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 11 00:01:22.610930 kubelet[2251]: E0911 00:01:22.610831 2251 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 11 00:01:22.711392 kubelet[2251]: E0911 00:01:22.711337 2251 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 11 00:01:22.811789 kubelet[2251]: E0911 00:01:22.811744 2251 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 11 00:01:22.912186 kubelet[2251]: E0911 00:01:22.912034 2251 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 11 00:01:23.012573 kubelet[2251]: E0911 00:01:23.012529 2251 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 11 00:01:23.089952 kubelet[2251]: I0911 00:01:23.089912 2251 kubelet.go:3309] "Creating a mirror 
pod for static pod" pod="kube-system/kube-apiserver-localhost" Sep 11 00:01:23.095254 kubelet[2251]: E0911 00:01:23.095217 2251 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-localhost" Sep 11 00:01:23.095254 kubelet[2251]: I0911 00:01:23.095247 2251 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost" Sep 11 00:01:23.097092 kubelet[2251]: E0911 00:01:23.097062 2251 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-controller-manager-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-localhost" Sep 11 00:01:23.097092 kubelet[2251]: I0911 00:01:23.097094 2251 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Sep 11 00:01:23.098839 kubelet[2251]: E0911 00:01:23.098776 2251 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-scheduler-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-localhost" Sep 11 00:01:23.780596 kubelet[2251]: I0911 00:01:23.780524 2251 apiserver.go:52] "Watching apiserver" Sep 11 00:01:23.791163 kubelet[2251]: I0911 00:01:23.791102 2251 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Sep 11 00:01:24.183763 systemd[1]: Reload requested from client PID 2535 ('systemctl') (unit session-7.scope)... Sep 11 00:01:24.183780 systemd[1]: Reloading... Sep 11 00:01:24.259465 zram_generator::config[2578]: No configuration found. Sep 11 00:01:24.330240 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. 
Sep 11 00:01:24.434048 systemd[1]: Reloading finished in 249 ms. Sep 11 00:01:24.454735 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Sep 11 00:01:24.471408 systemd[1]: kubelet.service: Deactivated successfully. Sep 11 00:01:24.471683 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Sep 11 00:01:24.471748 systemd[1]: kubelet.service: Consumed 902ms CPU time, 127.8M memory peak. Sep 11 00:01:24.473647 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 11 00:01:24.649100 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Sep 11 00:01:24.654189 (kubelet)[2620]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Sep 11 00:01:24.689860 kubelet[2620]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Sep 11 00:01:24.689860 kubelet[2620]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Sep 11 00:01:24.689860 kubelet[2620]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Sep 11 00:01:24.689860 kubelet[2620]: I0911 00:01:24.689807 2620 server.go:212] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Sep 11 00:01:24.697362 kubelet[2620]: I0911 00:01:24.697299 2620 server.go:530] "Kubelet version" kubeletVersion="v1.33.0" Sep 11 00:01:24.697362 kubelet[2620]: I0911 00:01:24.697334 2620 server.go:532] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Sep 11 00:01:24.697980 kubelet[2620]: I0911 00:01:24.697562 2620 server.go:956] "Client rotation is on, will bootstrap in background" Sep 11 00:01:24.699083 kubelet[2620]: I0911 00:01:24.699044 2620 certificate_store.go:147] "Loading cert/key pair from a file" filePath="/var/lib/kubelet/pki/kubelet-client-current.pem" Sep 11 00:01:24.701293 kubelet[2620]: I0911 00:01:24.701269 2620 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Sep 11 00:01:24.705263 kubelet[2620]: I0911 00:01:24.705242 2620 server.go:1446] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Sep 11 00:01:24.707920 kubelet[2620]: I0911 00:01:24.707897 2620 server.go:782] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Sep 11 00:01:24.708195 kubelet[2620]: I0911 00:01:24.708171 2620 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Sep 11 00:01:24.708423 kubelet[2620]: I0911 00:01:24.708251 2620 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Sep 11 00:01:24.708552 kubelet[2620]: I0911 00:01:24.708539 2620 topology_manager.go:138] "Creating topology manager with none policy" Sep 11 00:01:24.708603 
kubelet[2620]: I0911 00:01:24.708595 2620 container_manager_linux.go:303] "Creating device plugin manager" Sep 11 00:01:24.708693 kubelet[2620]: I0911 00:01:24.708683 2620 state_mem.go:36] "Initialized new in-memory state store" Sep 11 00:01:24.708998 kubelet[2620]: I0911 00:01:24.708982 2620 kubelet.go:480] "Attempting to sync node with API server" Sep 11 00:01:24.709066 kubelet[2620]: I0911 00:01:24.709058 2620 kubelet.go:375] "Adding static pod path" path="/etc/kubernetes/manifests" Sep 11 00:01:24.709132 kubelet[2620]: I0911 00:01:24.709125 2620 kubelet.go:386] "Adding apiserver pod source" Sep 11 00:01:24.709189 kubelet[2620]: I0911 00:01:24.709181 2620 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Sep 11 00:01:24.713351 kubelet[2620]: I0911 00:01:24.710991 2620 kuberuntime_manager.go:279] "Container runtime initialized" containerRuntime="containerd" version="v2.0.4" apiVersion="v1" Sep 11 00:01:24.713351 kubelet[2620]: I0911 00:01:24.711755 2620 kubelet.go:935] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled" Sep 11 00:01:24.716642 kubelet[2620]: I0911 00:01:24.716619 2620 watchdog_linux.go:99] "Systemd watchdog is not enabled" Sep 11 00:01:24.716702 kubelet[2620]: I0911 00:01:24.716663 2620 server.go:1289] "Started kubelet" Sep 11 00:01:24.717742 kubelet[2620]: I0911 00:01:24.717721 2620 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Sep 11 00:01:24.721187 kubelet[2620]: I0911 00:01:24.721156 2620 server.go:180] "Starting to listen" address="0.0.0.0" port=10250 Sep 11 00:01:24.722102 kubelet[2620]: I0911 00:01:24.722082 2620 server.go:317] "Adding debug handlers to kubelet server" Sep 11 00:01:24.733432 kubelet[2620]: I0911 00:01:24.733395 2620 volume_manager.go:297] "Starting Kubelet Volume Manager" Sep 11 00:01:24.733555 kubelet[2620]: I0911 00:01:24.733508 2620 desired_state_of_world_populator.go:150] "Desired state populator 
starts to run" Sep 11 00:01:24.734399 kubelet[2620]: I0911 00:01:24.733624 2620 reconciler.go:26] "Reconciler: start to sync state" Sep 11 00:01:24.734399 kubelet[2620]: I0911 00:01:24.734267 2620 factory.go:223] Registration of the systemd container factory successfully Sep 11 00:01:24.734399 kubelet[2620]: I0911 00:01:24.734391 2620 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Sep 11 00:01:24.734531 kubelet[2620]: I0911 00:01:24.734476 2620 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Sep 11 00:01:24.734895 kubelet[2620]: I0911 00:01:24.734865 2620 server.go:255] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Sep 11 00:01:24.739633 kubelet[2620]: I0911 00:01:24.739593 2620 factory.go:223] Registration of the containerd container factory successfully Sep 11 00:01:24.741878 kubelet[2620]: E0911 00:01:24.741835 2620 kubelet.go:1600] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Sep 11 00:01:24.743812 kubelet[2620]: I0911 00:01:24.743774 2620 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Sep 11 00:01:24.760545 kubelet[2620]: I0911 00:01:24.760292 2620 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv4" Sep 11 00:01:24.762460 kubelet[2620]: I0911 00:01:24.762434 2620 kubelet_network_linux.go:49] "Initialized iptables rules." 
protocol="IPv6" Sep 11 00:01:24.762593 kubelet[2620]: I0911 00:01:24.762583 2620 status_manager.go:230] "Starting to sync pod status with apiserver" Sep 11 00:01:24.762683 kubelet[2620]: I0911 00:01:24.762668 2620 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." Sep 11 00:01:24.762757 kubelet[2620]: I0911 00:01:24.762733 2620 kubelet.go:2436] "Starting kubelet main sync loop" Sep 11 00:01:24.762829 kubelet[2620]: E0911 00:01:24.762807 2620 kubelet.go:2460] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Sep 11 00:01:24.782104 kubelet[2620]: I0911 00:01:24.782073 2620 cpu_manager.go:221] "Starting CPU manager" policy="none" Sep 11 00:01:24.782104 kubelet[2620]: I0911 00:01:24.782095 2620 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Sep 11 00:01:24.782104 kubelet[2620]: I0911 00:01:24.782123 2620 state_mem.go:36] "Initialized new in-memory state store" Sep 11 00:01:24.782310 kubelet[2620]: I0911 00:01:24.782298 2620 state_mem.go:88] "Updated default CPUSet" cpuSet="" Sep 11 00:01:24.782334 kubelet[2620]: I0911 00:01:24.782309 2620 state_mem.go:96] "Updated CPUSet assignments" assignments={} Sep 11 00:01:24.782334 kubelet[2620]: I0911 00:01:24.782329 2620 policy_none.go:49] "None policy: Start" Sep 11 00:01:24.782423 kubelet[2620]: I0911 00:01:24.782339 2620 memory_manager.go:186] "Starting memorymanager" policy="None" Sep 11 00:01:24.782423 kubelet[2620]: I0911 00:01:24.782402 2620 state_mem.go:35] "Initializing new in-memory state store" Sep 11 00:01:24.782525 kubelet[2620]: I0911 00:01:24.782509 2620 state_mem.go:75] "Updated machine memory state" Sep 11 00:01:24.787846 kubelet[2620]: E0911 00:01:24.787808 2620 manager.go:517] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" Sep 11 00:01:24.788057 kubelet[2620]: I0911 
00:01:24.788033 2620 eviction_manager.go:189] "Eviction manager: starting control loop" Sep 11 00:01:24.788103 kubelet[2620]: I0911 00:01:24.788054 2620 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Sep 11 00:01:24.788308 kubelet[2620]: I0911 00:01:24.788290 2620 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Sep 11 00:01:24.791367 kubelet[2620]: E0911 00:01:24.790148 2620 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" Sep 11 00:01:24.863838 kubelet[2620]: I0911 00:01:24.863795 2620 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost" Sep 11 00:01:24.863989 kubelet[2620]: I0911 00:01:24.863865 2620 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Sep 11 00:01:24.864064 kubelet[2620]: I0911 00:01:24.864051 2620 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Sep 11 00:01:24.896792 kubelet[2620]: I0911 00:01:24.896753 2620 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Sep 11 00:01:24.934915 kubelet[2620]: I0911 00:01:24.934874 2620 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/b678d5c6713e936e66aa5bb73166297e-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"b678d5c6713e936e66aa5bb73166297e\") " pod="kube-system/kube-controller-manager-localhost" Sep 11 00:01:24.934915 kubelet[2620]: I0911 00:01:24.934916 2620 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/7b968cf906b2d9d713a362c43868bef2-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: 
\"7b968cf906b2d9d713a362c43868bef2\") " pod="kube-system/kube-scheduler-localhost" Sep 11 00:01:24.935088 kubelet[2620]: I0911 00:01:24.934940 2620 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/b678d5c6713e936e66aa5bb73166297e-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"b678d5c6713e936e66aa5bb73166297e\") " pod="kube-system/kube-controller-manager-localhost" Sep 11 00:01:24.935088 kubelet[2620]: I0911 00:01:24.934955 2620 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/b678d5c6713e936e66aa5bb73166297e-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"b678d5c6713e936e66aa5bb73166297e\") " pod="kube-system/kube-controller-manager-localhost" Sep 11 00:01:24.935088 kubelet[2620]: I0911 00:01:24.934971 2620 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/c68c2bf2bd1d9265b712bc34a533fcd2-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"c68c2bf2bd1d9265b712bc34a533fcd2\") " pod="kube-system/kube-apiserver-localhost" Sep 11 00:01:24.935088 kubelet[2620]: I0911 00:01:24.934985 2620 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/c68c2bf2bd1d9265b712bc34a533fcd2-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"c68c2bf2bd1d9265b712bc34a533fcd2\") " pod="kube-system/kube-apiserver-localhost" Sep 11 00:01:24.935088 kubelet[2620]: I0911 00:01:24.935001 2620 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/c68c2bf2bd1d9265b712bc34a533fcd2-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: 
\"c68c2bf2bd1d9265b712bc34a533fcd2\") " pod="kube-system/kube-apiserver-localhost" Sep 11 00:01:24.935200 kubelet[2620]: I0911 00:01:24.935015 2620 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/b678d5c6713e936e66aa5bb73166297e-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"b678d5c6713e936e66aa5bb73166297e\") " pod="kube-system/kube-controller-manager-localhost" Sep 11 00:01:24.935200 kubelet[2620]: I0911 00:01:24.935031 2620 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/b678d5c6713e936e66aa5bb73166297e-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"b678d5c6713e936e66aa5bb73166297e\") " pod="kube-system/kube-controller-manager-localhost" Sep 11 00:01:25.000366 kubelet[2620]: I0911 00:01:25.000157 2620 kubelet_node_status.go:124] "Node was previously registered" node="localhost" Sep 11 00:01:25.000366 kubelet[2620]: I0911 00:01:25.000258 2620 kubelet_node_status.go:78] "Successfully registered node" node="localhost" Sep 11 00:01:25.185134 kubelet[2620]: E0911 00:01:25.185100 2620 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 11 00:01:25.233805 kubelet[2620]: E0911 00:01:25.233545 2620 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 11 00:01:25.233805 kubelet[2620]: E0911 00:01:25.233724 2620 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 11 00:01:25.711062 kubelet[2620]: I0911 00:01:25.710967 2620 apiserver.go:52] "Watching apiserver" Sep 11 
00:01:25.733638 kubelet[2620]: I0911 00:01:25.733596 2620 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Sep 11 00:01:25.776000 kubelet[2620]: E0911 00:01:25.775961 2620 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 11 00:01:25.777003 kubelet[2620]: I0911 00:01:25.776976 2620 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Sep 11 00:01:25.778079 kubelet[2620]: I0911 00:01:25.778053 2620 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost" Sep 11 00:01:25.786041 kubelet[2620]: E0911 00:01:25.785996 2620 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-scheduler-localhost\" already exists" pod="kube-system/kube-scheduler-localhost" Sep 11 00:01:25.786171 kubelet[2620]: E0911 00:01:25.786162 2620 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 11 00:01:25.786523 kubelet[2620]: E0911 00:01:25.785996 2620 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-controller-manager-localhost\" already exists" pod="kube-system/kube-controller-manager-localhost" Sep 11 00:01:25.786654 kubelet[2620]: E0911 00:01:25.786630 2620 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 11 00:01:25.795656 kubelet[2620]: I0911 00:01:25.795589 2620 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-localhost" podStartSLOduration=1.7955728629999999 podStartE2EDuration="1.795572863s" podCreationTimestamp="2025-09-11 00:01:24 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 
UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-11 00:01:25.795565989 +0000 UTC m=+1.138038062" watchObservedRunningTime="2025-09-11 00:01:25.795572863 +0000 UTC m=+1.138044896" Sep 11 00:01:25.812178 kubelet[2620]: I0911 00:01:25.812102 2620 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-localhost" podStartSLOduration=1.812083903 podStartE2EDuration="1.812083903s" podCreationTimestamp="2025-09-11 00:01:24 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-11 00:01:25.802921444 +0000 UTC m=+1.145393517" watchObservedRunningTime="2025-09-11 00:01:25.812083903 +0000 UTC m=+1.154555976" Sep 11 00:01:25.823117 kubelet[2620]: I0911 00:01:25.823030 2620 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-localhost" podStartSLOduration=1.8230118050000002 podStartE2EDuration="1.823011805s" podCreationTimestamp="2025-09-11 00:01:24 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-11 00:01:25.812376904 +0000 UTC m=+1.154848977" watchObservedRunningTime="2025-09-11 00:01:25.823011805 +0000 UTC m=+1.165483878" Sep 11 00:01:26.777670 kubelet[2620]: E0911 00:01:26.777575 2620 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 11 00:01:26.777670 kubelet[2620]: E0911 00:01:26.777587 2620 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 11 00:01:26.778042 kubelet[2620]: E0911 00:01:26.777685 2620 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the 
applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 11 00:01:27.778638 kubelet[2620]: E0911 00:01:27.778593 2620 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 11 00:01:27.779068 kubelet[2620]: E0911 00:01:27.778718 2620 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 11 00:01:30.033811 kubelet[2620]: I0911 00:01:30.033779 2620 kuberuntime_manager.go:1746] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Sep 11 00:01:30.034765 containerd[1489]: time="2025-09-11T00:01:30.034634231Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Sep 11 00:01:30.035364 kubelet[2620]: I0911 00:01:30.035181 2620 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Sep 11 00:01:31.084692 systemd[1]: Created slice kubepods-besteffort-podfab93f41_4022_4d44_ab24_08c83c6e7d74.slice - libcontainer container kubepods-besteffort-podfab93f41_4022_4d44_ab24_08c83c6e7d74.slice. 
Sep 11 00:01:31.177010 kubelet[2620]: I0911 00:01:31.176944 2620 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/fab93f41-4022-4d44-ab24-08c83c6e7d74-kube-proxy\") pod \"kube-proxy-b7rj5\" (UID: \"fab93f41-4022-4d44-ab24-08c83c6e7d74\") " pod="kube-system/kube-proxy-b7rj5" Sep 11 00:01:31.177010 kubelet[2620]: I0911 00:01:31.176985 2620 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/fab93f41-4022-4d44-ab24-08c83c6e7d74-xtables-lock\") pod \"kube-proxy-b7rj5\" (UID: \"fab93f41-4022-4d44-ab24-08c83c6e7d74\") " pod="kube-system/kube-proxy-b7rj5" Sep 11 00:01:31.177010 kubelet[2620]: I0911 00:01:31.177006 2620 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/fab93f41-4022-4d44-ab24-08c83c6e7d74-lib-modules\") pod \"kube-proxy-b7rj5\" (UID: \"fab93f41-4022-4d44-ab24-08c83c6e7d74\") " pod="kube-system/kube-proxy-b7rj5" Sep 11 00:01:31.177010 kubelet[2620]: I0911 00:01:31.177023 2620 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4779b\" (UniqueName: \"kubernetes.io/projected/fab93f41-4022-4d44-ab24-08c83c6e7d74-kube-api-access-4779b\") pod \"kube-proxy-b7rj5\" (UID: \"fab93f41-4022-4d44-ab24-08c83c6e7d74\") " pod="kube-system/kube-proxy-b7rj5" Sep 11 00:01:31.250270 systemd[1]: Created slice kubepods-besteffort-podd033eff9_1439_4a28_940f_0e8957961973.slice - libcontainer container kubepods-besteffort-podd033eff9_1439_4a28_940f_0e8957961973.slice. 
Sep 11 00:01:31.277720 kubelet[2620]: I0911 00:01:31.277675 2620 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/d033eff9-1439-4a28-940f-0e8957961973-var-lib-calico\") pod \"tigera-operator-755d956888-n2t8c\" (UID: \"d033eff9-1439-4a28-940f-0e8957961973\") " pod="tigera-operator/tigera-operator-755d956888-n2t8c" Sep 11 00:01:31.277822 kubelet[2620]: I0911 00:01:31.277742 2620 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4n7lg\" (UniqueName: \"kubernetes.io/projected/d033eff9-1439-4a28-940f-0e8957961973-kube-api-access-4n7lg\") pod \"tigera-operator-755d956888-n2t8c\" (UID: \"d033eff9-1439-4a28-940f-0e8957961973\") " pod="tigera-operator/tigera-operator-755d956888-n2t8c" Sep 11 00:01:31.401032 kubelet[2620]: E0911 00:01:31.400940 2620 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 11 00:01:31.401821 containerd[1489]: time="2025-09-11T00:01:31.401573785Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-b7rj5,Uid:fab93f41-4022-4d44-ab24-08c83c6e7d74,Namespace:kube-system,Attempt:0,}" Sep 11 00:01:31.417506 containerd[1489]: time="2025-09-11T00:01:31.417421740Z" level=info msg="connecting to shim 8312d0beab4f214a3e19fbc759073815c96116214ebae6bcbd82f65612a9f799" address="unix:///run/containerd/s/28f5c8ac0cf426ebc9cf9be62d441a6b6383930310d14f225526f0e685e6ab62" namespace=k8s.io protocol=ttrpc version=3 Sep 11 00:01:31.448521 systemd[1]: Started cri-containerd-8312d0beab4f214a3e19fbc759073815c96116214ebae6bcbd82f65612a9f799.scope - libcontainer container 8312d0beab4f214a3e19fbc759073815c96116214ebae6bcbd82f65612a9f799. 
Sep 11 00:01:31.471084 containerd[1489]: time="2025-09-11T00:01:31.471047611Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-b7rj5,Uid:fab93f41-4022-4d44-ab24-08c83c6e7d74,Namespace:kube-system,Attempt:0,} returns sandbox id \"8312d0beab4f214a3e19fbc759073815c96116214ebae6bcbd82f65612a9f799\"" Sep 11 00:01:31.471782 kubelet[2620]: E0911 00:01:31.471758 2620 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 11 00:01:31.481014 containerd[1489]: time="2025-09-11T00:01:31.480982141Z" level=info msg="CreateContainer within sandbox \"8312d0beab4f214a3e19fbc759073815c96116214ebae6bcbd82f65612a9f799\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Sep 11 00:01:31.492788 containerd[1489]: time="2025-09-11T00:01:31.492751743Z" level=info msg="Container 23fe7aceaf55025633fe1424777ed8770b3b94046cb827264c6be264fbc31711: CDI devices from CRI Config.CDIDevices: []" Sep 11 00:01:31.499186 containerd[1489]: time="2025-09-11T00:01:31.499136147Z" level=info msg="CreateContainer within sandbox \"8312d0beab4f214a3e19fbc759073815c96116214ebae6bcbd82f65612a9f799\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"23fe7aceaf55025633fe1424777ed8770b3b94046cb827264c6be264fbc31711\"" Sep 11 00:01:31.499642 containerd[1489]: time="2025-09-11T00:01:31.499605886Z" level=info msg="StartContainer for \"23fe7aceaf55025633fe1424777ed8770b3b94046cb827264c6be264fbc31711\"" Sep 11 00:01:31.501257 containerd[1489]: time="2025-09-11T00:01:31.501222609Z" level=info msg="connecting to shim 23fe7aceaf55025633fe1424777ed8770b3b94046cb827264c6be264fbc31711" address="unix:///run/containerd/s/28f5c8ac0cf426ebc9cf9be62d441a6b6383930310d14f225526f0e685e6ab62" protocol=ttrpc version=3 Sep 11 00:01:31.521552 systemd[1]: Started cri-containerd-23fe7aceaf55025633fe1424777ed8770b3b94046cb827264c6be264fbc31711.scope - libcontainer 
container 23fe7aceaf55025633fe1424777ed8770b3b94046cb827264c6be264fbc31711. Sep 11 00:01:31.554444 containerd[1489]: time="2025-09-11T00:01:31.554402464Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-755d956888-n2t8c,Uid:d033eff9-1439-4a28-940f-0e8957961973,Namespace:tigera-operator,Attempt:0,}" Sep 11 00:01:31.556751 containerd[1489]: time="2025-09-11T00:01:31.556723796Z" level=info msg="StartContainer for \"23fe7aceaf55025633fe1424777ed8770b3b94046cb827264c6be264fbc31711\" returns successfully" Sep 11 00:01:31.577560 containerd[1489]: time="2025-09-11T00:01:31.577513773Z" level=info msg="connecting to shim 60beb17116a9aeb21f93b8adec994415355a2f7c748b0351a5efda8d4185a113" address="unix:///run/containerd/s/8164f9876e6550cb0cd485e49125417b5b758d96a9a94d0df2415a9a88e572c4" namespace=k8s.io protocol=ttrpc version=3 Sep 11 00:01:31.611517 systemd[1]: Started cri-containerd-60beb17116a9aeb21f93b8adec994415355a2f7c748b0351a5efda8d4185a113.scope - libcontainer container 60beb17116a9aeb21f93b8adec994415355a2f7c748b0351a5efda8d4185a113. 
Sep 11 00:01:31.650423 containerd[1489]: time="2025-09-11T00:01:31.650162959Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-755d956888-n2t8c,Uid:d033eff9-1439-4a28-940f-0e8957961973,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"60beb17116a9aeb21f93b8adec994415355a2f7c748b0351a5efda8d4185a113\"" Sep 11 00:01:31.651880 containerd[1489]: time="2025-09-11T00:01:31.651505688Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.6\"" Sep 11 00:01:31.789189 kubelet[2620]: E0911 00:01:31.787780 2620 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 11 00:01:31.800451 kubelet[2620]: I0911 00:01:31.800190 2620 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-b7rj5" podStartSLOduration=0.800176683 podStartE2EDuration="800.176683ms" podCreationTimestamp="2025-09-11 00:01:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-11 00:01:31.798596884 +0000 UTC m=+7.141068997" watchObservedRunningTime="2025-09-11 00:01:31.800176683 +0000 UTC m=+7.142648716" Sep 11 00:01:32.291028 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1927726577.mount: Deactivated successfully. Sep 11 00:01:33.181327 kubelet[2620]: E0911 00:01:33.181192 2620 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 11 00:01:33.568909 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2388907193.mount: Deactivated successfully. 
Sep 11 00:01:33.791466 kubelet[2620]: E0911 00:01:33.791426 2620 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 11 00:01:34.792856 kubelet[2620]: E0911 00:01:34.792816 2620 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 11 00:01:35.632431 containerd[1489]: time="2025-09-11T00:01:35.632378589Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.38.6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 11 00:01:35.632979 containerd[1489]: time="2025-09-11T00:01:35.632947207Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.38.6: active requests=0, bytes read=22152365" Sep 11 00:01:35.633730 containerd[1489]: time="2025-09-11T00:01:35.633696722Z" level=info msg="ImageCreate event name:\"sha256:dd2e197838b00861b08ae5f480dfbfb9a519722e35ced99346315722309cbe9f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 11 00:01:35.635678 containerd[1489]: time="2025-09-11T00:01:35.635648118Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:00a7a9b62f9b9a4e0856128b078539783b8352b07f707bff595cb604cc580f6e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 11 00:01:35.636286 containerd[1489]: time="2025-09-11T00:01:35.636252979Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.38.6\" with image id \"sha256:dd2e197838b00861b08ae5f480dfbfb9a519722e35ced99346315722309cbe9f\", repo tag \"quay.io/tigera/operator:v1.38.6\", repo digest \"quay.io/tigera/operator@sha256:00a7a9b62f9b9a4e0856128b078539783b8352b07f707bff595cb604cc580f6e\", size \"22148360\" in 3.984721609s" Sep 11 00:01:35.636286 containerd[1489]: time="2025-09-11T00:01:35.636284983Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.6\" returns image 
reference \"sha256:dd2e197838b00861b08ae5f480dfbfb9a519722e35ced99346315722309cbe9f\"" Sep 11 00:01:35.646585 containerd[1489]: time="2025-09-11T00:01:35.646532774Z" level=info msg="CreateContainer within sandbox \"60beb17116a9aeb21f93b8adec994415355a2f7c748b0351a5efda8d4185a113\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}" Sep 11 00:01:35.651393 containerd[1489]: time="2025-09-11T00:01:35.651170480Z" level=info msg="Container e575e7e8d16446d9bc5e275c595fa06d4b3eae1cd7b43dc1cd3ab057d20294b8: CDI devices from CRI Config.CDIDevices: []" Sep 11 00:01:35.656689 containerd[1489]: time="2025-09-11T00:01:35.656656192Z" level=info msg="CreateContainer within sandbox \"60beb17116a9aeb21f93b8adec994415355a2f7c748b0351a5efda8d4185a113\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"e575e7e8d16446d9bc5e275c595fa06d4b3eae1cd7b43dc1cd3ab057d20294b8\"" Sep 11 00:01:35.657288 containerd[1489]: time="2025-09-11T00:01:35.657253932Z" level=info msg="StartContainer for \"e575e7e8d16446d9bc5e275c595fa06d4b3eae1cd7b43dc1cd3ab057d20294b8\"" Sep 11 00:01:35.658002 containerd[1489]: time="2025-09-11T00:01:35.657978725Z" level=info msg="connecting to shim e575e7e8d16446d9bc5e275c595fa06d4b3eae1cd7b43dc1cd3ab057d20294b8" address="unix:///run/containerd/s/8164f9876e6550cb0cd485e49125417b5b758d96a9a94d0df2415a9a88e572c4" protocol=ttrpc version=3 Sep 11 00:01:35.679493 systemd[1]: Started cri-containerd-e575e7e8d16446d9bc5e275c595fa06d4b3eae1cd7b43dc1cd3ab057d20294b8.scope - libcontainer container e575e7e8d16446d9bc5e275c595fa06d4b3eae1cd7b43dc1cd3ab057d20294b8. 
Sep 11 00:01:35.707617 containerd[1489]: time="2025-09-11T00:01:35.707586517Z" level=info msg="StartContainer for \"e575e7e8d16446d9bc5e275c595fa06d4b3eae1cd7b43dc1cd3ab057d20294b8\" returns successfully" Sep 11 00:01:35.817596 kubelet[2620]: I0911 00:01:35.817532 2620 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="tigera-operator/tigera-operator-755d956888-n2t8c" podStartSLOduration=0.830017241 podStartE2EDuration="4.817518657s" podCreationTimestamp="2025-09-11 00:01:31 +0000 UTC" firstStartedPulling="2025-09-11 00:01:31.651223732 +0000 UTC m=+6.993695805" lastFinishedPulling="2025-09-11 00:01:35.638725188 +0000 UTC m=+10.981197221" observedRunningTime="2025-09-11 00:01:35.817481174 +0000 UTC m=+11.159953247" watchObservedRunningTime="2025-09-11 00:01:35.817518657 +0000 UTC m=+11.159990730" Sep 11 00:01:37.437422 kubelet[2620]: E0911 00:01:37.437018 2620 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 11 00:01:37.618703 kubelet[2620]: E0911 00:01:37.616972 2620 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 11 00:01:40.769665 sudo[1698]: pam_unix(sudo:session): session closed for user root Sep 11 00:01:40.771662 sshd[1697]: Connection closed by 10.0.0.1 port 33948 Sep 11 00:01:40.772087 sshd-session[1695]: pam_unix(sshd:session): session closed for user core Sep 11 00:01:40.775969 systemd[1]: sshd@6-10.0.0.103:22-10.0.0.1:33948.service: Deactivated successfully. Sep 11 00:01:40.779891 systemd[1]: session-7.scope: Deactivated successfully. Sep 11 00:01:40.780118 systemd[1]: session-7.scope: Consumed 5.294s CPU time, 220.7M memory peak. Sep 11 00:01:40.781081 systemd-logind[1467]: Session 7 logged out. Waiting for processes to exit. 
Sep 11 00:01:40.783276 systemd-logind[1467]: Removed session 7. Sep 11 00:01:42.541472 update_engine[1469]: I20250911 00:01:42.541394 1469 update_attempter.cc:509] Updating boot flags... Sep 11 00:01:47.416092 systemd[1]: Created slice kubepods-besteffort-pod0f690a5c_c314_47b2_8588_e2e9c1a6259e.slice - libcontainer container kubepods-besteffort-pod0f690a5c_c314_47b2_8588_e2e9c1a6259e.slice. Sep 11 00:01:47.483764 kubelet[2620]: I0911 00:01:47.483712 2620 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/0f690a5c-c314-47b2-8588-e2e9c1a6259e-tigera-ca-bundle\") pod \"calico-typha-76d69cfdc4-bg5kz\" (UID: \"0f690a5c-c314-47b2-8588-e2e9c1a6259e\") " pod="calico-system/calico-typha-76d69cfdc4-bg5kz" Sep 11 00:01:47.483764 kubelet[2620]: I0911 00:01:47.483770 2620 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/0f690a5c-c314-47b2-8588-e2e9c1a6259e-typha-certs\") pod \"calico-typha-76d69cfdc4-bg5kz\" (UID: \"0f690a5c-c314-47b2-8588-e2e9c1a6259e\") " pod="calico-system/calico-typha-76d69cfdc4-bg5kz" Sep 11 00:01:47.484161 kubelet[2620]: I0911 00:01:47.483793 2620 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-npn58\" (UniqueName: \"kubernetes.io/projected/0f690a5c-c314-47b2-8588-e2e9c1a6259e-kube-api-access-npn58\") pod \"calico-typha-76d69cfdc4-bg5kz\" (UID: \"0f690a5c-c314-47b2-8588-e2e9c1a6259e\") " pod="calico-system/calico-typha-76d69cfdc4-bg5kz" Sep 11 00:01:47.651219 systemd[1]: Created slice kubepods-besteffort-podf0cae739_c097_4166_b3b0_ba9abb6a447b.slice - libcontainer container kubepods-besteffort-podf0cae739_c097_4166_b3b0_ba9abb6a447b.slice. 
Sep 11 00:01:47.685376 kubelet[2620]: I0911 00:01:47.685172 2620 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/f0cae739-c097-4166-b3b0-ba9abb6a447b-cni-net-dir\") pod \"calico-node-fb6v6\" (UID: \"f0cae739-c097-4166-b3b0-ba9abb6a447b\") " pod="calico-system/calico-node-fb6v6" Sep 11 00:01:47.685376 kubelet[2620]: I0911 00:01:47.685216 2620 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/f0cae739-c097-4166-b3b0-ba9abb6a447b-policysync\") pod \"calico-node-fb6v6\" (UID: \"f0cae739-c097-4166-b3b0-ba9abb6a447b\") " pod="calico-system/calico-node-fb6v6" Sep 11 00:01:47.685376 kubelet[2620]: I0911 00:01:47.685235 2620 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/f0cae739-c097-4166-b3b0-ba9abb6a447b-cni-log-dir\") pod \"calico-node-fb6v6\" (UID: \"f0cae739-c097-4166-b3b0-ba9abb6a447b\") " pod="calico-system/calico-node-fb6v6" Sep 11 00:01:47.685376 kubelet[2620]: I0911 00:01:47.685251 2620 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/f0cae739-c097-4166-b3b0-ba9abb6a447b-flexvol-driver-host\") pod \"calico-node-fb6v6\" (UID: \"f0cae739-c097-4166-b3b0-ba9abb6a447b\") " pod="calico-system/calico-node-fb6v6" Sep 11 00:01:47.685376 kubelet[2620]: I0911 00:01:47.685274 2620 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/f0cae739-c097-4166-b3b0-ba9abb6a447b-tigera-ca-bundle\") pod \"calico-node-fb6v6\" (UID: \"f0cae739-c097-4166-b3b0-ba9abb6a447b\") " pod="calico-system/calico-node-fb6v6" Sep 11 00:01:47.685597 kubelet[2620]: I0911 00:01:47.685290 2620 
reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/f0cae739-c097-4166-b3b0-ba9abb6a447b-var-run-calico\") pod \"calico-node-fb6v6\" (UID: \"f0cae739-c097-4166-b3b0-ba9abb6a447b\") " pod="calico-system/calico-node-fb6v6" Sep 11 00:01:47.685597 kubelet[2620]: I0911 00:01:47.685305 2620 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/f0cae739-c097-4166-b3b0-ba9abb6a447b-cni-bin-dir\") pod \"calico-node-fb6v6\" (UID: \"f0cae739-c097-4166-b3b0-ba9abb6a447b\") " pod="calico-system/calico-node-fb6v6" Sep 11 00:01:47.685597 kubelet[2620]: I0911 00:01:47.685318 2620 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/f0cae739-c097-4166-b3b0-ba9abb6a447b-lib-modules\") pod \"calico-node-fb6v6\" (UID: \"f0cae739-c097-4166-b3b0-ba9abb6a447b\") " pod="calico-system/calico-node-fb6v6" Sep 11 00:01:47.685597 kubelet[2620]: I0911 00:01:47.685333 2620 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/f0cae739-c097-4166-b3b0-ba9abb6a447b-xtables-lock\") pod \"calico-node-fb6v6\" (UID: \"f0cae739-c097-4166-b3b0-ba9abb6a447b\") " pod="calico-system/calico-node-fb6v6" Sep 11 00:01:47.686989 kubelet[2620]: I0911 00:01:47.686883 2620 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/f0cae739-c097-4166-b3b0-ba9abb6a447b-node-certs\") pod \"calico-node-fb6v6\" (UID: \"f0cae739-c097-4166-b3b0-ba9abb6a447b\") " pod="calico-system/calico-node-fb6v6" Sep 11 00:01:47.686989 kubelet[2620]: I0911 00:01:47.686915 2620 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"kube-api-access-fvrtg\" (UniqueName: \"kubernetes.io/projected/f0cae739-c097-4166-b3b0-ba9abb6a447b-kube-api-access-fvrtg\") pod \"calico-node-fb6v6\" (UID: \"f0cae739-c097-4166-b3b0-ba9abb6a447b\") " pod="calico-system/calico-node-fb6v6" Sep 11 00:01:47.686989 kubelet[2620]: I0911 00:01:47.686937 2620 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/f0cae739-c097-4166-b3b0-ba9abb6a447b-var-lib-calico\") pod \"calico-node-fb6v6\" (UID: \"f0cae739-c097-4166-b3b0-ba9abb6a447b\") " pod="calico-system/calico-node-fb6v6" Sep 11 00:01:47.719265 kubelet[2620]: E0911 00:01:47.719230 2620 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 11 00:01:47.719944 containerd[1489]: time="2025-09-11T00:01:47.719904409Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-76d69cfdc4-bg5kz,Uid:0f690a5c-c314-47b2-8588-e2e9c1a6259e,Namespace:calico-system,Attempt:0,}" Sep 11 00:01:47.798527 kubelet[2620]: E0911 00:01:47.798444 2620 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 11 00:01:47.798527 kubelet[2620]: W0911 00:01:47.798472 2620 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 11 00:01:47.803855 kubelet[2620]: E0911 00:01:47.803697 2620 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 11 00:01:47.814272 kubelet[2620]: E0911 00:01:47.814205 2620 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 11 00:01:47.814488 kubelet[2620]: W0911 00:01:47.814413 2620 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 11 00:01:47.814488 kubelet[2620]: E0911 00:01:47.814456 2620 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 11 00:01:47.820202 containerd[1489]: time="2025-09-11T00:01:47.819804680Z" level=info msg="connecting to shim b8a8f9f9b9befbff584fae1111b24ae0eeabc126148dde8cc5e5c2def24c3119" address="unix:///run/containerd/s/153734edf207a06f8f3540f1aba7d8a388b62bfe4ada85dc88a4cb1a6f8b674c" namespace=k8s.io protocol=ttrpc version=3 Sep 11 00:01:47.890334 kubelet[2620]: E0911 00:01:47.890295 2620 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-c9nz7" podUID="3328d16f-81a3-4c76-9945-1acaf3146893" Sep 11 00:01:47.891544 systemd[1]: Started cri-containerd-b8a8f9f9b9befbff584fae1111b24ae0eeabc126148dde8cc5e5c2def24c3119.scope - libcontainer container b8a8f9f9b9befbff584fae1111b24ae0eeabc126148dde8cc5e5c2def24c3119. 
Sep 11 00:01:47.953849 containerd[1489]: time="2025-09-11T00:01:47.953747922Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-fb6v6,Uid:f0cae739-c097-4166-b3b0-ba9abb6a447b,Namespace:calico-system,Attempt:0,}" Sep 11 00:01:47.972739 kubelet[2620]: E0911 00:01:47.972692 2620 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 11 00:01:47.972999 kubelet[2620]: W0911 00:01:47.972716 2620 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 11 00:01:47.972999 kubelet[2620]: E0911 00:01:47.972839 2620 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 11 00:01:47.973281 kubelet[2620]: E0911 00:01:47.973265 2620 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 11 00:01:47.973497 kubelet[2620]: W0911 00:01:47.973314 2620 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 11 00:01:47.973497 kubelet[2620]: E0911 00:01:47.973370 2620 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 11 00:01:47.973801 kubelet[2620]: E0911 00:01:47.973761 2620 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 11 00:01:47.974004 kubelet[2620]: W0911 00:01:47.973787 2620 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 11 00:01:47.974004 kubelet[2620]: E0911 00:01:47.973954 2620 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 11 00:01:47.975490 kubelet[2620]: E0911 00:01:47.975093 2620 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 11 00:01:47.975490 kubelet[2620]: W0911 00:01:47.975320 2620 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 11 00:01:47.975490 kubelet[2620]: E0911 00:01:47.975356 2620 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 11 00:01:47.976486 kubelet[2620]: E0911 00:01:47.976333 2620 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 11 00:01:47.976486 kubelet[2620]: W0911 00:01:47.976377 2620 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 11 00:01:47.976486 kubelet[2620]: E0911 00:01:47.976391 2620 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 11 00:01:47.976662 kubelet[2620]: E0911 00:01:47.976650 2620 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 11 00:01:47.976721 kubelet[2620]: W0911 00:01:47.976710 2620 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 11 00:01:47.976836 kubelet[2620]: E0911 00:01:47.976823 2620 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 11 00:01:47.977536 kubelet[2620]: E0911 00:01:47.977369 2620 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 11 00:01:47.977536 kubelet[2620]: W0911 00:01:47.977384 2620 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 11 00:01:47.977536 kubelet[2620]: E0911 00:01:47.977401 2620 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 11 00:01:47.978167 kubelet[2620]: E0911 00:01:47.978098 2620 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 11 00:01:47.978167 kubelet[2620]: W0911 00:01:47.978111 2620 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 11 00:01:47.978167 kubelet[2620]: E0911 00:01:47.978124 2620 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 11 00:01:47.978651 kubelet[2620]: E0911 00:01:47.978635 2620 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 11 00:01:47.978720 containerd[1489]: time="2025-09-11T00:01:47.978662156Z" level=info msg="connecting to shim 689a928f0fb0a6999d1e871ec18802351fd4c000021d92dfd9bd2a3ba1e197bd" address="unix:///run/containerd/s/f62426b488ebbddf386fd4da75624a08a4f38fa90750a70cdb70f8bac024f135" namespace=k8s.io protocol=ttrpc version=3 Sep 11 00:01:47.978866 kubelet[2620]: W0911 00:01:47.978764 2620 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 11 00:01:47.978866 kubelet[2620]: E0911 00:01:47.978790 2620 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 11 00:01:47.979587 kubelet[2620]: E0911 00:01:47.979411 2620 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 11 00:01:47.979587 kubelet[2620]: W0911 00:01:47.979429 2620 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 11 00:01:47.979587 kubelet[2620]: E0911 00:01:47.979441 2620 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 11 00:01:47.980306 kubelet[2620]: E0911 00:01:47.980287 2620 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 11 00:01:47.980537 kubelet[2620]: W0911 00:01:47.980407 2620 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 11 00:01:47.980537 kubelet[2620]: E0911 00:01:47.980427 2620 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 11 00:01:47.981427 kubelet[2620]: E0911 00:01:47.981404 2620 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 11 00:01:47.981667 kubelet[2620]: W0911 00:01:47.981524 2620 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 11 00:01:47.981667 kubelet[2620]: E0911 00:01:47.981540 2620 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 11 00:01:47.981803 kubelet[2620]: E0911 00:01:47.981789 2620 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 11 00:01:47.981866 kubelet[2620]: W0911 00:01:47.981855 2620 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 11 00:01:47.981917 kubelet[2620]: E0911 00:01:47.981907 2620 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 11 00:01:47.982117 kubelet[2620]: E0911 00:01:47.982103 2620 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 11 00:01:47.982186 kubelet[2620]: W0911 00:01:47.982175 2620 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 11 00:01:47.982237 kubelet[2620]: E0911 00:01:47.982227 2620 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 11 00:01:47.982499 kubelet[2620]: E0911 00:01:47.982414 2620 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 11 00:01:47.982499 kubelet[2620]: W0911 00:01:47.982425 2620 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 11 00:01:47.982499 kubelet[2620]: E0911 00:01:47.982435 2620 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 11 00:01:47.982658 kubelet[2620]: E0911 00:01:47.982647 2620 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 11 00:01:47.982824 kubelet[2620]: W0911 00:01:47.982703 2620 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 11 00:01:47.982824 kubelet[2620]: E0911 00:01:47.982716 2620 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 11 00:01:47.982956 kubelet[2620]: E0911 00:01:47.982944 2620 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 11 00:01:47.983007 kubelet[2620]: W0911 00:01:47.982996 2620 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 11 00:01:47.983061 kubelet[2620]: E0911 00:01:47.983051 2620 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 11 00:01:47.983279 kubelet[2620]: E0911 00:01:47.983267 2620 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 11 00:01:47.983445 kubelet[2620]: W0911 00:01:47.983339 2620 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 11 00:01:47.983445 kubelet[2620]: E0911 00:01:47.983363 2620 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 11 00:01:47.983574 kubelet[2620]: E0911 00:01:47.983561 2620 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 11 00:01:47.983629 kubelet[2620]: W0911 00:01:47.983619 2620 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 11 00:01:47.983682 kubelet[2620]: E0911 00:01:47.983672 2620 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 11 00:01:47.983955 kubelet[2620]: E0911 00:01:47.983862 2620 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 11 00:01:47.983955 kubelet[2620]: W0911 00:01:47.983875 2620 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 11 00:01:47.983955 kubelet[2620]: E0911 00:01:47.983885 2620 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 11 00:01:47.988858 kubelet[2620]: E0911 00:01:47.988844 2620 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 11 00:01:47.988945 kubelet[2620]: W0911 00:01:47.988933 2620 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 11 00:01:47.989000 kubelet[2620]: E0911 00:01:47.988990 2620 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 11 00:01:47.989075 kubelet[2620]: I0911 00:01:47.989063 2620 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/3328d16f-81a3-4c76-9945-1acaf3146893-socket-dir\") pod \"csi-node-driver-c9nz7\" (UID: \"3328d16f-81a3-4c76-9945-1acaf3146893\") " pod="calico-system/csi-node-driver-c9nz7" Sep 11 00:01:47.989325 kubelet[2620]: E0911 00:01:47.989311 2620 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 11 00:01:47.989415 kubelet[2620]: W0911 00:01:47.989402 2620 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 11 00:01:47.989485 kubelet[2620]: E0911 00:01:47.989473 2620 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 11 00:01:47.989554 kubelet[2620]: I0911 00:01:47.989542 2620 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/3328d16f-81a3-4c76-9945-1acaf3146893-varrun\") pod \"csi-node-driver-c9nz7\" (UID: \"3328d16f-81a3-4c76-9945-1acaf3146893\") " pod="calico-system/csi-node-driver-c9nz7" Sep 11 00:01:47.989768 kubelet[2620]: E0911 00:01:47.989759 2620 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 11 00:01:47.989850 kubelet[2620]: W0911 00:01:47.989837 2620 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 11 00:01:47.989897 kubelet[2620]: E0911 00:01:47.989888 2620 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 11 00:01:47.990230 kubelet[2620]: E0911 00:01:47.990150 2620 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 11 00:01:47.990230 kubelet[2620]: W0911 00:01:47.990162 2620 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 11 00:01:47.990230 kubelet[2620]: E0911 00:01:47.990171 2620 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 11 00:01:47.990650 kubelet[2620]: E0911 00:01:47.990570 2620 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 11 00:01:47.990650 kubelet[2620]: W0911 00:01:47.990582 2620 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 11 00:01:47.990650 kubelet[2620]: E0911 00:01:47.990593 2620 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 11 00:01:47.991040 kubelet[2620]: E0911 00:01:47.991006 2620 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 11 00:01:47.991040 kubelet[2620]: W0911 00:01:47.991019 2620 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 11 00:01:47.991040 kubelet[2620]: E0911 00:01:47.991029 2620 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 11 00:01:47.991630 kubelet[2620]: E0911 00:01:47.991484 2620 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 11 00:01:47.991630 kubelet[2620]: W0911 00:01:47.991498 2620 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 11 00:01:47.991630 kubelet[2620]: E0911 00:01:47.991510 2620 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 11 00:01:47.991630 kubelet[2620]: I0911 00:01:47.991531 2620 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/3328d16f-81a3-4c76-9945-1acaf3146893-registration-dir\") pod \"csi-node-driver-c9nz7\" (UID: \"3328d16f-81a3-4c76-9945-1acaf3146893\") " pod="calico-system/csi-node-driver-c9nz7" Sep 11 00:01:47.991846 kubelet[2620]: E0911 00:01:47.991808 2620 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 11 00:01:47.991940 kubelet[2620]: W0911 00:01:47.991881 2620 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 11 00:01:47.991940 kubelet[2620]: E0911 00:01:47.991895 2620 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 11 00:01:47.992535 kubelet[2620]: I0911 00:01:47.992474 2620 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/3328d16f-81a3-4c76-9945-1acaf3146893-kubelet-dir\") pod \"csi-node-driver-c9nz7\" (UID: \"3328d16f-81a3-4c76-9945-1acaf3146893\") " pod="calico-system/csi-node-driver-c9nz7" Sep 11 00:01:47.992843 kubelet[2620]: E0911 00:01:47.992767 2620 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 11 00:01:47.992843 kubelet[2620]: W0911 00:01:47.992791 2620 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 11 00:01:47.992843 kubelet[2620]: E0911 00:01:47.992803 2620 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 11 00:01:47.993425 kubelet[2620]: E0911 00:01:47.993171 2620 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 11 00:01:47.993425 kubelet[2620]: W0911 00:01:47.993313 2620 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 11 00:01:47.993425 kubelet[2620]: E0911 00:01:47.993325 2620 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 11 00:01:47.993763 kubelet[2620]: E0911 00:01:47.993746 2620 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 11 00:01:47.993949 kubelet[2620]: W0911 00:01:47.993848 2620 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 11 00:01:47.993949 kubelet[2620]: E0911 00:01:47.993864 2620 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 11 00:01:47.994471 kubelet[2620]: E0911 00:01:47.994433 2620 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 11 00:01:47.994471 kubelet[2620]: W0911 00:01:47.994447 2620 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 11 00:01:47.994471 kubelet[2620]: E0911 00:01:47.994458 2620 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 11 00:01:47.994945 kubelet[2620]: E0911 00:01:47.994933 2620 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 11 00:01:47.995022 kubelet[2620]: W0911 00:01:47.995011 2620 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 11 00:01:47.995158 kubelet[2620]: E0911 00:01:47.995068 2620 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 11 00:01:47.995158 kubelet[2620]: I0911 00:01:47.995087 2620 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g2jr2\" (UniqueName: \"kubernetes.io/projected/3328d16f-81a3-4c76-9945-1acaf3146893-kube-api-access-g2jr2\") pod \"csi-node-driver-c9nz7\" (UID: \"3328d16f-81a3-4c76-9945-1acaf3146893\") " pod="calico-system/csi-node-driver-c9nz7" Sep 11 00:01:47.995549 kubelet[2620]: E0911 00:01:47.995473 2620 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 11 00:01:47.995549 kubelet[2620]: W0911 00:01:47.995503 2620 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 11 00:01:47.995549 kubelet[2620]: E0911 00:01:47.995514 2620 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 11 00:01:47.995833 kubelet[2620]: E0911 00:01:47.995822 2620 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 11 00:01:47.995918 kubelet[2620]: W0911 00:01:47.995885 2620 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 11 00:01:47.995918 kubelet[2620]: E0911 00:01:47.995899 2620 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 11 00:01:48.025604 containerd[1489]: time="2025-09-11T00:01:48.025557404Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-76d69cfdc4-bg5kz,Uid:0f690a5c-c314-47b2-8588-e2e9c1a6259e,Namespace:calico-system,Attempt:0,} returns sandbox id \"b8a8f9f9b9befbff584fae1111b24ae0eeabc126148dde8cc5e5c2def24c3119\"" Sep 11 00:01:48.031656 kubelet[2620]: E0911 00:01:48.031588 2620 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 11 00:01:48.033557 containerd[1489]: time="2025-09-11T00:01:48.033291485Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.3\"" Sep 11 00:01:48.036533 systemd[1]: Started cri-containerd-689a928f0fb0a6999d1e871ec18802351fd4c000021d92dfd9bd2a3ba1e197bd.scope - libcontainer container 689a928f0fb0a6999d1e871ec18802351fd4c000021d92dfd9bd2a3ba1e197bd. 
Sep 11 00:01:48.087929 containerd[1489]: time="2025-09-11T00:01:48.087889318Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-fb6v6,Uid:f0cae739-c097-4166-b3b0-ba9abb6a447b,Namespace:calico-system,Attempt:0,} returns sandbox id \"689a928f0fb0a6999d1e871ec18802351fd4c000021d92dfd9bd2a3ba1e197bd\"" Sep 11 00:01:48.095667 kubelet[2620]: E0911 00:01:48.095642 2620 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 11 00:01:48.095667 kubelet[2620]: W0911 00:01:48.095662 2620 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 11 00:01:48.095821 kubelet[2620]: E0911 00:01:48.095681 2620 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 11 00:01:48.095909 kubelet[2620]: E0911 00:01:48.095892 2620 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 11 00:01:48.095909 kubelet[2620]: W0911 00:01:48.095904 2620 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 11 00:01:48.096199 kubelet[2620]: E0911 00:01:48.095912 2620 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 11 00:01:48.096199 kubelet[2620]: E0911 00:01:48.096107 2620 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 11 00:01:48.096199 kubelet[2620]: W0911 00:01:48.096123 2620 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 11 00:01:48.096199 kubelet[2620]: E0911 00:01:48.096137 2620 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 11 00:01:48.096624 kubelet[2620]: E0911 00:01:48.096513 2620 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 11 00:01:48.096624 kubelet[2620]: W0911 00:01:48.096527 2620 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 11 00:01:48.096624 kubelet[2620]: E0911 00:01:48.096538 2620 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 11 00:01:48.096765 kubelet[2620]: E0911 00:01:48.096755 2620 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 11 00:01:48.096847 kubelet[2620]: W0911 00:01:48.096835 2620 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 11 00:01:48.096955 kubelet[2620]: E0911 00:01:48.096942 2620 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 11 00:01:48.097492 kubelet[2620]: E0911 00:01:48.097471 2620 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 11 00:01:48.097492 kubelet[2620]: W0911 00:01:48.097487 2620 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 11 00:01:48.097615 kubelet[2620]: E0911 00:01:48.097500 2620 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 11 00:01:48.097772 kubelet[2620]: E0911 00:01:48.097749 2620 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 11 00:01:48.097772 kubelet[2620]: W0911 00:01:48.097765 2620 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 11 00:01:48.098008 kubelet[2620]: E0911 00:01:48.097787 2620 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 11 00:01:48.098008 kubelet[2620]: E0911 00:01:48.097998 2620 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 11 00:01:48.098088 kubelet[2620]: W0911 00:01:48.098009 2620 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 11 00:01:48.098088 kubelet[2620]: E0911 00:01:48.098019 2620 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 11 00:01:48.098166 kubelet[2620]: E0911 00:01:48.098147 2620 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 11 00:01:48.098166 kubelet[2620]: W0911 00:01:48.098154 2620 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 11 00:01:48.098166 kubelet[2620]: E0911 00:01:48.098160 2620 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 11 00:01:48.098374 kubelet[2620]: E0911 00:01:48.098337 2620 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 11 00:01:48.098374 kubelet[2620]: W0911 00:01:48.098353 2620 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 11 00:01:48.098374 kubelet[2620]: E0911 00:01:48.098362 2620 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 11 00:01:48.098551 kubelet[2620]: E0911 00:01:48.098539 2620 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 11 00:01:48.098551 kubelet[2620]: W0911 00:01:48.098550 2620 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 11 00:01:48.098628 kubelet[2620]: E0911 00:01:48.098559 2620 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 11 00:01:48.099164 kubelet[2620]: E0911 00:01:48.099145 2620 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 11 00:01:48.099164 kubelet[2620]: W0911 00:01:48.099161 2620 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 11 00:01:48.099546 kubelet[2620]: E0911 00:01:48.099173 2620 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 11 00:01:48.099546 kubelet[2620]: E0911 00:01:48.099379 2620 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 11 00:01:48.099546 kubelet[2620]: W0911 00:01:48.099391 2620 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 11 00:01:48.099546 kubelet[2620]: E0911 00:01:48.099403 2620 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 11 00:01:48.099791 kubelet[2620]: E0911 00:01:48.099570 2620 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 11 00:01:48.099791 kubelet[2620]: W0911 00:01:48.099584 2620 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 11 00:01:48.099791 kubelet[2620]: E0911 00:01:48.099593 2620 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 11 00:01:48.100011 kubelet[2620]: E0911 00:01:48.099995 2620 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 11 00:01:48.100011 kubelet[2620]: W0911 00:01:48.100009 2620 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 11 00:01:48.100118 kubelet[2620]: E0911 00:01:48.100023 2620 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 11 00:01:48.100271 kubelet[2620]: E0911 00:01:48.100242 2620 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 11 00:01:48.100271 kubelet[2620]: W0911 00:01:48.100262 2620 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 11 00:01:48.100329 kubelet[2620]: E0911 00:01:48.100274 2620 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 11 00:01:48.100633 kubelet[2620]: E0911 00:01:48.100612 2620 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 11 00:01:48.100633 kubelet[2620]: W0911 00:01:48.100627 2620 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 11 00:01:48.101009 kubelet[2620]: E0911 00:01:48.100638 2620 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 11 00:01:48.101009 kubelet[2620]: E0911 00:01:48.100805 2620 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 11 00:01:48.101009 kubelet[2620]: W0911 00:01:48.100814 2620 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 11 00:01:48.101009 kubelet[2620]: E0911 00:01:48.100823 2620 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 11 00:01:48.101009 kubelet[2620]: E0911 00:01:48.100979 2620 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 11 00:01:48.101009 kubelet[2620]: W0911 00:01:48.100988 2620 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 11 00:01:48.101009 kubelet[2620]: E0911 00:01:48.100995 2620 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 11 00:01:48.101149 kubelet[2620]: E0911 00:01:48.101128 2620 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 11 00:01:48.101149 kubelet[2620]: W0911 00:01:48.101136 2620 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 11 00:01:48.101149 kubelet[2620]: E0911 00:01:48.101143 2620 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 11 00:01:48.101380 kubelet[2620]: E0911 00:01:48.101266 2620 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 11 00:01:48.101380 kubelet[2620]: W0911 00:01:48.101275 2620 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 11 00:01:48.101380 kubelet[2620]: E0911 00:01:48.101282 2620 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 11 00:01:48.101722 kubelet[2620]: E0911 00:01:48.101431 2620 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 11 00:01:48.101722 kubelet[2620]: W0911 00:01:48.101440 2620 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 11 00:01:48.101722 kubelet[2620]: E0911 00:01:48.101447 2620 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 11 00:01:48.101928 kubelet[2620]: E0911 00:01:48.101817 2620 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 11 00:01:48.101928 kubelet[2620]: W0911 00:01:48.101834 2620 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 11 00:01:48.101928 kubelet[2620]: E0911 00:01:48.101847 2620 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 11 00:01:48.102160 kubelet[2620]: E0911 00:01:48.102068 2620 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 11 00:01:48.102160 kubelet[2620]: W0911 00:01:48.102079 2620 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 11 00:01:48.102160 kubelet[2620]: E0911 00:01:48.102088 2620 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 11 00:01:48.102417 kubelet[2620]: E0911 00:01:48.102403 2620 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 11 00:01:48.102528 kubelet[2620]: W0911 00:01:48.102485 2620 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 11 00:01:48.102528 kubelet[2620]: E0911 00:01:48.102504 2620 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 11 00:01:48.121461 kubelet[2620]: E0911 00:01:48.121433 2620 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 11 00:01:48.121461 kubelet[2620]: W0911 00:01:48.121453 2620 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 11 00:01:48.121546 kubelet[2620]: E0911 00:01:48.121471 2620 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 11 00:01:49.077495 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount702182093.mount: Deactivated successfully. 
Sep 11 00:01:49.641432 containerd[1489]: time="2025-09-11T00:01:49.641379449Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 11 00:01:49.642446 containerd[1489]: time="2025-09-11T00:01:49.642418941Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.30.3: active requests=0, bytes read=33105775" Sep 11 00:01:49.643360 containerd[1489]: time="2025-09-11T00:01:49.643311465Z" level=info msg="ImageCreate event name:\"sha256:6a1496fdc48cc0b9ab3c10aef777497484efac5df9efbfbbdf9775e9583645cb\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 11 00:01:49.645434 containerd[1489]: time="2025-09-11T00:01:49.645406809Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:f4a3d61ffda9c98a53adeb412c5af404ca3727a3cc2d0b4ef28d197bdd47ecaa\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 11 00:01:49.646474 containerd[1489]: time="2025-09-11T00:01:49.646032600Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.30.3\" with image id \"sha256:6a1496fdc48cc0b9ab3c10aef777497484efac5df9efbfbbdf9775e9583645cb\", repo tag \"ghcr.io/flatcar/calico/typha:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:f4a3d61ffda9c98a53adeb412c5af404ca3727a3cc2d0b4ef28d197bdd47ecaa\", size \"33105629\" in 1.612476621s" Sep 11 00:01:49.646474 containerd[1489]: time="2025-09-11T00:01:49.646068842Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.3\" returns image reference \"sha256:6a1496fdc48cc0b9ab3c10aef777497484efac5df9efbfbbdf9775e9583645cb\"" Sep 11 00:01:49.647549 containerd[1489]: time="2025-09-11T00:01:49.647522234Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.3\"" Sep 11 00:01:49.656325 containerd[1489]: time="2025-09-11T00:01:49.656287028Z" level=info msg="CreateContainer within sandbox \"b8a8f9f9b9befbff584fae1111b24ae0eeabc126148dde8cc5e5c2def24c3119\" for 
container &ContainerMetadata{Name:calico-typha,Attempt:0,}" Sep 11 00:01:49.661701 containerd[1489]: time="2025-09-11T00:01:49.661670615Z" level=info msg="Container 515e242986bba1304a58e770b13f7a2d01fab4ef454ebe5198598a39e0e5782d: CDI devices from CRI Config.CDIDevices: []" Sep 11 00:01:49.670598 containerd[1489]: time="2025-09-11T00:01:49.670553016Z" level=info msg="CreateContainer within sandbox \"b8a8f9f9b9befbff584fae1111b24ae0eeabc126148dde8cc5e5c2def24c3119\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"515e242986bba1304a58e770b13f7a2d01fab4ef454ebe5198598a39e0e5782d\"" Sep 11 00:01:49.671237 containerd[1489]: time="2025-09-11T00:01:49.670971996Z" level=info msg="StartContainer for \"515e242986bba1304a58e770b13f7a2d01fab4ef454ebe5198598a39e0e5782d\"" Sep 11 00:01:49.672204 containerd[1489]: time="2025-09-11T00:01:49.672174616Z" level=info msg="connecting to shim 515e242986bba1304a58e770b13f7a2d01fab4ef454ebe5198598a39e0e5782d" address="unix:///run/containerd/s/153734edf207a06f8f3540f1aba7d8a388b62bfe4ada85dc88a4cb1a6f8b674c" protocol=ttrpc version=3 Sep 11 00:01:49.696505 systemd[1]: Started cri-containerd-515e242986bba1304a58e770b13f7a2d01fab4ef454ebe5198598a39e0e5782d.scope - libcontainer container 515e242986bba1304a58e770b13f7a2d01fab4ef454ebe5198598a39e0e5782d. 
Sep 11 00:01:49.731786 containerd[1489]: time="2025-09-11T00:01:49.731751569Z" level=info msg="StartContainer for \"515e242986bba1304a58e770b13f7a2d01fab4ef454ebe5198598a39e0e5782d\" returns successfully" Sep 11 00:01:49.764042 kubelet[2620]: E0911 00:01:49.764002 2620 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-c9nz7" podUID="3328d16f-81a3-4c76-9945-1acaf3146893" Sep 11 00:01:49.833763 kubelet[2620]: E0911 00:01:49.833728 2620 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 11 00:01:49.849776 kubelet[2620]: I0911 00:01:49.849690 2620 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-typha-76d69cfdc4-bg5kz" podStartSLOduration=1.2353065380000001 podStartE2EDuration="2.849674775s" podCreationTimestamp="2025-09-11 00:01:47 +0000 UTC" firstStartedPulling="2025-09-11 00:01:48.03241988 +0000 UTC m=+23.374891913" lastFinishedPulling="2025-09-11 00:01:49.646788077 +0000 UTC m=+24.989260150" observedRunningTime="2025-09-11 00:01:49.849240594 +0000 UTC m=+25.191712667" watchObservedRunningTime="2025-09-11 00:01:49.849674775 +0000 UTC m=+25.192146808" Sep 11 00:01:49.897911 kubelet[2620]: E0911 00:01:49.897722 2620 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 11 00:01:49.897911 kubelet[2620]: W0911 00:01:49.897748 2620 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 11 00:01:49.897911 kubelet[2620]: E0911 00:01:49.897770 2620 plugins.go:703] "Error dynamically 
probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 11 00:01:49.898424 kubelet[2620]: E0911 00:01:49.898013 2620 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 11 00:01:49.898424 kubelet[2620]: W0911 00:01:49.898024 2620 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 11 00:01:49.898424 kubelet[2620]: E0911 00:01:49.898066 2620 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 11 00:01:49.898424 kubelet[2620]: E0911 00:01:49.898276 2620 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 11 00:01:49.898424 kubelet[2620]: W0911 00:01:49.898287 2620 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 11 00:01:49.898424 kubelet[2620]: E0911 00:01:49.898299 2620 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 11 00:01:49.899598 kubelet[2620]: E0911 00:01:49.899570 2620 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 11 00:01:49.899598 kubelet[2620]: W0911 00:01:49.899591 2620 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 11 00:01:49.899598 kubelet[2620]: E0911 00:01:49.899604 2620 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 11 00:01:49.899834 kubelet[2620]: E0911 00:01:49.899788 2620 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 11 00:01:49.899834 kubelet[2620]: W0911 00:01:49.899804 2620 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 11 00:01:49.899834 kubelet[2620]: E0911 00:01:49.899812 2620 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 11 00:01:49.900217 kubelet[2620]: E0911 00:01:49.900006 2620 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 11 00:01:49.900217 kubelet[2620]: W0911 00:01:49.900019 2620 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 11 00:01:49.900217 kubelet[2620]: E0911 00:01:49.900029 2620 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 11 00:01:49.900217 kubelet[2620]: E0911 00:01:49.900174 2620 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 11 00:01:49.900217 kubelet[2620]: W0911 00:01:49.900181 2620 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 11 00:01:49.900217 kubelet[2620]: E0911 00:01:49.900189 2620 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 11 00:01:49.900565 kubelet[2620]: E0911 00:01:49.900306 2620 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 11 00:01:49.900565 kubelet[2620]: W0911 00:01:49.900312 2620 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 11 00:01:49.900565 kubelet[2620]: E0911 00:01:49.900320 2620 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 11 00:01:49.901456 kubelet[2620]: E0911 00:01:49.901429 2620 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 11 00:01:49.901456 kubelet[2620]: W0911 00:01:49.901449 2620 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 11 00:01:49.901456 kubelet[2620]: E0911 00:01:49.901462 2620 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 11 00:01:49.902333 kubelet[2620]: E0911 00:01:49.901667 2620 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 11 00:01:49.902333 kubelet[2620]: W0911 00:01:49.901683 2620 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 11 00:01:49.902333 kubelet[2620]: E0911 00:01:49.901693 2620 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 11 00:01:49.902333 kubelet[2620]: E0911 00:01:49.901847 2620 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 11 00:01:49.902333 kubelet[2620]: W0911 00:01:49.901855 2620 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 11 00:01:49.902333 kubelet[2620]: E0911 00:01:49.901873 2620 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 11 00:01:49.902333 kubelet[2620]: E0911 00:01:49.902030 2620 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 11 00:01:49.902333 kubelet[2620]: W0911 00:01:49.902038 2620 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 11 00:01:49.902333 kubelet[2620]: E0911 00:01:49.902046 2620 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 11 00:01:49.902655 kubelet[2620]: E0911 00:01:49.902635 2620 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 11 00:01:49.902655 kubelet[2620]: W0911 00:01:49.902647 2620 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 11 00:01:49.902715 kubelet[2620]: E0911 00:01:49.902660 2620 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 11 00:01:49.902896 kubelet[2620]: E0911 00:01:49.902857 2620 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 11 00:01:49.902896 kubelet[2620]: W0911 00:01:49.902894 2620 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 11 00:01:49.902969 kubelet[2620]: E0911 00:01:49.902906 2620 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 11 00:01:49.903377 kubelet[2620]: E0911 00:01:49.903299 2620 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 11 00:01:49.903377 kubelet[2620]: W0911 00:01:49.903318 2620 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 11 00:01:49.903377 kubelet[2620]: E0911 00:01:49.903329 2620 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 11 00:01:49.915364 kubelet[2620]: E0911 00:01:49.915316 2620 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 11 00:01:49.915364 kubelet[2620]: W0911 00:01:49.915354 2620 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 11 00:01:49.915364 kubelet[2620]: E0911 00:01:49.915373 2620 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 11 00:01:49.915722 kubelet[2620]: E0911 00:01:49.915699 2620 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 11 00:01:49.915722 kubelet[2620]: W0911 00:01:49.915714 2620 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 11 00:01:49.915722 kubelet[2620]: E0911 00:01:49.915725 2620 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 11 00:01:49.916520 kubelet[2620]: E0911 00:01:49.916499 2620 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 11 00:01:49.916520 kubelet[2620]: W0911 00:01:49.916516 2620 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 11 00:01:49.916622 kubelet[2620]: E0911 00:01:49.916528 2620 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 11 00:01:49.916719 kubelet[2620]: E0911 00:01:49.916692 2620 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 11 00:01:49.916719 kubelet[2620]: W0911 00:01:49.916703 2620 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 11 00:01:49.916719 kubelet[2620]: E0911 00:01:49.916712 2620 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 11 00:01:49.916950 kubelet[2620]: E0911 00:01:49.916933 2620 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 11 00:01:49.916950 kubelet[2620]: W0911 00:01:49.916948 2620 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 11 00:01:49.917020 kubelet[2620]: E0911 00:01:49.916959 2620 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 11 00:01:49.917766 kubelet[2620]: E0911 00:01:49.917752 2620 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 11 00:01:49.917766 kubelet[2620]: W0911 00:01:49.917766 2620 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 11 00:01:49.917829 kubelet[2620]: E0911 00:01:49.917778 2620 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 11 00:01:49.917991 kubelet[2620]: E0911 00:01:49.917980 2620 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 11 00:01:49.917991 kubelet[2620]: W0911 00:01:49.917991 2620 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 11 00:01:49.918054 kubelet[2620]: E0911 00:01:49.918001 2620 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 11 00:01:49.918172 kubelet[2620]: E0911 00:01:49.918161 2620 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 11 00:01:49.918202 kubelet[2620]: W0911 00:01:49.918172 2620 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 11 00:01:49.918202 kubelet[2620]: E0911 00:01:49.918180 2620 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 11 00:01:49.918473 kubelet[2620]: E0911 00:01:49.918454 2620 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 11 00:01:49.918506 kubelet[2620]: W0911 00:01:49.918473 2620 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 11 00:01:49.918549 kubelet[2620]: E0911 00:01:49.918532 2620 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 11 00:01:49.918750 kubelet[2620]: E0911 00:01:49.918735 2620 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 11 00:01:49.918750 kubelet[2620]: W0911 00:01:49.918749 2620 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 11 00:01:49.918809 kubelet[2620]: E0911 00:01:49.918760 2620 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 11 00:01:49.919305 kubelet[2620]: E0911 00:01:49.919287 2620 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 11 00:01:49.919305 kubelet[2620]: W0911 00:01:49.919303 2620 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 11 00:01:49.919406 kubelet[2620]: E0911 00:01:49.919316 2620 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 11 00:01:49.919762 kubelet[2620]: E0911 00:01:49.919635 2620 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 11 00:01:49.919762 kubelet[2620]: W0911 00:01:49.919650 2620 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 11 00:01:49.919762 kubelet[2620]: E0911 00:01:49.919662 2620 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 11 00:01:49.920545 kubelet[2620]: E0911 00:01:49.920523 2620 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 11 00:01:49.920763 kubelet[2620]: W0911 00:01:49.920628 2620 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 11 00:01:49.920763 kubelet[2620]: E0911 00:01:49.920656 2620 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 11 00:01:49.921473 kubelet[2620]: E0911 00:01:49.921436 2620 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 11 00:01:49.921565 kubelet[2620]: W0911 00:01:49.921546 2620 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 11 00:01:49.921651 kubelet[2620]: E0911 00:01:49.921638 2620 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 11 00:01:49.921982 kubelet[2620]: E0911 00:01:49.921923 2620 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 11 00:01:49.921982 kubelet[2620]: W0911 00:01:49.921935 2620 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 11 00:01:49.921982 kubelet[2620]: E0911 00:01:49.921945 2620 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 11 00:01:49.922333 kubelet[2620]: E0911 00:01:49.922313 2620 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 11 00:01:49.922333 kubelet[2620]: W0911 00:01:49.922332 2620 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 11 00:01:49.922412 kubelet[2620]: E0911 00:01:49.922352 2620 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 11 00:01:49.922522 kubelet[2620]: E0911 00:01:49.922510 2620 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 11 00:01:49.922522 kubelet[2620]: W0911 00:01:49.922520 2620 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 11 00:01:49.922589 kubelet[2620]: E0911 00:01:49.922529 2620 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 11 00:01:49.923293 kubelet[2620]: E0911 00:01:49.923201 2620 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 11 00:01:49.923293 kubelet[2620]: W0911 00:01:49.923217 2620 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 11 00:01:49.923293 kubelet[2620]: E0911 00:01:49.923229 2620 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 11 00:01:50.834892 kubelet[2620]: I0911 00:01:50.834865 2620 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Sep 11 00:01:50.835323 kubelet[2620]: E0911 00:01:50.835250 2620 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 11 00:01:50.907786 kubelet[2620]: E0911 00:01:50.907756 2620 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 11 00:01:50.907786 kubelet[2620]: W0911 00:01:50.907780 2620 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 11 00:01:50.907947 kubelet[2620]: E0911 00:01:50.907799 2620 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 11 00:01:50.908028 kubelet[2620]: E0911 00:01:50.908011 2620 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 11 00:01:50.908028 kubelet[2620]: W0911 00:01:50.908025 2620 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 11 00:01:50.908113 kubelet[2620]: E0911 00:01:50.908034 2620 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 11 00:01:50.908217 kubelet[2620]: E0911 00:01:50.908206 2620 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 11 00:01:50.908217 kubelet[2620]: W0911 00:01:50.908216 2620 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 11 00:01:50.908266 kubelet[2620]: E0911 00:01:50.908224 2620 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 11 00:01:50.908388 kubelet[2620]: E0911 00:01:50.908376 2620 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 11 00:01:50.908388 kubelet[2620]: W0911 00:01:50.908387 2620 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 11 00:01:50.908464 kubelet[2620]: E0911 00:01:50.908395 2620 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 11 00:01:50.908544 kubelet[2620]: E0911 00:01:50.908531 2620 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 11 00:01:50.908544 kubelet[2620]: W0911 00:01:50.908542 2620 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 11 00:01:50.908651 kubelet[2620]: E0911 00:01:50.908550 2620 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 11 00:01:50.908689 kubelet[2620]: E0911 00:01:50.908677 2620 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 11 00:01:50.908710 kubelet[2620]: W0911 00:01:50.908691 2620 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 11 00:01:50.908710 kubelet[2620]: E0911 00:01:50.908699 2620 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 11 00:01:50.908837 kubelet[2620]: E0911 00:01:50.908825 2620 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 11 00:01:50.908837 kubelet[2620]: W0911 00:01:50.908836 2620 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 11 00:01:50.908884 kubelet[2620]: E0911 00:01:50.908843 2620 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 11 00:01:50.909001 kubelet[2620]: E0911 00:01:50.908977 2620 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 11 00:01:50.909001 kubelet[2620]: W0911 00:01:50.908988 2620 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 11 00:01:50.909001 kubelet[2620]: E0911 00:01:50.908995 2620 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 11 00:01:50.909143 kubelet[2620]: E0911 00:01:50.909128 2620 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 11 00:01:50.909143 kubelet[2620]: W0911 00:01:50.909139 2620 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 11 00:01:50.909188 kubelet[2620]: E0911 00:01:50.909146 2620 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 11 00:01:50.909308 kubelet[2620]: E0911 00:01:50.909295 2620 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 11 00:01:50.909308 kubelet[2620]: W0911 00:01:50.909306 2620 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 11 00:01:50.909413 kubelet[2620]: E0911 00:01:50.909316 2620 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 11 00:01:50.909517 kubelet[2620]: E0911 00:01:50.909458 2620 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 11 00:01:50.909517 kubelet[2620]: W0911 00:01:50.909466 2620 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 11 00:01:50.909517 kubelet[2620]: E0911 00:01:50.909473 2620 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 11 00:01:50.909602 kubelet[2620]: E0911 00:01:50.909596 2620 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 11 00:01:50.909624 kubelet[2620]: W0911 00:01:50.909603 2620 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 11 00:01:50.909624 kubelet[2620]: E0911 00:01:50.909609 2620 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 11 00:01:50.909758 kubelet[2620]: E0911 00:01:50.909746 2620 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 11 00:01:50.909758 kubelet[2620]: W0911 00:01:50.909756 2620 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 11 00:01:50.909812 kubelet[2620]: E0911 00:01:50.909764 2620 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 11 00:01:50.909891 kubelet[2620]: E0911 00:01:50.909880 2620 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 11 00:01:50.909891 kubelet[2620]: W0911 00:01:50.909889 2620 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 11 00:01:50.909962 kubelet[2620]: E0911 00:01:50.909896 2620 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 11 00:01:50.910049 kubelet[2620]: E0911 00:01:50.910036 2620 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 11 00:01:50.910049 kubelet[2620]: W0911 00:01:50.910047 2620 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 11 00:01:50.910125 kubelet[2620]: E0911 00:01:50.910055 2620 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 11 00:01:50.925549 kubelet[2620]: E0911 00:01:50.925524 2620 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 11 00:01:50.925549 kubelet[2620]: W0911 00:01:50.925543 2620 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 11 00:01:50.925549 kubelet[2620]: E0911 00:01:50.925557 2620 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 11 00:01:50.925773 kubelet[2620]: E0911 00:01:50.925757 2620 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 11 00:01:50.925773 kubelet[2620]: W0911 00:01:50.925773 2620 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 11 00:01:50.925837 kubelet[2620]: E0911 00:01:50.925781 2620 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 11 00:01:50.926058 kubelet[2620]: E0911 00:01:50.926036 2620 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 11 00:01:50.926112 kubelet[2620]: W0911 00:01:50.926058 2620 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 11 00:01:50.926112 kubelet[2620]: E0911 00:01:50.926088 2620 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 11 00:01:50.926302 kubelet[2620]: E0911 00:01:50.926291 2620 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 11 00:01:50.926302 kubelet[2620]: W0911 00:01:50.926302 2620 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 11 00:01:50.926370 kubelet[2620]: E0911 00:01:50.926311 2620 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 11 00:01:50.926484 kubelet[2620]: E0911 00:01:50.926466 2620 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 11 00:01:50.926484 kubelet[2620]: W0911 00:01:50.926477 2620 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 11 00:01:50.926542 kubelet[2620]: E0911 00:01:50.926487 2620 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 11 00:01:50.926690 kubelet[2620]: E0911 00:01:50.926679 2620 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 11 00:01:50.926690 kubelet[2620]: W0911 00:01:50.926689 2620 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 11 00:01:50.926745 kubelet[2620]: E0911 00:01:50.926697 2620 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 11 00:01:50.927282 kubelet[2620]: E0911 00:01:50.927255 2620 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 11 00:01:50.927282 kubelet[2620]: W0911 00:01:50.927282 2620 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 11 00:01:50.927398 kubelet[2620]: E0911 00:01:50.927304 2620 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 11 00:01:50.927619 kubelet[2620]: E0911 00:01:50.927605 2620 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 11 00:01:50.927619 kubelet[2620]: W0911 00:01:50.927618 2620 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 11 00:01:50.927667 kubelet[2620]: E0911 00:01:50.927627 2620 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 11 00:01:50.927777 kubelet[2620]: E0911 00:01:50.927764 2620 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 11 00:01:50.927802 kubelet[2620]: W0911 00:01:50.927777 2620 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 11 00:01:50.927802 kubelet[2620]: E0911 00:01:50.927785 2620 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 11 00:01:50.927913 kubelet[2620]: E0911 00:01:50.927894 2620 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 11 00:01:50.927913 kubelet[2620]: W0911 00:01:50.927905 2620 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 11 00:01:50.927971 kubelet[2620]: E0911 00:01:50.927921 2620 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 11 00:01:50.928096 kubelet[2620]: E0911 00:01:50.928082 2620 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 11 00:01:50.928096 kubelet[2620]: W0911 00:01:50.928094 2620 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 11 00:01:50.928159 kubelet[2620]: E0911 00:01:50.928103 2620 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 11 00:01:50.929193 kubelet[2620]: E0911 00:01:50.929173 2620 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 11 00:01:50.929229 kubelet[2620]: W0911 00:01:50.929193 2620 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 11 00:01:50.929229 kubelet[2620]: E0911 00:01:50.929207 2620 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 11 00:01:50.930851 kubelet[2620]: E0911 00:01:50.930833 2620 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 11 00:01:50.930851 kubelet[2620]: W0911 00:01:50.930849 2620 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 11 00:01:50.931735 kubelet[2620]: E0911 00:01:50.930862 2620 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 11 00:01:50.931735 kubelet[2620]: E0911 00:01:50.931096 2620 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 11 00:01:50.931735 kubelet[2620]: W0911 00:01:50.931106 2620 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 11 00:01:50.931735 kubelet[2620]: E0911 00:01:50.931116 2620 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 11 00:01:50.931735 kubelet[2620]: E0911 00:01:50.931338 2620 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 11 00:01:50.931735 kubelet[2620]: W0911 00:01:50.931356 2620 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 11 00:01:50.931735 kubelet[2620]: E0911 00:01:50.931370 2620 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 11 00:01:50.931971 kubelet[2620]: E0911 00:01:50.931780 2620 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 11 00:01:50.931971 kubelet[2620]: W0911 00:01:50.931791 2620 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 11 00:01:50.931971 kubelet[2620]: E0911 00:01:50.931801 2620 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 11 00:01:50.932038 kubelet[2620]: E0911 00:01:50.932007 2620 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 11 00:01:50.932038 kubelet[2620]: W0911 00:01:50.932016 2620 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 11 00:01:50.932038 kubelet[2620]: E0911 00:01:50.932025 2620 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 11 00:01:50.932493 kubelet[2620]: E0911 00:01:50.932478 2620 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 11 00:01:50.932493 kubelet[2620]: W0911 00:01:50.932492 2620 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 11 00:01:50.932553 kubelet[2620]: E0911 00:01:50.932504 2620 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 11 00:01:51.024277 containerd[1489]: time="2025-09-11T00:01:51.023651829Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 11 00:01:51.024277 containerd[1489]: time="2025-09-11T00:01:51.024057607Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.3: active requests=0, bytes read=4266814" Sep 11 00:01:51.024889 containerd[1489]: time="2025-09-11T00:01:51.024862404Z" level=info msg="ImageCreate event name:\"sha256:29e6f31ad72882b1b817dd257df6b7981e4d7d31d872b7fe2cf102c6e2af27a5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 11 00:01:51.026575 containerd[1489]: time="2025-09-11T00:01:51.026544080Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:81bdfcd9dbd36624dc35354e8c181c75631ba40e6c7df5820f5f56cea36f0ef9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 11 00:01:51.027181 containerd[1489]: time="2025-09-11T00:01:51.027158588Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.3\" with image id \"sha256:29e6f31ad72882b1b817dd257df6b7981e4d7d31d872b7fe2cf102c6e2af27a5\", repo tag 
\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:81bdfcd9dbd36624dc35354e8c181c75631ba40e6c7df5820f5f56cea36f0ef9\", size \"5636015\" in 1.379525189s" Sep 11 00:01:51.027217 containerd[1489]: time="2025-09-11T00:01:51.027186629Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.3\" returns image reference \"sha256:29e6f31ad72882b1b817dd257df6b7981e4d7d31d872b7fe2cf102c6e2af27a5\"" Sep 11 00:01:51.031813 containerd[1489]: time="2025-09-11T00:01:51.031781678Z" level=info msg="CreateContainer within sandbox \"689a928f0fb0a6999d1e871ec18802351fd4c000021d92dfd9bd2a3ba1e197bd\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}" Sep 11 00:01:51.038382 containerd[1489]: time="2025-09-11T00:01:51.038071763Z" level=info msg="Container 7e699f3e039b9d5ed666e2e8cb7e16ac3ba0cfdccae1d76494fb33aa59da6e9e: CDI devices from CRI Config.CDIDevices: []" Sep 11 00:01:51.045514 containerd[1489]: time="2025-09-11T00:01:51.045422536Z" level=info msg="CreateContainer within sandbox \"689a928f0fb0a6999d1e871ec18802351fd4c000021d92dfd9bd2a3ba1e197bd\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"7e699f3e039b9d5ed666e2e8cb7e16ac3ba0cfdccae1d76494fb33aa59da6e9e\"" Sep 11 00:01:51.046206 containerd[1489]: time="2025-09-11T00:01:51.046179651Z" level=info msg="StartContainer for \"7e699f3e039b9d5ed666e2e8cb7e16ac3ba0cfdccae1d76494fb33aa59da6e9e\"" Sep 11 00:01:51.047508 containerd[1489]: time="2025-09-11T00:01:51.047484310Z" level=info msg="connecting to shim 7e699f3e039b9d5ed666e2e8cb7e16ac3ba0cfdccae1d76494fb33aa59da6e9e" address="unix:///run/containerd/s/f62426b488ebbddf386fd4da75624a08a4f38fa90750a70cdb70f8bac024f135" protocol=ttrpc version=3 Sep 11 00:01:51.073498 systemd[1]: Started cri-containerd-7e699f3e039b9d5ed666e2e8cb7e16ac3ba0cfdccae1d76494fb33aa59da6e9e.scope - libcontainer container 
7e699f3e039b9d5ed666e2e8cb7e16ac3ba0cfdccae1d76494fb33aa59da6e9e. Sep 11 00:01:51.106609 containerd[1489]: time="2025-09-11T00:01:51.106366981Z" level=info msg="StartContainer for \"7e699f3e039b9d5ed666e2e8cb7e16ac3ba0cfdccae1d76494fb33aa59da6e9e\" returns successfully" Sep 11 00:01:51.120925 systemd[1]: cri-containerd-7e699f3e039b9d5ed666e2e8cb7e16ac3ba0cfdccae1d76494fb33aa59da6e9e.scope: Deactivated successfully. Sep 11 00:01:51.163119 containerd[1489]: time="2025-09-11T00:01:51.163064953Z" level=info msg="received exit event container_id:\"7e699f3e039b9d5ed666e2e8cb7e16ac3ba0cfdccae1d76494fb33aa59da6e9e\" id:\"7e699f3e039b9d5ed666e2e8cb7e16ac3ba0cfdccae1d76494fb33aa59da6e9e\" pid:3360 exited_at:{seconds:1757548911 nanos:135960404}" Sep 11 00:01:51.163250 containerd[1489]: time="2025-09-11T00:01:51.163162398Z" level=info msg="TaskExit event in podsandbox handler container_id:\"7e699f3e039b9d5ed666e2e8cb7e16ac3ba0cfdccae1d76494fb33aa59da6e9e\" id:\"7e699f3e039b9d5ed666e2e8cb7e16ac3ba0cfdccae1d76494fb33aa59da6e9e\" pid:3360 exited_at:{seconds:1757548911 nanos:135960404}" Sep 11 00:01:51.215771 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-7e699f3e039b9d5ed666e2e8cb7e16ac3ba0cfdccae1d76494fb33aa59da6e9e-rootfs.mount: Deactivated successfully. 
Sep 11 00:01:51.764088 kubelet[2620]: E0911 00:01:51.764030 2620 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-c9nz7" podUID="3328d16f-81a3-4c76-9945-1acaf3146893" Sep 11 00:01:51.840168 containerd[1489]: time="2025-09-11T00:01:51.840125949Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.3\"" Sep 11 00:01:53.763628 kubelet[2620]: E0911 00:01:53.763561 2620 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-c9nz7" podUID="3328d16f-81a3-4c76-9945-1acaf3146893" Sep 11 00:01:54.766260 containerd[1489]: time="2025-09-11T00:01:54.766207729Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 11 00:01:54.767303 containerd[1489]: time="2025-09-11T00:01:54.767273011Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.30.3: active requests=0, bytes read=65913477" Sep 11 00:01:54.768167 containerd[1489]: time="2025-09-11T00:01:54.768142486Z" level=info msg="ImageCreate event name:\"sha256:7077a1dc632ee598cbfa626f9e3c9bca5b20c0d1e1e557995890125b2e8d2e23\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 11 00:01:54.769994 containerd[1489]: time="2025-09-11T00:01:54.769944758Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:73d1e391050490d54e5bee8ff2b1a50a8be1746c98dc530361b00e8c0ab63f87\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 11 00:01:54.771000 containerd[1489]: time="2025-09-11T00:01:54.770667507Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.30.3\" 
with image id \"sha256:7077a1dc632ee598cbfa626f9e3c9bca5b20c0d1e1e557995890125b2e8d2e23\", repo tag \"ghcr.io/flatcar/calico/cni:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:73d1e391050490d54e5bee8ff2b1a50a8be1746c98dc530361b00e8c0ab63f87\", size \"67282718\" in 2.930500277s" Sep 11 00:01:54.771000 containerd[1489]: time="2025-09-11T00:01:54.770705669Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.3\" returns image reference \"sha256:7077a1dc632ee598cbfa626f9e3c9bca5b20c0d1e1e557995890125b2e8d2e23\"" Sep 11 00:01:54.774812 containerd[1489]: time="2025-09-11T00:01:54.774751190Z" level=info msg="CreateContainer within sandbox \"689a928f0fb0a6999d1e871ec18802351fd4c000021d92dfd9bd2a3ba1e197bd\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" Sep 11 00:01:54.781923 containerd[1489]: time="2025-09-11T00:01:54.781872555Z" level=info msg="Container b96beb3504c2c1789b1ff5f2b19229397318a360d20926af3b4f988ba80f3e98: CDI devices from CRI Config.CDIDevices: []" Sep 11 00:01:54.791273 containerd[1489]: time="2025-09-11T00:01:54.791145926Z" level=info msg="CreateContainer within sandbox \"689a928f0fb0a6999d1e871ec18802351fd4c000021d92dfd9bd2a3ba1e197bd\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"b96beb3504c2c1789b1ff5f2b19229397318a360d20926af3b4f988ba80f3e98\"" Sep 11 00:01:54.791929 containerd[1489]: time="2025-09-11T00:01:54.791896756Z" level=info msg="StartContainer for \"b96beb3504c2c1789b1ff5f2b19229397318a360d20926af3b4f988ba80f3e98\"" Sep 11 00:01:54.793610 containerd[1489]: time="2025-09-11T00:01:54.793565743Z" level=info msg="connecting to shim b96beb3504c2c1789b1ff5f2b19229397318a360d20926af3b4f988ba80f3e98" address="unix:///run/containerd/s/f62426b488ebbddf386fd4da75624a08a4f38fa90750a70cdb70f8bac024f135" protocol=ttrpc version=3 Sep 11 00:01:54.818546 systemd[1]: Started cri-containerd-b96beb3504c2c1789b1ff5f2b19229397318a360d20926af3b4f988ba80f3e98.scope - libcontainer container 
b96beb3504c2c1789b1ff5f2b19229397318a360d20926af3b4f988ba80f3e98. Sep 11 00:01:54.855566 containerd[1489]: time="2025-09-11T00:01:54.855522220Z" level=info msg="StartContainer for \"b96beb3504c2c1789b1ff5f2b19229397318a360d20926af3b4f988ba80f3e98\" returns successfully" Sep 11 00:01:55.422933 containerd[1489]: time="2025-09-11T00:01:55.422855843Z" level=error msg="failed to reload cni configuration after receiving fs change event(WRITE \"/etc/cni/net.d/calico-kubeconfig\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Sep 11 00:01:55.425945 systemd[1]: cri-containerd-b96beb3504c2c1789b1ff5f2b19229397318a360d20926af3b4f988ba80f3e98.scope: Deactivated successfully. Sep 11 00:01:55.426244 systemd[1]: cri-containerd-b96beb3504c2c1789b1ff5f2b19229397318a360d20926af3b4f988ba80f3e98.scope: Consumed 494ms CPU time, 176.4M memory peak, 2.3M read from disk, 165.8M written to disk. Sep 11 00:01:55.431550 containerd[1489]: time="2025-09-11T00:01:55.431484695Z" level=info msg="received exit event container_id:\"b96beb3504c2c1789b1ff5f2b19229397318a360d20926af3b4f988ba80f3e98\" id:\"b96beb3504c2c1789b1ff5f2b19229397318a360d20926af3b4f988ba80f3e98\" pid:3420 exited_at:{seconds:1757548915 nanos:431219324}" Sep 11 00:01:55.431718 containerd[1489]: time="2025-09-11T00:01:55.431626620Z" level=info msg="TaskExit event in podsandbox handler container_id:\"b96beb3504c2c1789b1ff5f2b19229397318a360d20926af3b4f988ba80f3e98\" id:\"b96beb3504c2c1789b1ff5f2b19229397318a360d20926af3b4f988ba80f3e98\" pid:3420 exited_at:{seconds:1757548915 nanos:431219324}" Sep 11 00:01:55.450921 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-b96beb3504c2c1789b1ff5f2b19229397318a360d20926af3b4f988ba80f3e98-rootfs.mount: Deactivated successfully. 
Sep 11 00:01:55.526666 kubelet[2620]: I0911 00:01:55.526624 2620 kubelet_node_status.go:501] "Fast updating node status as it just became ready" Sep 11 00:01:55.603202 systemd[1]: Created slice kubepods-besteffort-podc95fb5c4_34dc_4005_8a7c_cc82b8b444ba.slice - libcontainer container kubepods-besteffort-podc95fb5c4_34dc_4005_8a7c_cc82b8b444ba.slice. Sep 11 00:01:55.610809 systemd[1]: Created slice kubepods-burstable-pod7a6a2490_ae7e_4043_896c_7b7c2a641add.slice - libcontainer container kubepods-burstable-pod7a6a2490_ae7e_4043_896c_7b7c2a641add.slice. Sep 11 00:01:55.616056 systemd[1]: Created slice kubepods-burstable-pod40a8c81e_f60b_4872_94e9_5d392a0fc45a.slice - libcontainer container kubepods-burstable-pod40a8c81e_f60b_4872_94e9_5d392a0fc45a.slice. Sep 11 00:01:55.622626 systemd[1]: Created slice kubepods-besteffort-podbbff6af1_093f_4daa_9016_0543d9ff1727.slice - libcontainer container kubepods-besteffort-podbbff6af1_093f_4daa_9016_0543d9ff1727.slice. Sep 11 00:01:55.635581 systemd[1]: Created slice kubepods-besteffort-podc82340d4_e9a7_4928_9cd0_8803dc6c20f0.slice - libcontainer container kubepods-besteffort-podc82340d4_e9a7_4928_9cd0_8803dc6c20f0.slice. Sep 11 00:01:55.642919 systemd[1]: Created slice kubepods-besteffort-podf383ef0f_3532_4ba3_ae25_9cf70a1b5786.slice - libcontainer container kubepods-besteffort-podf383ef0f_3532_4ba3_ae25_9cf70a1b5786.slice. Sep 11 00:01:55.649692 systemd[1]: Created slice kubepods-besteffort-pod0bd1b13b_4632_4dc3_aa62_3c5ffa8e3394.slice - libcontainer container kubepods-besteffort-pod0bd1b13b_4632_4dc3_aa62_3c5ffa8e3394.slice. 
Sep 11 00:01:55.661121 kubelet[2620]: I0911 00:01:55.661063 2620 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/0bd1b13b-4632-4dc3-aa62-3c5ffa8e3394-tigera-ca-bundle\") pod \"calico-kube-controllers-76bf8dddf-rcn6l\" (UID: \"0bd1b13b-4632-4dc3-aa62-3c5ffa8e3394\") " pod="calico-system/calico-kube-controllers-76bf8dddf-rcn6l" Sep 11 00:01:55.661121 kubelet[2620]: I0911 00:01:55.661120 2620 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vg5ml\" (UniqueName: \"kubernetes.io/projected/0bd1b13b-4632-4dc3-aa62-3c5ffa8e3394-kube-api-access-vg5ml\") pod \"calico-kube-controllers-76bf8dddf-rcn6l\" (UID: \"0bd1b13b-4632-4dc3-aa62-3c5ffa8e3394\") " pod="calico-system/calico-kube-controllers-76bf8dddf-rcn6l" Sep 11 00:01:55.661895 kubelet[2620]: I0911 00:01:55.661145 2620 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/7a6a2490-ae7e-4043-896c-7b7c2a641add-config-volume\") pod \"coredns-674b8bbfcf-5h7mv\" (UID: \"7a6a2490-ae7e-4043-896c-7b7c2a641add\") " pod="kube-system/coredns-674b8bbfcf-5h7mv" Sep 11 00:01:55.661895 kubelet[2620]: I0911 00:01:55.661163 2620 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/f383ef0f-3532-4ba3-ae25-9cf70a1b5786-goldmane-ca-bundle\") pod \"goldmane-54d579b49d-skx4x\" (UID: \"f383ef0f-3532-4ba3-ae25-9cf70a1b5786\") " pod="calico-system/goldmane-54d579b49d-skx4x" Sep 11 00:01:55.661895 kubelet[2620]: I0911 00:01:55.661179 2620 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k8q96\" (UniqueName: \"kubernetes.io/projected/f383ef0f-3532-4ba3-ae25-9cf70a1b5786-kube-api-access-k8q96\") pod 
\"goldmane-54d579b49d-skx4x\" (UID: \"f383ef0f-3532-4ba3-ae25-9cf70a1b5786\") " pod="calico-system/goldmane-54d579b49d-skx4x" Sep 11 00:01:55.661895 kubelet[2620]: I0911 00:01:55.661196 2620 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-key-pair\" (UniqueName: \"kubernetes.io/secret/f383ef0f-3532-4ba3-ae25-9cf70a1b5786-goldmane-key-pair\") pod \"goldmane-54d579b49d-skx4x\" (UID: \"f383ef0f-3532-4ba3-ae25-9cf70a1b5786\") " pod="calico-system/goldmane-54d579b49d-skx4x" Sep 11 00:01:55.661895 kubelet[2620]: I0911 00:01:55.661217 2620 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sb876\" (UniqueName: \"kubernetes.io/projected/7a6a2490-ae7e-4043-896c-7b7c2a641add-kube-api-access-sb876\") pod \"coredns-674b8bbfcf-5h7mv\" (UID: \"7a6a2490-ae7e-4043-896c-7b7c2a641add\") " pod="kube-system/coredns-674b8bbfcf-5h7mv" Sep 11 00:01:55.662015 kubelet[2620]: I0911 00:01:55.661232 2620 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/c82340d4-e9a7-4928-9cd0-8803dc6c20f0-whisker-backend-key-pair\") pod \"whisker-6cdb6bbdc9-vmxss\" (UID: \"c82340d4-e9a7-4928-9cd0-8803dc6c20f0\") " pod="calico-system/whisker-6cdb6bbdc9-vmxss" Sep 11 00:01:55.662015 kubelet[2620]: I0911 00:01:55.661248 2620 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c82340d4-e9a7-4928-9cd0-8803dc6c20f0-whisker-ca-bundle\") pod \"whisker-6cdb6bbdc9-vmxss\" (UID: \"c82340d4-e9a7-4928-9cd0-8803dc6c20f0\") " pod="calico-system/whisker-6cdb6bbdc9-vmxss" Sep 11 00:01:55.662015 kubelet[2620]: I0911 00:01:55.661264 2620 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dlhjk\" (UniqueName: 
\"kubernetes.io/projected/c95fb5c4-34dc-4005-8a7c-cc82b8b444ba-kube-api-access-dlhjk\") pod \"calico-apiserver-7c8f858dd5-kfnlz\" (UID: \"c95fb5c4-34dc-4005-8a7c-cc82b8b444ba\") " pod="calico-apiserver/calico-apiserver-7c8f858dd5-kfnlz" Sep 11 00:01:55.662015 kubelet[2620]: I0911 00:01:55.661281 2620 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j9wvr\" (UniqueName: \"kubernetes.io/projected/c82340d4-e9a7-4928-9cd0-8803dc6c20f0-kube-api-access-j9wvr\") pod \"whisker-6cdb6bbdc9-vmxss\" (UID: \"c82340d4-e9a7-4928-9cd0-8803dc6c20f0\") " pod="calico-system/whisker-6cdb6bbdc9-vmxss" Sep 11 00:01:55.662015 kubelet[2620]: I0911 00:01:55.661297 2620 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/40a8c81e-f60b-4872-94e9-5d392a0fc45a-config-volume\") pod \"coredns-674b8bbfcf-nr8lq\" (UID: \"40a8c81e-f60b-4872-94e9-5d392a0fc45a\") " pod="kube-system/coredns-674b8bbfcf-nr8lq" Sep 11 00:01:55.662133 kubelet[2620]: I0911 00:01:55.661314 2620 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f383ef0f-3532-4ba3-ae25-9cf70a1b5786-config\") pod \"goldmane-54d579b49d-skx4x\" (UID: \"f383ef0f-3532-4ba3-ae25-9cf70a1b5786\") " pod="calico-system/goldmane-54d579b49d-skx4x" Sep 11 00:01:55.662133 kubelet[2620]: I0911 00:01:55.661333 2620 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/c95fb5c4-34dc-4005-8a7c-cc82b8b444ba-calico-apiserver-certs\") pod \"calico-apiserver-7c8f858dd5-kfnlz\" (UID: \"c95fb5c4-34dc-4005-8a7c-cc82b8b444ba\") " pod="calico-apiserver/calico-apiserver-7c8f858dd5-kfnlz" Sep 11 00:01:55.662133 kubelet[2620]: I0911 00:01:55.661362 2620 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/bbff6af1-093f-4daa-9016-0543d9ff1727-calico-apiserver-certs\") pod \"calico-apiserver-7c8f858dd5-sqh2g\" (UID: \"bbff6af1-093f-4daa-9016-0543d9ff1727\") " pod="calico-apiserver/calico-apiserver-7c8f858dd5-sqh2g" Sep 11 00:01:55.662133 kubelet[2620]: I0911 00:01:55.661538 2620 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xc6vp\" (UniqueName: \"kubernetes.io/projected/40a8c81e-f60b-4872-94e9-5d392a0fc45a-kube-api-access-xc6vp\") pod \"coredns-674b8bbfcf-nr8lq\" (UID: \"40a8c81e-f60b-4872-94e9-5d392a0fc45a\") " pod="kube-system/coredns-674b8bbfcf-nr8lq" Sep 11 00:01:55.662133 kubelet[2620]: I0911 00:01:55.661577 2620 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7rm5m\" (UniqueName: \"kubernetes.io/projected/bbff6af1-093f-4daa-9016-0543d9ff1727-kube-api-access-7rm5m\") pod \"calico-apiserver-7c8f858dd5-sqh2g\" (UID: \"bbff6af1-093f-4daa-9016-0543d9ff1727\") " pod="calico-apiserver/calico-apiserver-7c8f858dd5-sqh2g" Sep 11 00:01:55.808856 systemd[1]: Created slice kubepods-besteffort-pod3328d16f_81a3_4c76_9945_1acaf3146893.slice - libcontainer container kubepods-besteffort-pod3328d16f_81a3_4c76_9945_1acaf3146893.slice. 
Sep 11 00:01:55.811945 containerd[1489]: time="2025-09-11T00:01:55.811897308Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-c9nz7,Uid:3328d16f-81a3-4c76-9945-1acaf3146893,Namespace:calico-system,Attempt:0,}" Sep 11 00:01:55.856251 containerd[1489]: time="2025-09-11T00:01:55.856175009Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.3\"" Sep 11 00:01:55.902758 containerd[1489]: time="2025-09-11T00:01:55.902705596Z" level=error msg="Failed to destroy network for sandbox \"2ec77571a14c7559c0179e411a42c9c8821fa7af964aae66f250893661c014d2\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 11 00:01:55.904481 containerd[1489]: time="2025-09-11T00:01:55.904432062Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-c9nz7,Uid:3328d16f-81a3-4c76-9945-1acaf3146893,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"2ec77571a14c7559c0179e411a42c9c8821fa7af964aae66f250893661c014d2\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 11 00:01:55.904915 systemd[1]: run-netns-cni\x2d91d50bd5\x2dd327\x2db3dd\x2dbba6\x2d542e3e1c1d48.mount: Deactivated successfully. 
Sep 11 00:01:55.908232 containerd[1489]: time="2025-09-11T00:01:55.908197327Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7c8f858dd5-kfnlz,Uid:c95fb5c4-34dc-4005-8a7c-cc82b8b444ba,Namespace:calico-apiserver,Attempt:0,}" Sep 11 00:01:55.910791 kubelet[2620]: E0911 00:01:55.910719 2620 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"2ec77571a14c7559c0179e411a42c9c8821fa7af964aae66f250893661c014d2\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 11 00:01:55.910899 kubelet[2620]: E0911 00:01:55.910825 2620 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"2ec77571a14c7559c0179e411a42c9c8821fa7af964aae66f250893661c014d2\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-c9nz7" Sep 11 00:01:55.910899 kubelet[2620]: E0911 00:01:55.910847 2620 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"2ec77571a14c7559c0179e411a42c9c8821fa7af964aae66f250893661c014d2\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-c9nz7" Sep 11 00:01:55.910963 kubelet[2620]: E0911 00:01:55.910907 2620 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-c9nz7_calico-system(3328d16f-81a3-4c76-9945-1acaf3146893)\" with CreatePodSandboxError: \"Failed to create sandbox for pod 
\\\"csi-node-driver-c9nz7_calico-system(3328d16f-81a3-4c76-9945-1acaf3146893)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"2ec77571a14c7559c0179e411a42c9c8821fa7af964aae66f250893661c014d2\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-c9nz7" podUID="3328d16f-81a3-4c76-9945-1acaf3146893" Sep 11 00:01:55.915206 kubelet[2620]: E0911 00:01:55.914616 2620 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 11 00:01:55.915554 containerd[1489]: time="2025-09-11T00:01:55.915502528Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-5h7mv,Uid:7a6a2490-ae7e-4043-896c-7b7c2a641add,Namespace:kube-system,Attempt:0,}" Sep 11 00:01:55.919296 kubelet[2620]: E0911 00:01:55.919203 2620 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 11 00:01:55.920081 containerd[1489]: time="2025-09-11T00:01:55.920029461Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-nr8lq,Uid:40a8c81e-f60b-4872-94e9-5d392a0fc45a,Namespace:kube-system,Attempt:0,}" Sep 11 00:01:55.933979 containerd[1489]: time="2025-09-11T00:01:55.933938276Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7c8f858dd5-sqh2g,Uid:bbff6af1-093f-4daa-9016-0543d9ff1727,Namespace:calico-apiserver,Attempt:0,}" Sep 11 00:01:55.940129 containerd[1489]: time="2025-09-11T00:01:55.940071151Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-6cdb6bbdc9-vmxss,Uid:c82340d4-e9a7-4928-9cd0-8803dc6c20f0,Namespace:calico-system,Attempt:0,}" Sep 11 00:01:55.949641 containerd[1489]: 
time="2025-09-11T00:01:55.949600917Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-54d579b49d-skx4x,Uid:f383ef0f-3532-4ba3-ae25-9cf70a1b5786,Namespace:calico-system,Attempt:0,}" Sep 11 00:01:55.954023 containerd[1489]: time="2025-09-11T00:01:55.953975245Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-76bf8dddf-rcn6l,Uid:0bd1b13b-4632-4dc3-aa62-3c5ffa8e3394,Namespace:calico-system,Attempt:0,}" Sep 11 00:01:55.984799 containerd[1489]: time="2025-09-11T00:01:55.984750908Z" level=error msg="Failed to destroy network for sandbox \"64425c48bdfc5cbec6f2888af439a63e50e3dd14a42bab18b0e0df8f9ecaf0d1\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 11 00:01:55.986512 containerd[1489]: time="2025-09-11T00:01:55.986315488Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7c8f858dd5-kfnlz,Uid:c95fb5c4-34dc-4005-8a7c-cc82b8b444ba,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"64425c48bdfc5cbec6f2888af439a63e50e3dd14a42bab18b0e0df8f9ecaf0d1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 11 00:01:55.986903 kubelet[2620]: E0911 00:01:55.986854 2620 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"64425c48bdfc5cbec6f2888af439a63e50e3dd14a42bab18b0e0df8f9ecaf0d1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 11 00:01:55.986968 kubelet[2620]: E0911 00:01:55.986921 2620 kuberuntime_sandbox.go:70] "Failed to create sandbox 
for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"64425c48bdfc5cbec6f2888af439a63e50e3dd14a42bab18b0e0df8f9ecaf0d1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-7c8f858dd5-kfnlz" Sep 11 00:01:55.986968 kubelet[2620]: E0911 00:01:55.986942 2620 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"64425c48bdfc5cbec6f2888af439a63e50e3dd14a42bab18b0e0df8f9ecaf0d1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-7c8f858dd5-kfnlz" Sep 11 00:01:55.987031 kubelet[2620]: E0911 00:01:55.986994 2620 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-7c8f858dd5-kfnlz_calico-apiserver(c95fb5c4-34dc-4005-8a7c-cc82b8b444ba)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-7c8f858dd5-kfnlz_calico-apiserver(c95fb5c4-34dc-4005-8a7c-cc82b8b444ba)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"64425c48bdfc5cbec6f2888af439a63e50e3dd14a42bab18b0e0df8f9ecaf0d1\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-7c8f858dd5-kfnlz" podUID="c95fb5c4-34dc-4005-8a7c-cc82b8b444ba" Sep 11 00:01:56.037999 containerd[1489]: time="2025-09-11T00:01:56.037878175Z" level=error msg="Failed to destroy network for sandbox \"d901a8813e3e8a55c615c9afed1f17d86060c9ad39134ba1e9c837f7c27d2ca1\"" error="plugin type=\"calico\" failed (delete): stat 
/var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 11 00:01:56.039069 containerd[1489]: time="2025-09-11T00:01:56.039027537Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-nr8lq,Uid:40a8c81e-f60b-4872-94e9-5d392a0fc45a,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"d901a8813e3e8a55c615c9afed1f17d86060c9ad39134ba1e9c837f7c27d2ca1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 11 00:01:56.039604 kubelet[2620]: E0911 00:01:56.039561 2620 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d901a8813e3e8a55c615c9afed1f17d86060c9ad39134ba1e9c837f7c27d2ca1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 11 00:01:56.040055 containerd[1489]: time="2025-09-11T00:01:56.040020734Z" level=error msg="Failed to destroy network for sandbox \"8932429f4a6dc1f9f44a3f67c7d487ce266d39ef13ba26410c62e4a3e6f4417b\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 11 00:01:56.040299 containerd[1489]: time="2025-09-11T00:01:56.040263183Z" level=error msg="Failed to destroy network for sandbox \"843485b1e09da0f17ed7f6d48cc520d42cdd0d4f61a026685a5c3d5638bc03fd\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 11 00:01:56.040703 kubelet[2620]: E0911 00:01:56.039630 2620 
kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d901a8813e3e8a55c615c9afed1f17d86060c9ad39134ba1e9c837f7c27d2ca1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-nr8lq" Sep 11 00:01:56.040752 kubelet[2620]: E0911 00:01:56.040711 2620 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d901a8813e3e8a55c615c9afed1f17d86060c9ad39134ba1e9c837f7c27d2ca1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-nr8lq" Sep 11 00:01:56.040925 kubelet[2620]: E0911 00:01:56.040896 2620 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-674b8bbfcf-nr8lq_kube-system(40a8c81e-f60b-4872-94e9-5d392a0fc45a)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-674b8bbfcf-nr8lq_kube-system(40a8c81e-f60b-4872-94e9-5d392a0fc45a)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"d901a8813e3e8a55c615c9afed1f17d86060c9ad39134ba1e9c837f7c27d2ca1\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-674b8bbfcf-nr8lq" podUID="40a8c81e-f60b-4872-94e9-5d392a0fc45a" Sep 11 00:01:56.041280 containerd[1489]: time="2025-09-11T00:01:56.041242299Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7c8f858dd5-sqh2g,Uid:bbff6af1-093f-4daa-9016-0543d9ff1727,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc 
error: code = Unknown desc = failed to setup network for sandbox \"8932429f4a6dc1f9f44a3f67c7d487ce266d39ef13ba26410c62e4a3e6f4417b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 11 00:01:56.041895 containerd[1489]: time="2025-09-11T00:01:56.041865402Z" level=error msg="Failed to destroy network for sandbox \"3e89213da40f2816306fd4dc00944edfab92dca4aa9d4ccbe5b934c44560a5a6\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 11 00:01:56.042533 containerd[1489]: time="2025-09-11T00:01:56.042497305Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-6cdb6bbdc9-vmxss,Uid:c82340d4-e9a7-4928-9cd0-8803dc6c20f0,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"843485b1e09da0f17ed7f6d48cc520d42cdd0d4f61a026685a5c3d5638bc03fd\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 11 00:01:56.042754 kubelet[2620]: E0911 00:01:56.042718 2620 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8932429f4a6dc1f9f44a3f67c7d487ce266d39ef13ba26410c62e4a3e6f4417b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 11 00:01:56.042800 kubelet[2620]: E0911 00:01:56.042777 2620 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8932429f4a6dc1f9f44a3f67c7d487ce266d39ef13ba26410c62e4a3e6f4417b\": 
plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-7c8f858dd5-sqh2g" Sep 11 00:01:56.042824 kubelet[2620]: E0911 00:01:56.042798 2620 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8932429f4a6dc1f9f44a3f67c7d487ce266d39ef13ba26410c62e4a3e6f4417b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-7c8f858dd5-sqh2g" Sep 11 00:01:56.042983 kubelet[2620]: E0911 00:01:56.042864 2620 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-7c8f858dd5-sqh2g_calico-apiserver(bbff6af1-093f-4daa-9016-0543d9ff1727)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-7c8f858dd5-sqh2g_calico-apiserver(bbff6af1-093f-4daa-9016-0543d9ff1727)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"8932429f4a6dc1f9f44a3f67c7d487ce266d39ef13ba26410c62e4a3e6f4417b\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-7c8f858dd5-sqh2g" podUID="bbff6af1-093f-4daa-9016-0543d9ff1727" Sep 11 00:01:56.043038 kubelet[2620]: E0911 00:01:56.042946 2620 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"843485b1e09da0f17ed7f6d48cc520d42cdd0d4f61a026685a5c3d5638bc03fd\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has 
mounted /var/lib/calico/" Sep 11 00:01:56.043038 kubelet[2620]: E0911 00:01:56.043012 2620 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"843485b1e09da0f17ed7f6d48cc520d42cdd0d4f61a026685a5c3d5638bc03fd\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-6cdb6bbdc9-vmxss" Sep 11 00:01:56.043038 kubelet[2620]: E0911 00:01:56.043026 2620 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"843485b1e09da0f17ed7f6d48cc520d42cdd0d4f61a026685a5c3d5638bc03fd\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-6cdb6bbdc9-vmxss" Sep 11 00:01:56.043121 kubelet[2620]: E0911 00:01:56.043051 2620 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"whisker-6cdb6bbdc9-vmxss_calico-system(c82340d4-e9a7-4928-9cd0-8803dc6c20f0)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"whisker-6cdb6bbdc9-vmxss_calico-system(c82340d4-e9a7-4928-9cd0-8803dc6c20f0)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"843485b1e09da0f17ed7f6d48cc520d42cdd0d4f61a026685a5c3d5638bc03fd\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-6cdb6bbdc9-vmxss" podUID="c82340d4-e9a7-4928-9cd0-8803dc6c20f0" Sep 11 00:01:56.044193 containerd[1489]: time="2025-09-11T00:01:56.044075324Z" level=error msg="RunPodSandbox for 
&PodSandboxMetadata{Name:coredns-674b8bbfcf-5h7mv,Uid:7a6a2490-ae7e-4043-896c-7b7c2a641add,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"3e89213da40f2816306fd4dc00944edfab92dca4aa9d4ccbe5b934c44560a5a6\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 11 00:01:56.044596 kubelet[2620]: E0911 00:01:56.044523 2620 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3e89213da40f2816306fd4dc00944edfab92dca4aa9d4ccbe5b934c44560a5a6\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 11 00:01:56.044658 kubelet[2620]: E0911 00:01:56.044614 2620 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3e89213da40f2816306fd4dc00944edfab92dca4aa9d4ccbe5b934c44560a5a6\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-5h7mv" Sep 11 00:01:56.044965 kubelet[2620]: E0911 00:01:56.044921 2620 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3e89213da40f2816306fd4dc00944edfab92dca4aa9d4ccbe5b934c44560a5a6\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-5h7mv" Sep 11 00:01:56.045027 kubelet[2620]: E0911 00:01:56.044999 2620 pod_workers.go:1301] "Error syncing pod, 
skipping" err="failed to \"CreatePodSandbox\" for \"coredns-674b8bbfcf-5h7mv_kube-system(7a6a2490-ae7e-4043-896c-7b7c2a641add)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-674b8bbfcf-5h7mv_kube-system(7a6a2490-ae7e-4043-896c-7b7c2a641add)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"3e89213da40f2816306fd4dc00944edfab92dca4aa9d4ccbe5b934c44560a5a6\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-674b8bbfcf-5h7mv" podUID="7a6a2490-ae7e-4043-896c-7b7c2a641add" Sep 11 00:01:56.071438 containerd[1489]: time="2025-09-11T00:01:56.070997318Z" level=error msg="Failed to destroy network for sandbox \"cbf70d43c18aa30ca1258634917081f8d035228c1245cac47cf39c452b09d135\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 11 00:01:56.072711 containerd[1489]: time="2025-09-11T00:01:56.072665260Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-76bf8dddf-rcn6l,Uid:0bd1b13b-4632-4dc3-aa62-3c5ffa8e3394,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"cbf70d43c18aa30ca1258634917081f8d035228c1245cac47cf39c452b09d135\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 11 00:01:56.073262 kubelet[2620]: E0911 00:01:56.073192 2620 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"cbf70d43c18aa30ca1258634917081f8d035228c1245cac47cf39c452b09d135\": plugin type=\"calico\" failed (add): stat 
/var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 11 00:01:56.073373 kubelet[2620]: E0911 00:01:56.073280 2620 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"cbf70d43c18aa30ca1258634917081f8d035228c1245cac47cf39c452b09d135\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-76bf8dddf-rcn6l" Sep 11 00:01:56.073373 kubelet[2620]: E0911 00:01:56.073317 2620 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"cbf70d43c18aa30ca1258634917081f8d035228c1245cac47cf39c452b09d135\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-76bf8dddf-rcn6l" Sep 11 00:01:56.073465 kubelet[2620]: E0911 00:01:56.073434 2620 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-76bf8dddf-rcn6l_calico-system(0bd1b13b-4632-4dc3-aa62-3c5ffa8e3394)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-76bf8dddf-rcn6l_calico-system(0bd1b13b-4632-4dc3-aa62-3c5ffa8e3394)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"cbf70d43c18aa30ca1258634917081f8d035228c1245cac47cf39c452b09d135\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-76bf8dddf-rcn6l" podUID="0bd1b13b-4632-4dc3-aa62-3c5ffa8e3394" 
Sep 11 00:01:56.073591 containerd[1489]: time="2025-09-11T00:01:56.073247201Z" level=error msg="Failed to destroy network for sandbox \"8107b12359a3f828a9d4d50a35a1c4f9b104f7a50e643ff40c7e1004cc5ac41b\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 11 00:01:56.075037 containerd[1489]: time="2025-09-11T00:01:56.074935383Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-54d579b49d-skx4x,Uid:f383ef0f-3532-4ba3-ae25-9cf70a1b5786,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"8107b12359a3f828a9d4d50a35a1c4f9b104f7a50e643ff40c7e1004cc5ac41b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 11 00:01:56.075206 kubelet[2620]: E0911 00:01:56.075144 2620 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8107b12359a3f828a9d4d50a35a1c4f9b104f7a50e643ff40c7e1004cc5ac41b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 11 00:01:56.075247 kubelet[2620]: E0911 00:01:56.075221 2620 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8107b12359a3f828a9d4d50a35a1c4f9b104f7a50e643ff40c7e1004cc5ac41b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-54d579b49d-skx4x" Sep 11 00:01:56.075292 kubelet[2620]: E0911 00:01:56.075254 2620 kuberuntime_manager.go:1252] 
"CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8107b12359a3f828a9d4d50a35a1c4f9b104f7a50e643ff40c7e1004cc5ac41b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-54d579b49d-skx4x" Sep 11 00:01:56.075371 kubelet[2620]: E0911 00:01:56.075330 2620 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"goldmane-54d579b49d-skx4x_calico-system(f383ef0f-3532-4ba3-ae25-9cf70a1b5786)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"goldmane-54d579b49d-skx4x_calico-system(f383ef0f-3532-4ba3-ae25-9cf70a1b5786)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"8107b12359a3f828a9d4d50a35a1c4f9b104f7a50e643ff40c7e1004cc5ac41b\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-54d579b49d-skx4x" podUID="f383ef0f-3532-4ba3-ae25-9cf70a1b5786" Sep 11 00:02:00.013115 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1872780294.mount: Deactivated successfully. 
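Every sandbox failure above reduces to one root cause: the CNI plugin cannot stat /var/lib/calico/nodename, a file the calico/node container writes only after it has started and mounted /var/lib/calico/. The log's own hint ("check that the calico/node container is running and has mounted /var/lib/calico/") can be followed with a short triage sketch; the namespace and label selector below (`calico-system`, `k8s-app=calico-node`) are common defaults for a Calico install and are assumptions here, not taken from this log:

```shell
# Triage sketch for "stat /var/lib/calico/nodename: no such file or directory".
# Assumes the default Calico namespace/labels; adjust to match your deployment.

# 1. The nodename file exists only once calico/node is up on this host.
ls -l /var/lib/calico/nodename 2>/dev/null || echo "nodename not yet written"

# 2. Check whether the calico-node DaemonSet pod is running on this node.
kubectl -n calico-system get pods -l k8s-app=calico-node -o wide

# 3. If it is stuck, its logs usually name the blocker (image pull, datastore, mounts).
kubectl -n calico-system logs daemonset/calico-node -c calico-node --tail=50
```

In this particular log the condition is transient: the calico/node image finishes pulling at 00:02:00, the container starts successfully, and by 00:02:01 the CNI plugin is assigning pod IPs (192.168.88.129/26) without error.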
Sep 11 00:02:00.264512 containerd[1489]: time="2025-09-11T00:02:00.264393175Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 11 00:02:00.265466 containerd[1489]: time="2025-09-11T00:02:00.265430168Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.30.3: active requests=0, bytes read=151100457" Sep 11 00:02:00.266438 containerd[1489]: time="2025-09-11T00:02:00.266404159Z" level=info msg="ImageCreate event name:\"sha256:2b8abd2140fc4464ed664d225fe38e5b90bbfcf62996b484b0fc0e0537b6a4a9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 11 00:02:00.268293 containerd[1489]: time="2025-09-11T00:02:00.268262019Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:bcb8146fcaeced1e1c88fad3eaa697f1680746bd23c3e7e8d4535bc484c6f2a1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 11 00:02:00.269087 containerd[1489]: time="2025-09-11T00:02:00.269047404Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.30.3\" with image id \"sha256:2b8abd2140fc4464ed664d225fe38e5b90bbfcf62996b484b0fc0e0537b6a4a9\", repo tag \"ghcr.io/flatcar/calico/node:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/node@sha256:bcb8146fcaeced1e1c88fad3eaa697f1680746bd23c3e7e8d4535bc484c6f2a1\", size \"151100319\" in 4.412797193s" Sep 11 00:02:00.269087 containerd[1489]: time="2025-09-11T00:02:00.269081885Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.3\" returns image reference \"sha256:2b8abd2140fc4464ed664d225fe38e5b90bbfcf62996b484b0fc0e0537b6a4a9\"" Sep 11 00:02:00.288936 containerd[1489]: time="2025-09-11T00:02:00.288889877Z" level=info msg="CreateContainer within sandbox \"689a928f0fb0a6999d1e871ec18802351fd4c000021d92dfd9bd2a3ba1e197bd\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Sep 11 00:02:00.325562 containerd[1489]: time="2025-09-11T00:02:00.325517645Z" level=info msg="Container 
09ea663df9192c2db19e8800a47fa97000f057d9ea9cbd9613ea36beabc142a9: CDI devices from CRI Config.CDIDevices: []" Sep 11 00:02:00.333816 containerd[1489]: time="2025-09-11T00:02:00.333768509Z" level=info msg="CreateContainer within sandbox \"689a928f0fb0a6999d1e871ec18802351fd4c000021d92dfd9bd2a3ba1e197bd\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"09ea663df9192c2db19e8800a47fa97000f057d9ea9cbd9613ea36beabc142a9\"" Sep 11 00:02:00.334402 containerd[1489]: time="2025-09-11T00:02:00.334377088Z" level=info msg="StartContainer for \"09ea663df9192c2db19e8800a47fa97000f057d9ea9cbd9613ea36beabc142a9\"" Sep 11 00:02:00.335886 containerd[1489]: time="2025-09-11T00:02:00.335845895Z" level=info msg="connecting to shim 09ea663df9192c2db19e8800a47fa97000f057d9ea9cbd9613ea36beabc142a9" address="unix:///run/containerd/s/f62426b488ebbddf386fd4da75624a08a4f38fa90750a70cdb70f8bac024f135" protocol=ttrpc version=3 Sep 11 00:02:00.364506 systemd[1]: Started cri-containerd-09ea663df9192c2db19e8800a47fa97000f057d9ea9cbd9613ea36beabc142a9.scope - libcontainer container 09ea663df9192c2db19e8800a47fa97000f057d9ea9cbd9613ea36beabc142a9. Sep 11 00:02:00.399610 containerd[1489]: time="2025-09-11T00:02:00.399575448Z" level=info msg="StartContainer for \"09ea663df9192c2db19e8800a47fa97000f057d9ea9cbd9613ea36beabc142a9\" returns successfully" Sep 11 00:02:00.514171 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information. Sep 11 00:02:00.514276 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld . All Rights Reserved. 
Sep 11 00:02:00.697596 kubelet[2620]: I0911 00:02:00.697560 2620 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c82340d4-e9a7-4928-9cd0-8803dc6c20f0-whisker-ca-bundle\") pod \"c82340d4-e9a7-4928-9cd0-8803dc6c20f0\" (UID: \"c82340d4-e9a7-4928-9cd0-8803dc6c20f0\") " Sep 11 00:02:00.697969 kubelet[2620]: I0911 00:02:00.697645 2620 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-j9wvr\" (UniqueName: \"kubernetes.io/projected/c82340d4-e9a7-4928-9cd0-8803dc6c20f0-kube-api-access-j9wvr\") pod \"c82340d4-e9a7-4928-9cd0-8803dc6c20f0\" (UID: \"c82340d4-e9a7-4928-9cd0-8803dc6c20f0\") " Sep 11 00:02:00.697969 kubelet[2620]: I0911 00:02:00.697720 2620 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/c82340d4-e9a7-4928-9cd0-8803dc6c20f0-whisker-backend-key-pair\") pod \"c82340d4-e9a7-4928-9cd0-8803dc6c20f0\" (UID: \"c82340d4-e9a7-4928-9cd0-8803dc6c20f0\") " Sep 11 00:02:00.711665 kubelet[2620]: I0911 00:02:00.711589 2620 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c82340d4-e9a7-4928-9cd0-8803dc6c20f0-whisker-ca-bundle" (OuterVolumeSpecName: "whisker-ca-bundle") pod "c82340d4-e9a7-4928-9cd0-8803dc6c20f0" (UID: "c82340d4-e9a7-4928-9cd0-8803dc6c20f0"). InnerVolumeSpecName "whisker-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Sep 11 00:02:00.712277 kubelet[2620]: I0911 00:02:00.712017 2620 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c82340d4-e9a7-4928-9cd0-8803dc6c20f0-kube-api-access-j9wvr" (OuterVolumeSpecName: "kube-api-access-j9wvr") pod "c82340d4-e9a7-4928-9cd0-8803dc6c20f0" (UID: "c82340d4-e9a7-4928-9cd0-8803dc6c20f0"). InnerVolumeSpecName "kube-api-access-j9wvr". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Sep 11 00:02:00.712493 kubelet[2620]: I0911 00:02:00.712462 2620 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c82340d4-e9a7-4928-9cd0-8803dc6c20f0-whisker-backend-key-pair" (OuterVolumeSpecName: "whisker-backend-key-pair") pod "c82340d4-e9a7-4928-9cd0-8803dc6c20f0" (UID: "c82340d4-e9a7-4928-9cd0-8803dc6c20f0"). InnerVolumeSpecName "whisker-backend-key-pair". PluginName "kubernetes.io/secret", VolumeGIDValue "" Sep 11 00:02:00.778690 systemd[1]: Removed slice kubepods-besteffort-podc82340d4_e9a7_4928_9cd0_8803dc6c20f0.slice - libcontainer container kubepods-besteffort-podc82340d4_e9a7_4928_9cd0_8803dc6c20f0.slice. Sep 11 00:02:00.799381 kubelet[2620]: I0911 00:02:00.798895 2620 reconciler_common.go:299] "Volume detached for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c82340d4-e9a7-4928-9cd0-8803dc6c20f0-whisker-ca-bundle\") on node \"localhost\" DevicePath \"\"" Sep 11 00:02:00.799381 kubelet[2620]: I0911 00:02:00.798926 2620 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-j9wvr\" (UniqueName: \"kubernetes.io/projected/c82340d4-e9a7-4928-9cd0-8803dc6c20f0-kube-api-access-j9wvr\") on node \"localhost\" DevicePath \"\"" Sep 11 00:02:00.799381 kubelet[2620]: I0911 00:02:00.798936 2620 reconciler_common.go:299] "Volume detached for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/c82340d4-e9a7-4928-9cd0-8803dc6c20f0-whisker-backend-key-pair\") on node \"localhost\" DevicePath \"\"" Sep 11 00:02:00.889994 kubelet[2620]: I0911 00:02:00.889823 2620 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-fb6v6" podStartSLOduration=1.707952126 podStartE2EDuration="13.889806449s" podCreationTimestamp="2025-09-11 00:01:47 +0000 UTC" firstStartedPulling="2025-09-11 00:01:48.089125622 +0000 UTC m=+23.431597655" lastFinishedPulling="2025-09-11 00:02:00.270979905 
+0000 UTC m=+35.613451978" observedRunningTime="2025-09-11 00:02:00.888151076 +0000 UTC m=+36.230623189" watchObservedRunningTime="2025-09-11 00:02:00.889806449 +0000 UTC m=+36.232278522" Sep 11 00:02:00.967934 systemd[1]: Created slice kubepods-besteffort-pod0f9820d4_6047_4aec_ab4e_522abd8ae2f3.slice - libcontainer container kubepods-besteffort-pod0f9820d4_6047_4aec_ab4e_522abd8ae2f3.slice. Sep 11 00:02:01.000691 kubelet[2620]: I0911 00:02:01.000645 2620 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/0f9820d4-6047-4aec-ab4e-522abd8ae2f3-whisker-ca-bundle\") pod \"whisker-6d86b776bb-x8c49\" (UID: \"0f9820d4-6047-4aec-ab4e-522abd8ae2f3\") " pod="calico-system/whisker-6d86b776bb-x8c49" Sep 11 00:02:01.000826 kubelet[2620]: I0911 00:02:01.000733 2620 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-phsxl\" (UniqueName: \"kubernetes.io/projected/0f9820d4-6047-4aec-ab4e-522abd8ae2f3-kube-api-access-phsxl\") pod \"whisker-6d86b776bb-x8c49\" (UID: \"0f9820d4-6047-4aec-ab4e-522abd8ae2f3\") " pod="calico-system/whisker-6d86b776bb-x8c49" Sep 11 00:02:01.000826 kubelet[2620]: I0911 00:02:01.000774 2620 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/0f9820d4-6047-4aec-ab4e-522abd8ae2f3-whisker-backend-key-pair\") pod \"whisker-6d86b776bb-x8c49\" (UID: \"0f9820d4-6047-4aec-ab4e-522abd8ae2f3\") " pod="calico-system/whisker-6d86b776bb-x8c49" Sep 11 00:02:01.014091 systemd[1]: var-lib-kubelet-pods-c82340d4\x2de9a7\x2d4928\x2d9cd0\x2d8803dc6c20f0-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dj9wvr.mount: Deactivated successfully. 
Sep 11 00:02:01.014180 systemd[1]: var-lib-kubelet-pods-c82340d4\x2de9a7\x2d4928\x2d9cd0\x2d8803dc6c20f0-volumes-kubernetes.io\x7esecret-whisker\x2dbackend\x2dkey\x2dpair.mount: Deactivated successfully. Sep 11 00:02:01.056378 containerd[1489]: time="2025-09-11T00:02:01.056296222Z" level=info msg="TaskExit event in podsandbox handler container_id:\"09ea663df9192c2db19e8800a47fa97000f057d9ea9cbd9613ea36beabc142a9\" id:\"d03d0aac5e25ba766f884b0ee7be3a046a427315d9cba8d58687d76fe0cb8e98\" pid:3800 exit_status:1 exited_at:{seconds:1757548921 nanos:55941611}" Sep 11 00:02:01.286405 containerd[1489]: time="2025-09-11T00:02:01.286266193Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-6d86b776bb-x8c49,Uid:0f9820d4-6047-4aec-ab4e-522abd8ae2f3,Namespace:calico-system,Attempt:0,}" Sep 11 00:02:01.438220 systemd-networkd[1423]: cali93dae3e4925: Link UP Sep 11 00:02:01.439060 systemd-networkd[1423]: cali93dae3e4925: Gained carrier Sep 11 00:02:01.454410 containerd[1489]: 2025-09-11 00:02:01.307 [INFO][3815] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Sep 11 00:02:01.454410 containerd[1489]: 2025-09-11 00:02:01.336 [INFO][3815] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-whisker--6d86b776bb--x8c49-eth0 whisker-6d86b776bb- calico-system 0f9820d4-6047-4aec-ab4e-522abd8ae2f3 939 0 2025-09-11 00:02:00 +0000 UTC map[app.kubernetes.io/name:whisker k8s-app:whisker pod-template-hash:6d86b776bb projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:whisker] map[] [] [] []} {k8s localhost whisker-6d86b776bb-x8c49 eth0 whisker [] [] [kns.calico-system ksa.calico-system.whisker] cali93dae3e4925 [] [] }} ContainerID="94baca7c442b4381ab7bd01459c3ca6f7554484c68f1c16892176d5ce81e6fd7" Namespace="calico-system" Pod="whisker-6d86b776bb-x8c49" WorkloadEndpoint="localhost-k8s-whisker--6d86b776bb--x8c49-" Sep 11 00:02:01.454410 
containerd[1489]: 2025-09-11 00:02:01.336 [INFO][3815] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="94baca7c442b4381ab7bd01459c3ca6f7554484c68f1c16892176d5ce81e6fd7" Namespace="calico-system" Pod="whisker-6d86b776bb-x8c49" WorkloadEndpoint="localhost-k8s-whisker--6d86b776bb--x8c49-eth0" Sep 11 00:02:01.454410 containerd[1489]: 2025-09-11 00:02:01.393 [INFO][3829] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="94baca7c442b4381ab7bd01459c3ca6f7554484c68f1c16892176d5ce81e6fd7" HandleID="k8s-pod-network.94baca7c442b4381ab7bd01459c3ca6f7554484c68f1c16892176d5ce81e6fd7" Workload="localhost-k8s-whisker--6d86b776bb--x8c49-eth0" Sep 11 00:02:01.454719 containerd[1489]: 2025-09-11 00:02:01.393 [INFO][3829] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="94baca7c442b4381ab7bd01459c3ca6f7554484c68f1c16892176d5ce81e6fd7" HandleID="k8s-pod-network.94baca7c442b4381ab7bd01459c3ca6f7554484c68f1c16892176d5ce81e6fd7" Workload="localhost-k8s-whisker--6d86b776bb--x8c49-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x400004d660), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"whisker-6d86b776bb-x8c49", "timestamp":"2025-09-11 00:02:01.393282813 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 11 00:02:01.454719 containerd[1489]: 2025-09-11 00:02:01.393 [INFO][3829] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 11 00:02:01.454719 containerd[1489]: 2025-09-11 00:02:01.393 [INFO][3829] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Sep 11 00:02:01.454719 containerd[1489]: 2025-09-11 00:02:01.393 [INFO][3829] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Sep 11 00:02:01.454719 containerd[1489]: 2025-09-11 00:02:01.404 [INFO][3829] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.94baca7c442b4381ab7bd01459c3ca6f7554484c68f1c16892176d5ce81e6fd7" host="localhost" Sep 11 00:02:01.454719 containerd[1489]: 2025-09-11 00:02:01.409 [INFO][3829] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Sep 11 00:02:01.454719 containerd[1489]: 2025-09-11 00:02:01.413 [INFO][3829] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Sep 11 00:02:01.454719 containerd[1489]: 2025-09-11 00:02:01.415 [INFO][3829] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Sep 11 00:02:01.454719 containerd[1489]: 2025-09-11 00:02:01.417 [INFO][3829] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Sep 11 00:02:01.454719 containerd[1489]: 2025-09-11 00:02:01.417 [INFO][3829] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.94baca7c442b4381ab7bd01459c3ca6f7554484c68f1c16892176d5ce81e6fd7" host="localhost" Sep 11 00:02:01.454909 containerd[1489]: 2025-09-11 00:02:01.418 [INFO][3829] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.94baca7c442b4381ab7bd01459c3ca6f7554484c68f1c16892176d5ce81e6fd7 Sep 11 00:02:01.454909 containerd[1489]: 2025-09-11 00:02:01.422 [INFO][3829] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.94baca7c442b4381ab7bd01459c3ca6f7554484c68f1c16892176d5ce81e6fd7" host="localhost" Sep 11 00:02:01.454909 containerd[1489]: 2025-09-11 00:02:01.428 [INFO][3829] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.129/26] block=192.168.88.128/26 
handle="k8s-pod-network.94baca7c442b4381ab7bd01459c3ca6f7554484c68f1c16892176d5ce81e6fd7" host="localhost" Sep 11 00:02:01.454909 containerd[1489]: 2025-09-11 00:02:01.429 [INFO][3829] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.129/26] handle="k8s-pod-network.94baca7c442b4381ab7bd01459c3ca6f7554484c68f1c16892176d5ce81e6fd7" host="localhost" Sep 11 00:02:01.454909 containerd[1489]: 2025-09-11 00:02:01.429 [INFO][3829] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 11 00:02:01.454909 containerd[1489]: 2025-09-11 00:02:01.429 [INFO][3829] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.129/26] IPv6=[] ContainerID="94baca7c442b4381ab7bd01459c3ca6f7554484c68f1c16892176d5ce81e6fd7" HandleID="k8s-pod-network.94baca7c442b4381ab7bd01459c3ca6f7554484c68f1c16892176d5ce81e6fd7" Workload="localhost-k8s-whisker--6d86b776bb--x8c49-eth0" Sep 11 00:02:01.455031 containerd[1489]: 2025-09-11 00:02:01.431 [INFO][3815] cni-plugin/k8s.go 418: Populated endpoint ContainerID="94baca7c442b4381ab7bd01459c3ca6f7554484c68f1c16892176d5ce81e6fd7" Namespace="calico-system" Pod="whisker-6d86b776bb-x8c49" WorkloadEndpoint="localhost-k8s-whisker--6d86b776bb--x8c49-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-whisker--6d86b776bb--x8c49-eth0", GenerateName:"whisker-6d86b776bb-", Namespace:"calico-system", SelfLink:"", UID:"0f9820d4-6047-4aec-ab4e-522abd8ae2f3", ResourceVersion:"939", Generation:0, CreationTimestamp:time.Date(2025, time.September, 11, 0, 2, 0, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"6d86b776bb", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), 
OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"whisker-6d86b776bb-x8c49", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"cali93dae3e4925", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 11 00:02:01.455031 containerd[1489]: 2025-09-11 00:02:01.431 [INFO][3815] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.129/32] ContainerID="94baca7c442b4381ab7bd01459c3ca6f7554484c68f1c16892176d5ce81e6fd7" Namespace="calico-system" Pod="whisker-6d86b776bb-x8c49" WorkloadEndpoint="localhost-k8s-whisker--6d86b776bb--x8c49-eth0" Sep 11 00:02:01.455095 containerd[1489]: 2025-09-11 00:02:01.431 [INFO][3815] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali93dae3e4925 ContainerID="94baca7c442b4381ab7bd01459c3ca6f7554484c68f1c16892176d5ce81e6fd7" Namespace="calico-system" Pod="whisker-6d86b776bb-x8c49" WorkloadEndpoint="localhost-k8s-whisker--6d86b776bb--x8c49-eth0" Sep 11 00:02:01.455095 containerd[1489]: 2025-09-11 00:02:01.439 [INFO][3815] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="94baca7c442b4381ab7bd01459c3ca6f7554484c68f1c16892176d5ce81e6fd7" Namespace="calico-system" Pod="whisker-6d86b776bb-x8c49" WorkloadEndpoint="localhost-k8s-whisker--6d86b776bb--x8c49-eth0" Sep 11 00:02:01.455134 containerd[1489]: 2025-09-11 00:02:01.440 [INFO][3815] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="94baca7c442b4381ab7bd01459c3ca6f7554484c68f1c16892176d5ce81e6fd7" Namespace="calico-system" Pod="whisker-6d86b776bb-x8c49" 
WorkloadEndpoint="localhost-k8s-whisker--6d86b776bb--x8c49-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-whisker--6d86b776bb--x8c49-eth0", GenerateName:"whisker-6d86b776bb-", Namespace:"calico-system", SelfLink:"", UID:"0f9820d4-6047-4aec-ab4e-522abd8ae2f3", ResourceVersion:"939", Generation:0, CreationTimestamp:time.Date(2025, time.September, 11, 0, 2, 0, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"6d86b776bb", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"94baca7c442b4381ab7bd01459c3ca6f7554484c68f1c16892176d5ce81e6fd7", Pod:"whisker-6d86b776bb-x8c49", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"cali93dae3e4925", MAC:"3e:f0:0e:64:89:8e", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 11 00:02:01.455182 containerd[1489]: 2025-09-11 00:02:01.452 [INFO][3815] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="94baca7c442b4381ab7bd01459c3ca6f7554484c68f1c16892176d5ce81e6fd7" Namespace="calico-system" Pod="whisker-6d86b776bb-x8c49" WorkloadEndpoint="localhost-k8s-whisker--6d86b776bb--x8c49-eth0" Sep 11 00:02:01.500076 containerd[1489]: time="2025-09-11T00:02:01.500022425Z" level=info msg="connecting to shim 
94baca7c442b4381ab7bd01459c3ca6f7554484c68f1c16892176d5ce81e6fd7" address="unix:///run/containerd/s/74ebca6fcc22b8f37218d4f1ac40c2928e40623e0f35426d986408821e5343e4" namespace=k8s.io protocol=ttrpc version=3 Sep 11 00:02:01.533555 systemd[1]: Started cri-containerd-94baca7c442b4381ab7bd01459c3ca6f7554484c68f1c16892176d5ce81e6fd7.scope - libcontainer container 94baca7c442b4381ab7bd01459c3ca6f7554484c68f1c16892176d5ce81e6fd7. Sep 11 00:02:01.545476 systemd-resolved[1350]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Sep 11 00:02:01.565841 containerd[1489]: time="2025-09-11T00:02:01.565805813Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-6d86b776bb-x8c49,Uid:0f9820d4-6047-4aec-ab4e-522abd8ae2f3,Namespace:calico-system,Attempt:0,} returns sandbox id \"94baca7c442b4381ab7bd01459c3ca6f7554484c68f1c16892176d5ce81e6fd7\"" Sep 11 00:02:01.567234 containerd[1489]: time="2025-09-11T00:02:01.567210456Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.3\"" Sep 11 00:02:01.987817 containerd[1489]: time="2025-09-11T00:02:01.987781345Z" level=info msg="TaskExit event in podsandbox handler container_id:\"09ea663df9192c2db19e8800a47fa97000f057d9ea9cbd9613ea36beabc142a9\" id:\"d3febdf27a2cbe00580c3577bce4b9bde0cc42a63371b0d7dfef4112dffae449\" pid:3998 exit_status:1 exited_at:{seconds:1757548921 nanos:987499936}" Sep 11 00:02:02.765849 kubelet[2620]: I0911 00:02:02.765800 2620 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c82340d4-e9a7-4928-9cd0-8803dc6c20f0" path="/var/lib/kubelet/pods/c82340d4-e9a7-4928-9cd0-8803dc6c20f0/volumes" Sep 11 00:02:02.849848 containerd[1489]: time="2025-09-11T00:02:02.849797877Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 11 00:02:02.850745 containerd[1489]: time="2025-09-11T00:02:02.850571020Z" level=info msg="stop pulling image 
ghcr.io/flatcar/calico/whisker:v3.30.3: active requests=0, bytes read=4605606" Sep 11 00:02:02.851444 containerd[1489]: time="2025-09-11T00:02:02.851412485Z" level=info msg="ImageCreate event name:\"sha256:270a0129ec34c3ad6ae6d56c0afce111eb0baa25dfdacb63722ec5887bafd3c5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 11 00:02:02.860509 containerd[1489]: time="2025-09-11T00:02:02.860462635Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker@sha256:e7113761fc7633d515882f0d48b5c8d0b8e62f3f9d34823f2ee194bb16d2ec44\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 11 00:02:02.861388 containerd[1489]: time="2025-09-11T00:02:02.861329261Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/whisker:v3.30.3\" with image id \"sha256:270a0129ec34c3ad6ae6d56c0afce111eb0baa25dfdacb63722ec5887bafd3c5\", repo tag \"ghcr.io/flatcar/calico/whisker:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/whisker@sha256:e7113761fc7633d515882f0d48b5c8d0b8e62f3f9d34823f2ee194bb16d2ec44\", size \"5974839\" in 1.29399952s" Sep 11 00:02:02.861585 containerd[1489]: time="2025-09-11T00:02:02.861478225Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.3\" returns image reference \"sha256:270a0129ec34c3ad6ae6d56c0afce111eb0baa25dfdacb63722ec5887bafd3c5\"" Sep 11 00:02:02.865809 containerd[1489]: time="2025-09-11T00:02:02.865774193Z" level=info msg="CreateContainer within sandbox \"94baca7c442b4381ab7bd01459c3ca6f7554484c68f1c16892176d5ce81e6fd7\" for container &ContainerMetadata{Name:whisker,Attempt:0,}" Sep 11 00:02:02.874670 containerd[1489]: time="2025-09-11T00:02:02.874625137Z" level=info msg="Container efa8253c44fa2a2e6a762eafe2800a9969bef54aa1b2b2520f9baaad1a3f4c06: CDI devices from CRI Config.CDIDevices: []" Sep 11 00:02:02.883097 containerd[1489]: time="2025-09-11T00:02:02.883053909Z" level=info msg="CreateContainer within sandbox \"94baca7c442b4381ab7bd01459c3ca6f7554484c68f1c16892176d5ce81e6fd7\" for 
&ContainerMetadata{Name:whisker,Attempt:0,} returns container id \"efa8253c44fa2a2e6a762eafe2800a9969bef54aa1b2b2520f9baaad1a3f4c06\"" Sep 11 00:02:02.883980 containerd[1489]: time="2025-09-11T00:02:02.883774770Z" level=info msg="StartContainer for \"efa8253c44fa2a2e6a762eafe2800a9969bef54aa1b2b2520f9baaad1a3f4c06\"" Sep 11 00:02:02.885240 containerd[1489]: time="2025-09-11T00:02:02.885212333Z" level=info msg="connecting to shim efa8253c44fa2a2e6a762eafe2800a9969bef54aa1b2b2520f9baaad1a3f4c06" address="unix:///run/containerd/s/74ebca6fcc22b8f37218d4f1ac40c2928e40623e0f35426d986408821e5343e4" protocol=ttrpc version=3 Sep 11 00:02:02.904141 systemd[1]: Started cri-containerd-efa8253c44fa2a2e6a762eafe2800a9969bef54aa1b2b2520f9baaad1a3f4c06.scope - libcontainer container efa8253c44fa2a2e6a762eafe2800a9969bef54aa1b2b2520f9baaad1a3f4c06. Sep 11 00:02:02.952824 containerd[1489]: time="2025-09-11T00:02:02.952723587Z" level=info msg="StartContainer for \"efa8253c44fa2a2e6a762eafe2800a9969bef54aa1b2b2520f9baaad1a3f4c06\" returns successfully" Sep 11 00:02:02.956068 containerd[1489]: time="2025-09-11T00:02:02.956037326Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.3\"" Sep 11 00:02:02.976292 containerd[1489]: time="2025-09-11T00:02:02.976245569Z" level=info msg="TaskExit event in podsandbox handler container_id:\"09ea663df9192c2db19e8800a47fa97000f057d9ea9cbd9613ea36beabc142a9\" id:\"c9371dff46b583befdca3304f8460a5f7ea629ca54461f828eb8be0d447efd38\" pid:4039 exit_status:1 exited_at:{seconds:1757548922 nanos:975691433}" Sep 11 00:02:03.195506 systemd-networkd[1423]: cali93dae3e4925: Gained IPv6LL Sep 11 00:02:04.763995 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2733794351.mount: Deactivated successfully. 
Sep 11 00:02:04.826280 containerd[1489]: time="2025-09-11T00:02:04.826224643Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker-backend:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 11 00:02:04.826799 containerd[1489]: time="2025-09-11T00:02:04.826765338Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.3: active requests=0, bytes read=30823700" Sep 11 00:02:04.827647 containerd[1489]: time="2025-09-11T00:02:04.827557801Z" level=info msg="ImageCreate event name:\"sha256:e210e86234bc99f018431b30477c5ca2ad6f7ecf67ef011498f7beb48fb0b21f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 11 00:02:04.829388 containerd[1489]: time="2025-09-11T00:02:04.829318210Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker-backend@sha256:29becebc47401da9997a2a30f4c25c511a5f379d17275680b048224829af71a5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 11 00:02:04.830042 containerd[1489]: time="2025-09-11T00:02:04.830013829Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.3\" with image id \"sha256:e210e86234bc99f018431b30477c5ca2ad6f7ecf67ef011498f7beb48fb0b21f\", repo tag \"ghcr.io/flatcar/calico/whisker-backend:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/whisker-backend@sha256:29becebc47401da9997a2a30f4c25c511a5f379d17275680b048224829af71a5\", size \"30823530\" in 1.873800498s" Sep 11 00:02:04.830116 containerd[1489]: time="2025-09-11T00:02:04.830047350Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.3\" returns image reference \"sha256:e210e86234bc99f018431b30477c5ca2ad6f7ecf67ef011498f7beb48fb0b21f\"" Sep 11 00:02:04.834792 containerd[1489]: time="2025-09-11T00:02:04.834758882Z" level=info msg="CreateContainer within sandbox \"94baca7c442b4381ab7bd01459c3ca6f7554484c68f1c16892176d5ce81e6fd7\" for container &ContainerMetadata{Name:whisker-backend,Attempt:0,}" Sep 11 00:02:04.841124 
containerd[1489]: time="2025-09-11T00:02:04.841084460Z" level=info msg="Container 3b7e23d3ed3ce021b9a0fa78a07ed2b2ce182d22ac213a769dac505fc014ff0e: CDI devices from CRI Config.CDIDevices: []" Sep 11 00:02:04.850419 containerd[1489]: time="2025-09-11T00:02:04.850292597Z" level=info msg="CreateContainer within sandbox \"94baca7c442b4381ab7bd01459c3ca6f7554484c68f1c16892176d5ce81e6fd7\" for &ContainerMetadata{Name:whisker-backend,Attempt:0,} returns container id \"3b7e23d3ed3ce021b9a0fa78a07ed2b2ce182d22ac213a769dac505fc014ff0e\"" Sep 11 00:02:04.851124 containerd[1489]: time="2025-09-11T00:02:04.851103380Z" level=info msg="StartContainer for \"3b7e23d3ed3ce021b9a0fa78a07ed2b2ce182d22ac213a769dac505fc014ff0e\"" Sep 11 00:02:04.852329 containerd[1489]: time="2025-09-11T00:02:04.852303054Z" level=info msg="connecting to shim 3b7e23d3ed3ce021b9a0fa78a07ed2b2ce182d22ac213a769dac505fc014ff0e" address="unix:///run/containerd/s/74ebca6fcc22b8f37218d4f1ac40c2928e40623e0f35426d986408821e5343e4" protocol=ttrpc version=3 Sep 11 00:02:04.873523 systemd[1]: Started cri-containerd-3b7e23d3ed3ce021b9a0fa78a07ed2b2ce182d22ac213a769dac505fc014ff0e.scope - libcontainer container 3b7e23d3ed3ce021b9a0fa78a07ed2b2ce182d22ac213a769dac505fc014ff0e. 
Sep 11 00:02:04.910282 containerd[1489]: time="2025-09-11T00:02:04.910243677Z" level=info msg="StartContainer for \"3b7e23d3ed3ce021b9a0fa78a07ed2b2ce182d22ac213a769dac505fc014ff0e\" returns successfully" Sep 11 00:02:05.780049 kubelet[2620]: I0911 00:02:05.780003 2620 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Sep 11 00:02:05.780707 kubelet[2620]: E0911 00:02:05.780289 2620 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 11 00:02:05.890354 kubelet[2620]: E0911 00:02:05.890219 2620 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 11 00:02:06.360063 systemd-networkd[1423]: vxlan.calico: Link UP Sep 11 00:02:06.360072 systemd-networkd[1423]: vxlan.calico: Gained carrier Sep 11 00:02:06.445448 systemd[1]: Started sshd@7-10.0.0.103:22-10.0.0.1:40642.service - OpenSSH per-connection server daemon (10.0.0.1:40642). Sep 11 00:02:06.510278 sshd[4299]: Accepted publickey for core from 10.0.0.1 port 40642 ssh2: RSA SHA256:lsmhoLsJ6VkHSnmB7JrdlCWHjclEQMgNfFd+nspwIAE Sep 11 00:02:06.511652 sshd-session[4299]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 11 00:02:06.515583 systemd-logind[1467]: New session 8 of user core. Sep 11 00:02:06.523553 systemd[1]: Started session-8.scope - Session 8 of User core. Sep 11 00:02:06.731202 sshd[4309]: Connection closed by 10.0.0.1 port 40642 Sep 11 00:02:06.731536 sshd-session[4299]: pam_unix(sshd:session): session closed for user core Sep 11 00:02:06.735147 systemd[1]: sshd@7-10.0.0.103:22-10.0.0.1:40642.service: Deactivated successfully. Sep 11 00:02:06.737931 systemd[1]: session-8.scope: Deactivated successfully. Sep 11 00:02:06.738637 systemd-logind[1467]: Session 8 logged out. Waiting for processes to exit. 
Sep 11 00:02:06.740336 systemd-logind[1467]: Removed session 8. Sep 11 00:02:07.611500 systemd-networkd[1423]: vxlan.calico: Gained IPv6LL Sep 11 00:02:07.763886 kubelet[2620]: E0911 00:02:07.763854 2620 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 11 00:02:07.764399 containerd[1489]: time="2025-09-11T00:02:07.764318276Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-76bf8dddf-rcn6l,Uid:0bd1b13b-4632-4dc3-aa62-3c5ffa8e3394,Namespace:calico-system,Attempt:0,}" Sep 11 00:02:07.765046 containerd[1489]: time="2025-09-11T00:02:07.764999294Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-nr8lq,Uid:40a8c81e-f60b-4872-94e9-5d392a0fc45a,Namespace:kube-system,Attempt:0,}" Sep 11 00:02:07.975635 systemd-networkd[1423]: caliee9dac82ca7: Link UP Sep 11 00:02:07.975981 systemd-networkd[1423]: caliee9dac82ca7: Gained carrier Sep 11 00:02:07.989816 kubelet[2620]: I0911 00:02:07.989753 2620 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/whisker-6d86b776bb-x8c49" podStartSLOduration=4.725961666 podStartE2EDuration="7.989733107s" podCreationTimestamp="2025-09-11 00:02:00 +0000 UTC" firstStartedPulling="2025-09-11 00:02:01.56701789 +0000 UTC m=+36.909489963" lastFinishedPulling="2025-09-11 00:02:04.830789331 +0000 UTC m=+40.173261404" observedRunningTime="2025-09-11 00:02:05.911909508 +0000 UTC m=+41.254381581" watchObservedRunningTime="2025-09-11 00:02:07.989733107 +0000 UTC m=+43.332205180" Sep 11 00:02:07.993871 containerd[1489]: 2025-09-11 00:02:07.878 [INFO][4365] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-coredns--674b8bbfcf--nr8lq-eth0 coredns-674b8bbfcf- kube-system 40a8c81e-f60b-4872-94e9-5d392a0fc45a 877 0 2025-09-11 00:01:31 +0000 UTC map[k8s-app:kube-dns 
pod-template-hash:674b8bbfcf projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s localhost coredns-674b8bbfcf-nr8lq eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] caliee9dac82ca7 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="3078184bcf44b17fe23419ce3fa2ddbdfe55b63a6c1f57128f33a8d43c305ad2" Namespace="kube-system" Pod="coredns-674b8bbfcf-nr8lq" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--nr8lq-" Sep 11 00:02:07.993871 containerd[1489]: 2025-09-11 00:02:07.878 [INFO][4365] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="3078184bcf44b17fe23419ce3fa2ddbdfe55b63a6c1f57128f33a8d43c305ad2" Namespace="kube-system" Pod="coredns-674b8bbfcf-nr8lq" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--nr8lq-eth0" Sep 11 00:02:07.993871 containerd[1489]: 2025-09-11 00:02:07.910 [INFO][4385] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="3078184bcf44b17fe23419ce3fa2ddbdfe55b63a6c1f57128f33a8d43c305ad2" HandleID="k8s-pod-network.3078184bcf44b17fe23419ce3fa2ddbdfe55b63a6c1f57128f33a8d43c305ad2" Workload="localhost-k8s-coredns--674b8bbfcf--nr8lq-eth0" Sep 11 00:02:07.994084 containerd[1489]: 2025-09-11 00:02:07.910 [INFO][4385] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="3078184bcf44b17fe23419ce3fa2ddbdfe55b63a6c1f57128f33a8d43c305ad2" HandleID="k8s-pod-network.3078184bcf44b17fe23419ce3fa2ddbdfe55b63a6c1f57128f33a8d43c305ad2" Workload="localhost-k8s-coredns--674b8bbfcf--nr8lq-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40001375b0), Attrs:map[string]string{"namespace":"kube-system", "node":"localhost", "pod":"coredns-674b8bbfcf-nr8lq", "timestamp":"2025-09-11 00:02:07.910800399 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), 
HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 11 00:02:07.994084 containerd[1489]: 2025-09-11 00:02:07.911 [INFO][4385] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 11 00:02:07.994084 containerd[1489]: 2025-09-11 00:02:07.911 [INFO][4385] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 11 00:02:07.994084 containerd[1489]: 2025-09-11 00:02:07.911 [INFO][4385] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Sep 11 00:02:07.994084 containerd[1489]: 2025-09-11 00:02:07.923 [INFO][4385] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.3078184bcf44b17fe23419ce3fa2ddbdfe55b63a6c1f57128f33a8d43c305ad2" host="localhost" Sep 11 00:02:07.994084 containerd[1489]: 2025-09-11 00:02:07.933 [INFO][4385] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Sep 11 00:02:07.994084 containerd[1489]: 2025-09-11 00:02:07.938 [INFO][4385] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Sep 11 00:02:07.994084 containerd[1489]: 2025-09-11 00:02:07.942 [INFO][4385] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Sep 11 00:02:07.994084 containerd[1489]: 2025-09-11 00:02:07.946 [INFO][4385] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Sep 11 00:02:07.994084 containerd[1489]: 2025-09-11 00:02:07.946 [INFO][4385] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.3078184bcf44b17fe23419ce3fa2ddbdfe55b63a6c1f57128f33a8d43c305ad2" host="localhost" Sep 11 00:02:07.995057 containerd[1489]: 2025-09-11 00:02:07.948 [INFO][4385] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.3078184bcf44b17fe23419ce3fa2ddbdfe55b63a6c1f57128f33a8d43c305ad2 Sep 11 00:02:07.995057 containerd[1489]: 2025-09-11 00:02:07.956 [INFO][4385] ipam/ipam.go 1243: Writing block in order to claim IPs 
block=192.168.88.128/26 handle="k8s-pod-network.3078184bcf44b17fe23419ce3fa2ddbdfe55b63a6c1f57128f33a8d43c305ad2" host="localhost" Sep 11 00:02:07.995057 containerd[1489]: 2025-09-11 00:02:07.965 [INFO][4385] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.130/26] block=192.168.88.128/26 handle="k8s-pod-network.3078184bcf44b17fe23419ce3fa2ddbdfe55b63a6c1f57128f33a8d43c305ad2" host="localhost" Sep 11 00:02:07.995057 containerd[1489]: 2025-09-11 00:02:07.965 [INFO][4385] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.130/26] handle="k8s-pod-network.3078184bcf44b17fe23419ce3fa2ddbdfe55b63a6c1f57128f33a8d43c305ad2" host="localhost" Sep 11 00:02:07.995057 containerd[1489]: 2025-09-11 00:02:07.965 [INFO][4385] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 11 00:02:07.995057 containerd[1489]: 2025-09-11 00:02:07.965 [INFO][4385] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.130/26] IPv6=[] ContainerID="3078184bcf44b17fe23419ce3fa2ddbdfe55b63a6c1f57128f33a8d43c305ad2" HandleID="k8s-pod-network.3078184bcf44b17fe23419ce3fa2ddbdfe55b63a6c1f57128f33a8d43c305ad2" Workload="localhost-k8s-coredns--674b8bbfcf--nr8lq-eth0" Sep 11 00:02:07.995190 containerd[1489]: 2025-09-11 00:02:07.973 [INFO][4365] cni-plugin/k8s.go 418: Populated endpoint ContainerID="3078184bcf44b17fe23419ce3fa2ddbdfe55b63a6c1f57128f33a8d43c305ad2" Namespace="kube-system" Pod="coredns-674b8bbfcf-nr8lq" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--nr8lq-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--674b8bbfcf--nr8lq-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"40a8c81e-f60b-4872-94e9-5d392a0fc45a", ResourceVersion:"877", Generation:0, CreationTimestamp:time.Date(2025, time.September, 11, 0, 1, 31, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), 
Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"coredns-674b8bbfcf-nr8lq", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"caliee9dac82ca7", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 11 00:02:07.995265 containerd[1489]: 2025-09-11 00:02:07.973 [INFO][4365] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.130/32] ContainerID="3078184bcf44b17fe23419ce3fa2ddbdfe55b63a6c1f57128f33a8d43c305ad2" Namespace="kube-system" Pod="coredns-674b8bbfcf-nr8lq" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--nr8lq-eth0" Sep 11 00:02:07.995265 containerd[1489]: 2025-09-11 00:02:07.973 [INFO][4365] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to caliee9dac82ca7 ContainerID="3078184bcf44b17fe23419ce3fa2ddbdfe55b63a6c1f57128f33a8d43c305ad2" Namespace="kube-system" Pod="coredns-674b8bbfcf-nr8lq" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--nr8lq-eth0" Sep 11 
00:02:07.995265 containerd[1489]: 2025-09-11 00:02:07.976 [INFO][4365] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="3078184bcf44b17fe23419ce3fa2ddbdfe55b63a6c1f57128f33a8d43c305ad2" Namespace="kube-system" Pod="coredns-674b8bbfcf-nr8lq" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--nr8lq-eth0" Sep 11 00:02:07.995325 containerd[1489]: 2025-09-11 00:02:07.976 [INFO][4365] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="3078184bcf44b17fe23419ce3fa2ddbdfe55b63a6c1f57128f33a8d43c305ad2" Namespace="kube-system" Pod="coredns-674b8bbfcf-nr8lq" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--nr8lq-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--674b8bbfcf--nr8lq-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"40a8c81e-f60b-4872-94e9-5d392a0fc45a", ResourceVersion:"877", Generation:0, CreationTimestamp:time.Date(2025, time.September, 11, 0, 1, 31, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"3078184bcf44b17fe23419ce3fa2ddbdfe55b63a6c1f57128f33a8d43c305ad2", Pod:"coredns-674b8bbfcf-nr8lq", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"caliee9dac82ca7", 
MAC:"92:aa:8a:2f:c7:4a", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 11 00:02:07.995325 containerd[1489]: 2025-09-11 00:02:07.989 [INFO][4365] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="3078184bcf44b17fe23419ce3fa2ddbdfe55b63a6c1f57128f33a8d43c305ad2" Namespace="kube-system" Pod="coredns-674b8bbfcf-nr8lq" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--nr8lq-eth0" Sep 11 00:02:08.031010 containerd[1489]: time="2025-09-11T00:02:08.030970426Z" level=info msg="connecting to shim 3078184bcf44b17fe23419ce3fa2ddbdfe55b63a6c1f57128f33a8d43c305ad2" address="unix:///run/containerd/s/60709c60c1976f991bb1d0f35ff6358bc74e4190e05aa28984424ba66439e15b" namespace=k8s.io protocol=ttrpc version=3 Sep 11 00:02:08.059530 systemd[1]: Started cri-containerd-3078184bcf44b17fe23419ce3fa2ddbdfe55b63a6c1f57128f33a8d43c305ad2.scope - libcontainer container 3078184bcf44b17fe23419ce3fa2ddbdfe55b63a6c1f57128f33a8d43c305ad2. 
Sep 11 00:02:08.062364 systemd-networkd[1423]: cali95759dec736: Link UP Sep 11 00:02:08.062839 systemd-networkd[1423]: cali95759dec736: Gained carrier Sep 11 00:02:08.084732 containerd[1489]: 2025-09-11 00:02:07.879 [INFO][4352] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--kube--controllers--76bf8dddf--rcn6l-eth0 calico-kube-controllers-76bf8dddf- calico-system 0bd1b13b-4632-4dc3-aa62-3c5ffa8e3394 880 0 2025-09-11 00:01:48 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:76bf8dddf projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s localhost calico-kube-controllers-76bf8dddf-rcn6l eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] cali95759dec736 [] [] }} ContainerID="922b7aea57abde074bbd63f2931657a165991933f489ed79c2861dfad1f236ba" Namespace="calico-system" Pod="calico-kube-controllers-76bf8dddf-rcn6l" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--76bf8dddf--rcn6l-" Sep 11 00:02:08.084732 containerd[1489]: 2025-09-11 00:02:07.879 [INFO][4352] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="922b7aea57abde074bbd63f2931657a165991933f489ed79c2861dfad1f236ba" Namespace="calico-system" Pod="calico-kube-controllers-76bf8dddf-rcn6l" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--76bf8dddf--rcn6l-eth0" Sep 11 00:02:08.084732 containerd[1489]: 2025-09-11 00:02:07.912 [INFO][4383] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="922b7aea57abde074bbd63f2931657a165991933f489ed79c2861dfad1f236ba" HandleID="k8s-pod-network.922b7aea57abde074bbd63f2931657a165991933f489ed79c2861dfad1f236ba" Workload="localhost-k8s-calico--kube--controllers--76bf8dddf--rcn6l-eth0" Sep 11 00:02:08.084732 containerd[1489]: 2025-09-11 
00:02:07.912 [INFO][4383] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="922b7aea57abde074bbd63f2931657a165991933f489ed79c2861dfad1f236ba" HandleID="k8s-pod-network.922b7aea57abde074bbd63f2931657a165991933f489ed79c2861dfad1f236ba" Workload="localhost-k8s-calico--kube--controllers--76bf8dddf--rcn6l-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x400004c210), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"calico-kube-controllers-76bf8dddf-rcn6l", "timestamp":"2025-09-11 00:02:07.912455522 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 11 00:02:08.084732 containerd[1489]: 2025-09-11 00:02:07.912 [INFO][4383] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 11 00:02:08.084732 containerd[1489]: 2025-09-11 00:02:07.965 [INFO][4383] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Sep 11 00:02:08.084732 containerd[1489]: 2025-09-11 00:02:07.965 [INFO][4383] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Sep 11 00:02:08.084732 containerd[1489]: 2025-09-11 00:02:08.023 [INFO][4383] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.922b7aea57abde074bbd63f2931657a165991933f489ed79c2861dfad1f236ba" host="localhost" Sep 11 00:02:08.084732 containerd[1489]: 2025-09-11 00:02:08.033 [INFO][4383] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Sep 11 00:02:08.084732 containerd[1489]: 2025-09-11 00:02:08.040 [INFO][4383] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Sep 11 00:02:08.084732 containerd[1489]: 2025-09-11 00:02:08.042 [INFO][4383] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Sep 11 00:02:08.084732 containerd[1489]: 2025-09-11 00:02:08.045 [INFO][4383] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Sep 11 00:02:08.084732 containerd[1489]: 2025-09-11 00:02:08.045 [INFO][4383] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.922b7aea57abde074bbd63f2931657a165991933f489ed79c2861dfad1f236ba" host="localhost" Sep 11 00:02:08.084732 containerd[1489]: 2025-09-11 00:02:08.046 [INFO][4383] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.922b7aea57abde074bbd63f2931657a165991933f489ed79c2861dfad1f236ba Sep 11 00:02:08.084732 containerd[1489]: 2025-09-11 00:02:08.051 [INFO][4383] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.922b7aea57abde074bbd63f2931657a165991933f489ed79c2861dfad1f236ba" host="localhost" Sep 11 00:02:08.084732 containerd[1489]: 2025-09-11 00:02:08.057 [INFO][4383] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.131/26] block=192.168.88.128/26 
handle="k8s-pod-network.922b7aea57abde074bbd63f2931657a165991933f489ed79c2861dfad1f236ba" host="localhost" Sep 11 00:02:08.084732 containerd[1489]: 2025-09-11 00:02:08.057 [INFO][4383] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.131/26] handle="k8s-pod-network.922b7aea57abde074bbd63f2931657a165991933f489ed79c2861dfad1f236ba" host="localhost" Sep 11 00:02:08.084732 containerd[1489]: 2025-09-11 00:02:08.057 [INFO][4383] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 11 00:02:08.084732 containerd[1489]: 2025-09-11 00:02:08.057 [INFO][4383] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.131/26] IPv6=[] ContainerID="922b7aea57abde074bbd63f2931657a165991933f489ed79c2861dfad1f236ba" HandleID="k8s-pod-network.922b7aea57abde074bbd63f2931657a165991933f489ed79c2861dfad1f236ba" Workload="localhost-k8s-calico--kube--controllers--76bf8dddf--rcn6l-eth0" Sep 11 00:02:08.085259 containerd[1489]: 2025-09-11 00:02:08.060 [INFO][4352] cni-plugin/k8s.go 418: Populated endpoint ContainerID="922b7aea57abde074bbd63f2931657a165991933f489ed79c2861dfad1f236ba" Namespace="calico-system" Pod="calico-kube-controllers-76bf8dddf-rcn6l" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--76bf8dddf--rcn6l-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--76bf8dddf--rcn6l-eth0", GenerateName:"calico-kube-controllers-76bf8dddf-", Namespace:"calico-system", SelfLink:"", UID:"0bd1b13b-4632-4dc3-aa62-3c5ffa8e3394", ResourceVersion:"880", Generation:0, CreationTimestamp:time.Date(2025, time.September, 11, 0, 1, 48, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"76bf8dddf", "projectcalico.org/namespace":"calico-system", 
"projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-kube-controllers-76bf8dddf-rcn6l", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali95759dec736", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 11 00:02:08.085259 containerd[1489]: 2025-09-11 00:02:08.060 [INFO][4352] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.131/32] ContainerID="922b7aea57abde074bbd63f2931657a165991933f489ed79c2861dfad1f236ba" Namespace="calico-system" Pod="calico-kube-controllers-76bf8dddf-rcn6l" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--76bf8dddf--rcn6l-eth0" Sep 11 00:02:08.085259 containerd[1489]: 2025-09-11 00:02:08.060 [INFO][4352] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali95759dec736 ContainerID="922b7aea57abde074bbd63f2931657a165991933f489ed79c2861dfad1f236ba" Namespace="calico-system" Pod="calico-kube-controllers-76bf8dddf-rcn6l" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--76bf8dddf--rcn6l-eth0" Sep 11 00:02:08.085259 containerd[1489]: 2025-09-11 00:02:08.062 [INFO][4352] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="922b7aea57abde074bbd63f2931657a165991933f489ed79c2861dfad1f236ba" Namespace="calico-system" Pod="calico-kube-controllers-76bf8dddf-rcn6l" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--76bf8dddf--rcn6l-eth0" Sep 11 00:02:08.085259 containerd[1489]: 2025-09-11 
00:02:08.063 [INFO][4352] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="922b7aea57abde074bbd63f2931657a165991933f489ed79c2861dfad1f236ba" Namespace="calico-system" Pod="calico-kube-controllers-76bf8dddf-rcn6l" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--76bf8dddf--rcn6l-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--76bf8dddf--rcn6l-eth0", GenerateName:"calico-kube-controllers-76bf8dddf-", Namespace:"calico-system", SelfLink:"", UID:"0bd1b13b-4632-4dc3-aa62-3c5ffa8e3394", ResourceVersion:"880", Generation:0, CreationTimestamp:time.Date(2025, time.September, 11, 0, 1, 48, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"76bf8dddf", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"922b7aea57abde074bbd63f2931657a165991933f489ed79c2861dfad1f236ba", Pod:"calico-kube-controllers-76bf8dddf-rcn6l", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali95759dec736", MAC:"92:e0:40:8a:4b:98", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 11 00:02:08.085259 containerd[1489]: 2025-09-11 
00:02:08.080 [INFO][4352] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="922b7aea57abde074bbd63f2931657a165991933f489ed79c2861dfad1f236ba" Namespace="calico-system" Pod="calico-kube-controllers-76bf8dddf-rcn6l" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--76bf8dddf--rcn6l-eth0" Sep 11 00:02:08.086746 systemd-resolved[1350]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Sep 11 00:02:08.112996 containerd[1489]: time="2025-09-11T00:02:08.112956557Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-nr8lq,Uid:40a8c81e-f60b-4872-94e9-5d392a0fc45a,Namespace:kube-system,Attempt:0,} returns sandbox id \"3078184bcf44b17fe23419ce3fa2ddbdfe55b63a6c1f57128f33a8d43c305ad2\"" Sep 11 00:02:08.114726 kubelet[2620]: E0911 00:02:08.114695 2620 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 11 00:02:08.121519 containerd[1489]: time="2025-09-11T00:02:08.121478170Z" level=info msg="CreateContainer within sandbox \"3078184bcf44b17fe23419ce3fa2ddbdfe55b63a6c1f57128f33a8d43c305ad2\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Sep 11 00:02:08.125972 containerd[1489]: time="2025-09-11T00:02:08.125927801Z" level=info msg="connecting to shim 922b7aea57abde074bbd63f2931657a165991933f489ed79c2861dfad1f236ba" address="unix:///run/containerd/s/0fed4c79ad5b749e37418574c3388a18b96cbb241c5a2226dbbf07bb2b416458" namespace=k8s.io protocol=ttrpc version=3 Sep 11 00:02:08.133367 containerd[1489]: time="2025-09-11T00:02:08.133319106Z" level=info msg="Container 3062a7444a67caf38e80069570a3e06744557934aa10a3f62ec9cd0082074d47: CDI devices from CRI Config.CDIDevices: []" Sep 11 00:02:08.139281 containerd[1489]: time="2025-09-11T00:02:08.139156212Z" level=info msg="CreateContainer within sandbox 
\"3078184bcf44b17fe23419ce3fa2ddbdfe55b63a6c1f57128f33a8d43c305ad2\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"3062a7444a67caf38e80069570a3e06744557934aa10a3f62ec9cd0082074d47\"" Sep 11 00:02:08.140467 containerd[1489]: time="2025-09-11T00:02:08.140039074Z" level=info msg="StartContainer for \"3062a7444a67caf38e80069570a3e06744557934aa10a3f62ec9cd0082074d47\"" Sep 11 00:02:08.141959 containerd[1489]: time="2025-09-11T00:02:08.141920481Z" level=info msg="connecting to shim 3062a7444a67caf38e80069570a3e06744557934aa10a3f62ec9cd0082074d47" address="unix:///run/containerd/s/60709c60c1976f991bb1d0f35ff6358bc74e4190e05aa28984424ba66439e15b" protocol=ttrpc version=3 Sep 11 00:02:08.151523 systemd[1]: Started cri-containerd-922b7aea57abde074bbd63f2931657a165991933f489ed79c2861dfad1f236ba.scope - libcontainer container 922b7aea57abde074bbd63f2931657a165991933f489ed79c2861dfad1f236ba. Sep 11 00:02:08.159643 systemd[1]: Started cri-containerd-3062a7444a67caf38e80069570a3e06744557934aa10a3f62ec9cd0082074d47.scope - libcontainer container 3062a7444a67caf38e80069570a3e06744557934aa10a3f62ec9cd0082074d47. 
Sep 11 00:02:08.166245 systemd-resolved[1350]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Sep 11 00:02:08.196955 containerd[1489]: time="2025-09-11T00:02:08.196846575Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-76bf8dddf-rcn6l,Uid:0bd1b13b-4632-4dc3-aa62-3c5ffa8e3394,Namespace:calico-system,Attempt:0,} returns sandbox id \"922b7aea57abde074bbd63f2931657a165991933f489ed79c2861dfad1f236ba\"" Sep 11 00:02:08.199234 containerd[1489]: time="2025-09-11T00:02:08.198687661Z" level=info msg="StartContainer for \"3062a7444a67caf38e80069570a3e06744557934aa10a3f62ec9cd0082074d47\" returns successfully" Sep 11 00:02:08.199234 containerd[1489]: time="2025-09-11T00:02:08.198833424Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.3\"" Sep 11 00:02:08.764141 kubelet[2620]: E0911 00:02:08.764094 2620 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 11 00:02:08.765687 containerd[1489]: time="2025-09-11T00:02:08.764299647Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-54d579b49d-skx4x,Uid:f383ef0f-3532-4ba3-ae25-9cf70a1b5786,Namespace:calico-system,Attempt:0,}" Sep 11 00:02:08.765687 containerd[1489]: time="2025-09-11T00:02:08.764392530Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-5h7mv,Uid:7a6a2490-ae7e-4043-896c-7b7c2a641add,Namespace:kube-system,Attempt:0,}" Sep 11 00:02:08.892577 systemd-networkd[1423]: calic0338963403: Link UP Sep 11 00:02:08.892873 systemd-networkd[1423]: calic0338963403: Gained carrier Sep 11 00:02:08.906320 kubelet[2620]: E0911 00:02:08.906296 2620 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 11 00:02:08.908890 containerd[1489]: 
2025-09-11 00:02:08.810 [INFO][4552] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-coredns--674b8bbfcf--5h7mv-eth0 coredns-674b8bbfcf- kube-system 7a6a2490-ae7e-4043-896c-7b7c2a641add 876 0 2025-09-11 00:01:31 +0000 UTC map[k8s-app:kube-dns pod-template-hash:674b8bbfcf projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s localhost coredns-674b8bbfcf-5h7mv eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] calic0338963403 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="0edaf42d12d5de12ec7de889bf1a604614462da085d739a8e3687767154af4f9" Namespace="kube-system" Pod="coredns-674b8bbfcf-5h7mv" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--5h7mv-" Sep 11 00:02:08.908890 containerd[1489]: 2025-09-11 00:02:08.810 [INFO][4552] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="0edaf42d12d5de12ec7de889bf1a604614462da085d739a8e3687767154af4f9" Namespace="kube-system" Pod="coredns-674b8bbfcf-5h7mv" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--5h7mv-eth0" Sep 11 00:02:08.908890 containerd[1489]: 2025-09-11 00:02:08.845 [INFO][4575] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="0edaf42d12d5de12ec7de889bf1a604614462da085d739a8e3687767154af4f9" HandleID="k8s-pod-network.0edaf42d12d5de12ec7de889bf1a604614462da085d739a8e3687767154af4f9" Workload="localhost-k8s-coredns--674b8bbfcf--5h7mv-eth0" Sep 11 00:02:08.908890 containerd[1489]: 2025-09-11 00:02:08.846 [INFO][4575] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="0edaf42d12d5de12ec7de889bf1a604614462da085d739a8e3687767154af4f9" HandleID="k8s-pod-network.0edaf42d12d5de12ec7de889bf1a604614462da085d739a8e3687767154af4f9" Workload="localhost-k8s-coredns--674b8bbfcf--5h7mv-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40002dd5f0), 
Attrs:map[string]string{"namespace":"kube-system", "node":"localhost", "pod":"coredns-674b8bbfcf-5h7mv", "timestamp":"2025-09-11 00:02:08.845929489 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 11 00:02:08.908890 containerd[1489]: 2025-09-11 00:02:08.846 [INFO][4575] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 11 00:02:08.908890 containerd[1489]: 2025-09-11 00:02:08.846 [INFO][4575] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 11 00:02:08.908890 containerd[1489]: 2025-09-11 00:02:08.846 [INFO][4575] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Sep 11 00:02:08.908890 containerd[1489]: 2025-09-11 00:02:08.857 [INFO][4575] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.0edaf42d12d5de12ec7de889bf1a604614462da085d739a8e3687767154af4f9" host="localhost" Sep 11 00:02:08.908890 containerd[1489]: 2025-09-11 00:02:08.863 [INFO][4575] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Sep 11 00:02:08.908890 containerd[1489]: 2025-09-11 00:02:08.868 [INFO][4575] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Sep 11 00:02:08.908890 containerd[1489]: 2025-09-11 00:02:08.870 [INFO][4575] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Sep 11 00:02:08.908890 containerd[1489]: 2025-09-11 00:02:08.872 [INFO][4575] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Sep 11 00:02:08.908890 containerd[1489]: 2025-09-11 00:02:08.872 [INFO][4575] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.0edaf42d12d5de12ec7de889bf1a604614462da085d739a8e3687767154af4f9" host="localhost" Sep 11 00:02:08.908890 
containerd[1489]: 2025-09-11 00:02:08.874 [INFO][4575] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.0edaf42d12d5de12ec7de889bf1a604614462da085d739a8e3687767154af4f9 Sep 11 00:02:08.908890 containerd[1489]: 2025-09-11 00:02:08.878 [INFO][4575] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.0edaf42d12d5de12ec7de889bf1a604614462da085d739a8e3687767154af4f9" host="localhost" Sep 11 00:02:08.908890 containerd[1489]: 2025-09-11 00:02:08.883 [INFO][4575] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.132/26] block=192.168.88.128/26 handle="k8s-pod-network.0edaf42d12d5de12ec7de889bf1a604614462da085d739a8e3687767154af4f9" host="localhost" Sep 11 00:02:08.908890 containerd[1489]: 2025-09-11 00:02:08.883 [INFO][4575] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.132/26] handle="k8s-pod-network.0edaf42d12d5de12ec7de889bf1a604614462da085d739a8e3687767154af4f9" host="localhost" Sep 11 00:02:08.908890 containerd[1489]: 2025-09-11 00:02:08.884 [INFO][4575] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Sep 11 00:02:08.908890 containerd[1489]: 2025-09-11 00:02:08.884 [INFO][4575] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.132/26] IPv6=[] ContainerID="0edaf42d12d5de12ec7de889bf1a604614462da085d739a8e3687767154af4f9" HandleID="k8s-pod-network.0edaf42d12d5de12ec7de889bf1a604614462da085d739a8e3687767154af4f9" Workload="localhost-k8s-coredns--674b8bbfcf--5h7mv-eth0" Sep 11 00:02:08.909461 containerd[1489]: 2025-09-11 00:02:08.887 [INFO][4552] cni-plugin/k8s.go 418: Populated endpoint ContainerID="0edaf42d12d5de12ec7de889bf1a604614462da085d739a8e3687767154af4f9" Namespace="kube-system" Pod="coredns-674b8bbfcf-5h7mv" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--5h7mv-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--674b8bbfcf--5h7mv-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"7a6a2490-ae7e-4043-896c-7b7c2a641add", ResourceVersion:"876", Generation:0, CreationTimestamp:time.Date(2025, time.September, 11, 0, 1, 31, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"coredns-674b8bbfcf-5h7mv", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calic0338963403", MAC:"", 
Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 11 00:02:08.909461 containerd[1489]: 2025-09-11 00:02:08.887 [INFO][4552] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.132/32] ContainerID="0edaf42d12d5de12ec7de889bf1a604614462da085d739a8e3687767154af4f9" Namespace="kube-system" Pod="coredns-674b8bbfcf-5h7mv" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--5h7mv-eth0" Sep 11 00:02:08.909461 containerd[1489]: 2025-09-11 00:02:08.888 [INFO][4552] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calic0338963403 ContainerID="0edaf42d12d5de12ec7de889bf1a604614462da085d739a8e3687767154af4f9" Namespace="kube-system" Pod="coredns-674b8bbfcf-5h7mv" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--5h7mv-eth0" Sep 11 00:02:08.909461 containerd[1489]: 2025-09-11 00:02:08.894 [INFO][4552] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="0edaf42d12d5de12ec7de889bf1a604614462da085d739a8e3687767154af4f9" Namespace="kube-system" Pod="coredns-674b8bbfcf-5h7mv" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--5h7mv-eth0" Sep 11 00:02:08.909461 containerd[1489]: 2025-09-11 00:02:08.896 [INFO][4552] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="0edaf42d12d5de12ec7de889bf1a604614462da085d739a8e3687767154af4f9" Namespace="kube-system" Pod="coredns-674b8bbfcf-5h7mv" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--5h7mv-eth0" 
endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--674b8bbfcf--5h7mv-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"7a6a2490-ae7e-4043-896c-7b7c2a641add", ResourceVersion:"876", Generation:0, CreationTimestamp:time.Date(2025, time.September, 11, 0, 1, 31, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"0edaf42d12d5de12ec7de889bf1a604614462da085d739a8e3687767154af4f9", Pod:"coredns-674b8bbfcf-5h7mv", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calic0338963403", MAC:"ea:ba:ee:7c:00:ba", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 11 00:02:08.909461 containerd[1489]: 2025-09-11 00:02:08.904 [INFO][4552] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore 
ContainerID="0edaf42d12d5de12ec7de889bf1a604614462da085d739a8e3687767154af4f9" Namespace="kube-system" Pod="coredns-674b8bbfcf-5h7mv" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--5h7mv-eth0" Sep 11 00:02:08.923669 kubelet[2620]: I0911 00:02:08.923612 2620 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-674b8bbfcf-nr8lq" podStartSLOduration=37.923595032 podStartE2EDuration="37.923595032s" podCreationTimestamp="2025-09-11 00:01:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-11 00:02:08.921671504 +0000 UTC m=+44.264143577" watchObservedRunningTime="2025-09-11 00:02:08.923595032 +0000 UTC m=+44.266067105" Sep 11 00:02:08.955649 containerd[1489]: time="2025-09-11T00:02:08.955474629Z" level=info msg="connecting to shim 0edaf42d12d5de12ec7de889bf1a604614462da085d739a8e3687767154af4f9" address="unix:///run/containerd/s/b2317a247072f4f9285b967e338d6dc830e61aec0a6d266e7065171b3fc8190a" namespace=k8s.io protocol=ttrpc version=3 Sep 11 00:02:08.992758 systemd[1]: Started cri-containerd-0edaf42d12d5de12ec7de889bf1a604614462da085d739a8e3687767154af4f9.scope - libcontainer container 0edaf42d12d5de12ec7de889bf1a604614462da085d739a8e3687767154af4f9. 
Sep 11 00:02:08.998411 systemd-networkd[1423]: cali56390ed783c: Link UP Sep 11 00:02:08.998860 systemd-networkd[1423]: cali56390ed783c: Gained carrier Sep 11 00:02:09.011424 systemd-resolved[1350]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Sep 11 00:02:09.021632 containerd[1489]: 2025-09-11 00:02:08.813 [INFO][4543] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-goldmane--54d579b49d--skx4x-eth0 goldmane-54d579b49d- calico-system f383ef0f-3532-4ba3-ae25-9cf70a1b5786 879 0 2025-09-11 00:01:47 +0000 UTC map[app.kubernetes.io/name:goldmane k8s-app:goldmane pod-template-hash:54d579b49d projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:goldmane] map[] [] [] []} {k8s localhost goldmane-54d579b49d-skx4x eth0 goldmane [] [] [kns.calico-system ksa.calico-system.goldmane] cali56390ed783c [] [] }} ContainerID="d057fe2e0db094705555e066413e2f1f49d3bc61676ce8bf70ec91276c604200" Namespace="calico-system" Pod="goldmane-54d579b49d-skx4x" WorkloadEndpoint="localhost-k8s-goldmane--54d579b49d--skx4x-" Sep 11 00:02:09.021632 containerd[1489]: 2025-09-11 00:02:08.813 [INFO][4543] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="d057fe2e0db094705555e066413e2f1f49d3bc61676ce8bf70ec91276c604200" Namespace="calico-system" Pod="goldmane-54d579b49d-skx4x" WorkloadEndpoint="localhost-k8s-goldmane--54d579b49d--skx4x-eth0" Sep 11 00:02:09.021632 containerd[1489]: 2025-09-11 00:02:08.862 [INFO][4582] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="d057fe2e0db094705555e066413e2f1f49d3bc61676ce8bf70ec91276c604200" HandleID="k8s-pod-network.d057fe2e0db094705555e066413e2f1f49d3bc61676ce8bf70ec91276c604200" Workload="localhost-k8s-goldmane--54d579b49d--skx4x-eth0" Sep 11 00:02:09.021632 containerd[1489]: 2025-09-11 00:02:08.863 [INFO][4582] ipam/ipam_plugin.go 
265: Auto assigning IP ContainerID="d057fe2e0db094705555e066413e2f1f49d3bc61676ce8bf70ec91276c604200" HandleID="k8s-pod-network.d057fe2e0db094705555e066413e2f1f49d3bc61676ce8bf70ec91276c604200" Workload="localhost-k8s-goldmane--54d579b49d--skx4x-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x400035cfe0), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"goldmane-54d579b49d-skx4x", "timestamp":"2025-09-11 00:02:08.862926474 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 11 00:02:09.021632 containerd[1489]: 2025-09-11 00:02:08.863 [INFO][4582] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 11 00:02:09.021632 containerd[1489]: 2025-09-11 00:02:08.884 [INFO][4582] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 11 00:02:09.021632 containerd[1489]: 2025-09-11 00:02:08.884 [INFO][4582] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Sep 11 00:02:09.021632 containerd[1489]: 2025-09-11 00:02:08.958 [INFO][4582] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.d057fe2e0db094705555e066413e2f1f49d3bc61676ce8bf70ec91276c604200" host="localhost" Sep 11 00:02:09.021632 containerd[1489]: 2025-09-11 00:02:08.964 [INFO][4582] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Sep 11 00:02:09.021632 containerd[1489]: 2025-09-11 00:02:08.973 [INFO][4582] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Sep 11 00:02:09.021632 containerd[1489]: 2025-09-11 00:02:08.976 [INFO][4582] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Sep 11 00:02:09.021632 containerd[1489]: 2025-09-11 00:02:08.979 [INFO][4582] ipam/ipam.go 235: Affinity is confirmed and block has been loaded 
cidr=192.168.88.128/26 host="localhost" Sep 11 00:02:09.021632 containerd[1489]: 2025-09-11 00:02:08.979 [INFO][4582] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.d057fe2e0db094705555e066413e2f1f49d3bc61676ce8bf70ec91276c604200" host="localhost" Sep 11 00:02:09.021632 containerd[1489]: 2025-09-11 00:02:08.980 [INFO][4582] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.d057fe2e0db094705555e066413e2f1f49d3bc61676ce8bf70ec91276c604200 Sep 11 00:02:09.021632 containerd[1489]: 2025-09-11 00:02:08.983 [INFO][4582] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.d057fe2e0db094705555e066413e2f1f49d3bc61676ce8bf70ec91276c604200" host="localhost" Sep 11 00:02:09.021632 containerd[1489]: 2025-09-11 00:02:08.989 [INFO][4582] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.133/26] block=192.168.88.128/26 handle="k8s-pod-network.d057fe2e0db094705555e066413e2f1f49d3bc61676ce8bf70ec91276c604200" host="localhost" Sep 11 00:02:09.021632 containerd[1489]: 2025-09-11 00:02:08.989 [INFO][4582] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.133/26] handle="k8s-pod-network.d057fe2e0db094705555e066413e2f1f49d3bc61676ce8bf70ec91276c604200" host="localhost" Sep 11 00:02:09.021632 containerd[1489]: 2025-09-11 00:02:08.989 [INFO][4582] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Sep 11 00:02:09.021632 containerd[1489]: 2025-09-11 00:02:08.989 [INFO][4582] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.133/26] IPv6=[] ContainerID="d057fe2e0db094705555e066413e2f1f49d3bc61676ce8bf70ec91276c604200" HandleID="k8s-pod-network.d057fe2e0db094705555e066413e2f1f49d3bc61676ce8bf70ec91276c604200" Workload="localhost-k8s-goldmane--54d579b49d--skx4x-eth0" Sep 11 00:02:09.022087 containerd[1489]: 2025-09-11 00:02:08.995 [INFO][4543] cni-plugin/k8s.go 418: Populated endpoint ContainerID="d057fe2e0db094705555e066413e2f1f49d3bc61676ce8bf70ec91276c604200" Namespace="calico-system" Pod="goldmane-54d579b49d-skx4x" WorkloadEndpoint="localhost-k8s-goldmane--54d579b49d--skx4x-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-goldmane--54d579b49d--skx4x-eth0", GenerateName:"goldmane-54d579b49d-", Namespace:"calico-system", SelfLink:"", UID:"f383ef0f-3532-4ba3-ae25-9cf70a1b5786", ResourceVersion:"879", Generation:0, CreationTimestamp:time.Date(2025, time.September, 11, 0, 1, 47, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"54d579b49d", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"goldmane-54d579b49d-skx4x", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali56390ed783c", 
MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 11 00:02:09.022087 containerd[1489]: 2025-09-11 00:02:08.995 [INFO][4543] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.133/32] ContainerID="d057fe2e0db094705555e066413e2f1f49d3bc61676ce8bf70ec91276c604200" Namespace="calico-system" Pod="goldmane-54d579b49d-skx4x" WorkloadEndpoint="localhost-k8s-goldmane--54d579b49d--skx4x-eth0" Sep 11 00:02:09.022087 containerd[1489]: 2025-09-11 00:02:08.995 [INFO][4543] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali56390ed783c ContainerID="d057fe2e0db094705555e066413e2f1f49d3bc61676ce8bf70ec91276c604200" Namespace="calico-system" Pod="goldmane-54d579b49d-skx4x" WorkloadEndpoint="localhost-k8s-goldmane--54d579b49d--skx4x-eth0" Sep 11 00:02:09.022087 containerd[1489]: 2025-09-11 00:02:08.998 [INFO][4543] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="d057fe2e0db094705555e066413e2f1f49d3bc61676ce8bf70ec91276c604200" Namespace="calico-system" Pod="goldmane-54d579b49d-skx4x" WorkloadEndpoint="localhost-k8s-goldmane--54d579b49d--skx4x-eth0" Sep 11 00:02:09.022087 containerd[1489]: 2025-09-11 00:02:09.000 [INFO][4543] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="d057fe2e0db094705555e066413e2f1f49d3bc61676ce8bf70ec91276c604200" Namespace="calico-system" Pod="goldmane-54d579b49d-skx4x" WorkloadEndpoint="localhost-k8s-goldmane--54d579b49d--skx4x-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-goldmane--54d579b49d--skx4x-eth0", GenerateName:"goldmane-54d579b49d-", Namespace:"calico-system", SelfLink:"", UID:"f383ef0f-3532-4ba3-ae25-9cf70a1b5786", ResourceVersion:"879", Generation:0, CreationTimestamp:time.Date(2025, time.September, 11, 0, 1, 47, 0, time.Local), DeletionTimestamp:, 
DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"54d579b49d", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"d057fe2e0db094705555e066413e2f1f49d3bc61676ce8bf70ec91276c604200", Pod:"goldmane-54d579b49d-skx4x", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali56390ed783c", MAC:"8e:ee:41:db:94:5e", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 11 00:02:09.022087 containerd[1489]: 2025-09-11 00:02:09.013 [INFO][4543] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="d057fe2e0db094705555e066413e2f1f49d3bc61676ce8bf70ec91276c604200" Namespace="calico-system" Pod="goldmane-54d579b49d-skx4x" WorkloadEndpoint="localhost-k8s-goldmane--54d579b49d--skx4x-eth0" Sep 11 00:02:09.042423 containerd[1489]: time="2025-09-11T00:02:09.042370496Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-5h7mv,Uid:7a6a2490-ae7e-4043-896c-7b7c2a641add,Namespace:kube-system,Attempt:0,} returns sandbox id \"0edaf42d12d5de12ec7de889bf1a604614462da085d739a8e3687767154af4f9\"" Sep 11 00:02:09.043410 kubelet[2620]: E0911 00:02:09.043334 2620 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 11 00:02:09.046986 containerd[1489]: 
time="2025-09-11T00:02:09.046871485Z" level=info msg="connecting to shim d057fe2e0db094705555e066413e2f1f49d3bc61676ce8bf70ec91276c604200" address="unix:///run/containerd/s/7941f88bdc01c201488f5ec8d25f5be65fc23ecc80e1b38bf3e684378eac4e02" namespace=k8s.io protocol=ttrpc version=3 Sep 11 00:02:09.049642 containerd[1489]: time="2025-09-11T00:02:09.049467309Z" level=info msg="CreateContainer within sandbox \"0edaf42d12d5de12ec7de889bf1a604614462da085d739a8e3687767154af4f9\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Sep 11 00:02:09.060996 containerd[1489]: time="2025-09-11T00:02:09.060967269Z" level=info msg="Container d63345a1deedaa3c60b5f639c6c958b5b5c7d7097fbe1a9eed9c2ac114e079c1: CDI devices from CRI Config.CDIDevices: []" Sep 11 00:02:09.067923 containerd[1489]: time="2025-09-11T00:02:09.067889078Z" level=info msg="CreateContainer within sandbox \"0edaf42d12d5de12ec7de889bf1a604614462da085d739a8e3687767154af4f9\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"d63345a1deedaa3c60b5f639c6c958b5b5c7d7097fbe1a9eed9c2ac114e079c1\"" Sep 11 00:02:09.068934 containerd[1489]: time="2025-09-11T00:02:09.068828300Z" level=info msg="StartContainer for \"d63345a1deedaa3c60b5f639c6c958b5b5c7d7097fbe1a9eed9c2ac114e079c1\"" Sep 11 00:02:09.070737 containerd[1489]: time="2025-09-11T00:02:09.070667145Z" level=info msg="connecting to shim d63345a1deedaa3c60b5f639c6c958b5b5c7d7097fbe1a9eed9c2ac114e079c1" address="unix:///run/containerd/s/b2317a247072f4f9285b967e338d6dc830e61aec0a6d266e7065171b3fc8190a" protocol=ttrpc version=3 Sep 11 00:02:09.074517 systemd[1]: Started cri-containerd-d057fe2e0db094705555e066413e2f1f49d3bc61676ce8bf70ec91276c604200.scope - libcontainer container d057fe2e0db094705555e066413e2f1f49d3bc61676ce8bf70ec91276c604200. 
Sep 11 00:02:09.097517 systemd[1]: Started cri-containerd-d63345a1deedaa3c60b5f639c6c958b5b5c7d7097fbe1a9eed9c2ac114e079c1.scope - libcontainer container d63345a1deedaa3c60b5f639c6c958b5b5c7d7097fbe1a9eed9c2ac114e079c1. Sep 11 00:02:09.102396 systemd-resolved[1350]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Sep 11 00:02:09.132210 containerd[1489]: time="2025-09-11T00:02:09.132167964Z" level=info msg="StartContainer for \"d63345a1deedaa3c60b5f639c6c958b5b5c7d7097fbe1a9eed9c2ac114e079c1\" returns successfully" Sep 11 00:02:09.149623 containerd[1489]: time="2025-09-11T00:02:09.149579669Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-54d579b49d-skx4x,Uid:f383ef0f-3532-4ba3-ae25-9cf70a1b5786,Namespace:calico-system,Attempt:0,} returns sandbox id \"d057fe2e0db094705555e066413e2f1f49d3bc61676ce8bf70ec91276c604200\"" Sep 11 00:02:09.339571 systemd-networkd[1423]: caliee9dac82ca7: Gained IPv6LL Sep 11 00:02:09.531538 systemd-networkd[1423]: cali95759dec736: Gained IPv6LL Sep 11 00:02:09.764547 containerd[1489]: time="2025-09-11T00:02:09.764503097Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-c9nz7,Uid:3328d16f-81a3-4c76-9945-1acaf3146893,Namespace:calico-system,Attempt:0,}" Sep 11 00:02:09.764890 containerd[1489]: time="2025-09-11T00:02:09.764855705Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7c8f858dd5-kfnlz,Uid:c95fb5c4-34dc-4005-8a7c-cc82b8b444ba,Namespace:calico-apiserver,Attempt:0,}" Sep 11 00:02:09.914469 systemd-networkd[1423]: calife4a497397e: Link UP Sep 11 00:02:09.915445 systemd-networkd[1423]: calife4a497397e: Gained carrier Sep 11 00:02:09.917200 kubelet[2620]: E0911 00:02:09.917165 2620 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 11 00:02:09.920413 kubelet[2620]: E0911 00:02:09.918319 2620 
dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 11 00:02:09.934801 kubelet[2620]: I0911 00:02:09.934722 2620 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-674b8bbfcf-5h7mv" podStartSLOduration=38.934703205 podStartE2EDuration="38.934703205s" podCreationTimestamp="2025-09-11 00:01:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-11 00:02:09.932004579 +0000 UTC m=+45.274476652" watchObservedRunningTime="2025-09-11 00:02:09.934703205 +0000 UTC m=+45.277175238" Sep 11 00:02:09.939914 containerd[1489]: 2025-09-11 00:02:09.816 [INFO][4749] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-csi--node--driver--c9nz7-eth0 csi-node-driver- calico-system 3328d16f-81a3-4c76-9945-1acaf3146893 769 0 2025-09-11 00:01:47 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:6c96d95cc7 k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s localhost csi-node-driver-c9nz7 eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] calife4a497397e [] [] }} ContainerID="88648360964224ca15a4d8b935b2f0d8c12c47c4ee5fdc4ed83d45838e995f02" Namespace="calico-system" Pod="csi-node-driver-c9nz7" WorkloadEndpoint="localhost-k8s-csi--node--driver--c9nz7-" Sep 11 00:02:09.939914 containerd[1489]: 2025-09-11 00:02:09.816 [INFO][4749] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="88648360964224ca15a4d8b935b2f0d8c12c47c4ee5fdc4ed83d45838e995f02" Namespace="calico-system" Pod="csi-node-driver-c9nz7" 
WorkloadEndpoint="localhost-k8s-csi--node--driver--c9nz7-eth0" Sep 11 00:02:09.939914 containerd[1489]: 2025-09-11 00:02:09.859 [INFO][4778] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="88648360964224ca15a4d8b935b2f0d8c12c47c4ee5fdc4ed83d45838e995f02" HandleID="k8s-pod-network.88648360964224ca15a4d8b935b2f0d8c12c47c4ee5fdc4ed83d45838e995f02" Workload="localhost-k8s-csi--node--driver--c9nz7-eth0" Sep 11 00:02:09.939914 containerd[1489]: 2025-09-11 00:02:09.859 [INFO][4778] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="88648360964224ca15a4d8b935b2f0d8c12c47c4ee5fdc4ed83d45838e995f02" HandleID="k8s-pod-network.88648360964224ca15a4d8b935b2f0d8c12c47c4ee5fdc4ed83d45838e995f02" Workload="localhost-k8s-csi--node--driver--c9nz7-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40005129d0), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"csi-node-driver-c9nz7", "timestamp":"2025-09-11 00:02:09.859311647 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 11 00:02:09.939914 containerd[1489]: 2025-09-11 00:02:09.859 [INFO][4778] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 11 00:02:09.939914 containerd[1489]: 2025-09-11 00:02:09.859 [INFO][4778] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Sep 11 00:02:09.939914 containerd[1489]: 2025-09-11 00:02:09.859 [INFO][4778] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Sep 11 00:02:09.939914 containerd[1489]: 2025-09-11 00:02:09.871 [INFO][4778] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.88648360964224ca15a4d8b935b2f0d8c12c47c4ee5fdc4ed83d45838e995f02" host="localhost" Sep 11 00:02:09.939914 containerd[1489]: 2025-09-11 00:02:09.877 [INFO][4778] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Sep 11 00:02:09.939914 containerd[1489]: 2025-09-11 00:02:09.882 [INFO][4778] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Sep 11 00:02:09.939914 containerd[1489]: 2025-09-11 00:02:09.887 [INFO][4778] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Sep 11 00:02:09.939914 containerd[1489]: 2025-09-11 00:02:09.890 [INFO][4778] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Sep 11 00:02:09.939914 containerd[1489]: 2025-09-11 00:02:09.890 [INFO][4778] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.88648360964224ca15a4d8b935b2f0d8c12c47c4ee5fdc4ed83d45838e995f02" host="localhost" Sep 11 00:02:09.939914 containerd[1489]: 2025-09-11 00:02:09.891 [INFO][4778] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.88648360964224ca15a4d8b935b2f0d8c12c47c4ee5fdc4ed83d45838e995f02 Sep 11 00:02:09.939914 containerd[1489]: 2025-09-11 00:02:09.897 [INFO][4778] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.88648360964224ca15a4d8b935b2f0d8c12c47c4ee5fdc4ed83d45838e995f02" host="localhost" Sep 11 00:02:09.939914 containerd[1489]: 2025-09-11 00:02:09.902 [INFO][4778] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.134/26] block=192.168.88.128/26 
handle="k8s-pod-network.88648360964224ca15a4d8b935b2f0d8c12c47c4ee5fdc4ed83d45838e995f02" host="localhost" Sep 11 00:02:09.939914 containerd[1489]: 2025-09-11 00:02:09.902 [INFO][4778] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.134/26] handle="k8s-pod-network.88648360964224ca15a4d8b935b2f0d8c12c47c4ee5fdc4ed83d45838e995f02" host="localhost" Sep 11 00:02:09.939914 containerd[1489]: 2025-09-11 00:02:09.902 [INFO][4778] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 11 00:02:09.939914 containerd[1489]: 2025-09-11 00:02:09.902 [INFO][4778] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.134/26] IPv6=[] ContainerID="88648360964224ca15a4d8b935b2f0d8c12c47c4ee5fdc4ed83d45838e995f02" HandleID="k8s-pod-network.88648360964224ca15a4d8b935b2f0d8c12c47c4ee5fdc4ed83d45838e995f02" Workload="localhost-k8s-csi--node--driver--c9nz7-eth0" Sep 11 00:02:09.941234 containerd[1489]: 2025-09-11 00:02:09.905 [INFO][4749] cni-plugin/k8s.go 418: Populated endpoint ContainerID="88648360964224ca15a4d8b935b2f0d8c12c47c4ee5fdc4ed83d45838e995f02" Namespace="calico-system" Pod="csi-node-driver-c9nz7" WorkloadEndpoint="localhost-k8s-csi--node--driver--c9nz7-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--c9nz7-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"3328d16f-81a3-4c76-9945-1acaf3146893", ResourceVersion:"769", Generation:0, CreationTimestamp:time.Date(2025, time.September, 11, 0, 1, 47, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"6c96d95cc7", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", 
"projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"csi-node-driver-c9nz7", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"calife4a497397e", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 11 00:02:09.941234 containerd[1489]: 2025-09-11 00:02:09.906 [INFO][4749] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.134/32] ContainerID="88648360964224ca15a4d8b935b2f0d8c12c47c4ee5fdc4ed83d45838e995f02" Namespace="calico-system" Pod="csi-node-driver-c9nz7" WorkloadEndpoint="localhost-k8s-csi--node--driver--c9nz7-eth0" Sep 11 00:02:09.941234 containerd[1489]: 2025-09-11 00:02:09.906 [INFO][4749] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calife4a497397e ContainerID="88648360964224ca15a4d8b935b2f0d8c12c47c4ee5fdc4ed83d45838e995f02" Namespace="calico-system" Pod="csi-node-driver-c9nz7" WorkloadEndpoint="localhost-k8s-csi--node--driver--c9nz7-eth0" Sep 11 00:02:09.941234 containerd[1489]: 2025-09-11 00:02:09.915 [INFO][4749] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="88648360964224ca15a4d8b935b2f0d8c12c47c4ee5fdc4ed83d45838e995f02" Namespace="calico-system" Pod="csi-node-driver-c9nz7" WorkloadEndpoint="localhost-k8s-csi--node--driver--c9nz7-eth0" Sep 11 00:02:09.941234 containerd[1489]: 2025-09-11 00:02:09.916 [INFO][4749] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="88648360964224ca15a4d8b935b2f0d8c12c47c4ee5fdc4ed83d45838e995f02" 
Namespace="calico-system" Pod="csi-node-driver-c9nz7" WorkloadEndpoint="localhost-k8s-csi--node--driver--c9nz7-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--c9nz7-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"3328d16f-81a3-4c76-9945-1acaf3146893", ResourceVersion:"769", Generation:0, CreationTimestamp:time.Date(2025, time.September, 11, 0, 1, 47, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"6c96d95cc7", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"88648360964224ca15a4d8b935b2f0d8c12c47c4ee5fdc4ed83d45838e995f02", Pod:"csi-node-driver-c9nz7", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"calife4a497397e", MAC:"7a:2f:f5:12:5f:f9", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 11 00:02:09.941234 containerd[1489]: 2025-09-11 00:02:09.934 [INFO][4749] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="88648360964224ca15a4d8b935b2f0d8c12c47c4ee5fdc4ed83d45838e995f02" Namespace="calico-system" Pod="csi-node-driver-c9nz7" 
WorkloadEndpoint="localhost-k8s-csi--node--driver--c9nz7-eth0" Sep 11 00:02:09.994130 containerd[1489]: time="2025-09-11T00:02:09.993737204Z" level=info msg="connecting to shim 88648360964224ca15a4d8b935b2f0d8c12c47c4ee5fdc4ed83d45838e995f02" address="unix:///run/containerd/s/5b3f2afab9a326d91d3af3ba6c05e73ee307378abfa3d250e427c347d787370a" namespace=k8s.io protocol=ttrpc version=3 Sep 11 00:02:10.020959 systemd-networkd[1423]: cali44d53aca661: Link UP Sep 11 00:02:10.022668 systemd-networkd[1423]: cali44d53aca661: Gained carrier Sep 11 00:02:10.023538 systemd[1]: Started cri-containerd-88648360964224ca15a4d8b935b2f0d8c12c47c4ee5fdc4ed83d45838e995f02.scope - libcontainer container 88648360964224ca15a4d8b935b2f0d8c12c47c4ee5fdc4ed83d45838e995f02. Sep 11 00:02:10.039293 systemd-resolved[1350]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Sep 11 00:02:10.042784 containerd[1489]: 2025-09-11 00:02:09.834 [INFO][4762] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--apiserver--7c8f858dd5--kfnlz-eth0 calico-apiserver-7c8f858dd5- calico-apiserver c95fb5c4-34dc-4005-8a7c-cc82b8b444ba 872 0 2025-09-11 00:01:42 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:7c8f858dd5 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s localhost calico-apiserver-7c8f858dd5-kfnlz eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali44d53aca661 [] [] }} ContainerID="d206f6c61559018036909fc1a755d886ff63c86d33a0de846fb3466b0aed51e7" Namespace="calico-apiserver" Pod="calico-apiserver-7c8f858dd5-kfnlz" WorkloadEndpoint="localhost-k8s-calico--apiserver--7c8f858dd5--kfnlz-" Sep 11 00:02:10.042784 containerd[1489]: 2025-09-11 00:02:09.835 [INFO][4762] cni-plugin/k8s.go 74: 
Extracted identifiers for CmdAddK8s ContainerID="d206f6c61559018036909fc1a755d886ff63c86d33a0de846fb3466b0aed51e7" Namespace="calico-apiserver" Pod="calico-apiserver-7c8f858dd5-kfnlz" WorkloadEndpoint="localhost-k8s-calico--apiserver--7c8f858dd5--kfnlz-eth0" Sep 11 00:02:10.042784 containerd[1489]: 2025-09-11 00:02:09.879 [INFO][4786] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="d206f6c61559018036909fc1a755d886ff63c86d33a0de846fb3466b0aed51e7" HandleID="k8s-pod-network.d206f6c61559018036909fc1a755d886ff63c86d33a0de846fb3466b0aed51e7" Workload="localhost-k8s-calico--apiserver--7c8f858dd5--kfnlz-eth0" Sep 11 00:02:10.042784 containerd[1489]: 2025-09-11 00:02:09.879 [INFO][4786] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="d206f6c61559018036909fc1a755d886ff63c86d33a0de846fb3466b0aed51e7" HandleID="k8s-pod-network.d206f6c61559018036909fc1a755d886ff63c86d33a0de846fb3466b0aed51e7" Workload="localhost-k8s-calico--apiserver--7c8f858dd5--kfnlz-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40001a0470), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"localhost", "pod":"calico-apiserver-7c8f858dd5-kfnlz", "timestamp":"2025-09-11 00:02:09.879383777 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 11 00:02:10.042784 containerd[1489]: 2025-09-11 00:02:09.879 [INFO][4786] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 11 00:02:10.042784 containerd[1489]: 2025-09-11 00:02:09.903 [INFO][4786] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Sep 11 00:02:10.042784 containerd[1489]: 2025-09-11 00:02:09.903 [INFO][4786] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Sep 11 00:02:10.042784 containerd[1489]: 2025-09-11 00:02:09.972 [INFO][4786] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.d206f6c61559018036909fc1a755d886ff63c86d33a0de846fb3466b0aed51e7" host="localhost" Sep 11 00:02:10.042784 containerd[1489]: 2025-09-11 00:02:09.981 [INFO][4786] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Sep 11 00:02:10.042784 containerd[1489]: 2025-09-11 00:02:09.987 [INFO][4786] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Sep 11 00:02:10.042784 containerd[1489]: 2025-09-11 00:02:09.990 [INFO][4786] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Sep 11 00:02:10.042784 containerd[1489]: 2025-09-11 00:02:09.992 [INFO][4786] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Sep 11 00:02:10.042784 containerd[1489]: 2025-09-11 00:02:09.992 [INFO][4786] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.d206f6c61559018036909fc1a755d886ff63c86d33a0de846fb3466b0aed51e7" host="localhost" Sep 11 00:02:10.042784 containerd[1489]: 2025-09-11 00:02:09.995 [INFO][4786] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.d206f6c61559018036909fc1a755d886ff63c86d33a0de846fb3466b0aed51e7 Sep 11 00:02:10.042784 containerd[1489]: 2025-09-11 00:02:10.003 [INFO][4786] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.d206f6c61559018036909fc1a755d886ff63c86d33a0de846fb3466b0aed51e7" host="localhost" Sep 11 00:02:10.042784 containerd[1489]: 2025-09-11 00:02:10.011 [INFO][4786] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.135/26] block=192.168.88.128/26 
handle="k8s-pod-network.d206f6c61559018036909fc1a755d886ff63c86d33a0de846fb3466b0aed51e7" host="localhost" Sep 11 00:02:10.042784 containerd[1489]: 2025-09-11 00:02:10.011 [INFO][4786] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.135/26] handle="k8s-pod-network.d206f6c61559018036909fc1a755d886ff63c86d33a0de846fb3466b0aed51e7" host="localhost" Sep 11 00:02:10.042784 containerd[1489]: 2025-09-11 00:02:10.011 [INFO][4786] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 11 00:02:10.042784 containerd[1489]: 2025-09-11 00:02:10.011 [INFO][4786] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.135/26] IPv6=[] ContainerID="d206f6c61559018036909fc1a755d886ff63c86d33a0de846fb3466b0aed51e7" HandleID="k8s-pod-network.d206f6c61559018036909fc1a755d886ff63c86d33a0de846fb3466b0aed51e7" Workload="localhost-k8s-calico--apiserver--7c8f858dd5--kfnlz-eth0" Sep 11 00:02:10.043732 containerd[1489]: 2025-09-11 00:02:10.015 [INFO][4762] cni-plugin/k8s.go 418: Populated endpoint ContainerID="d206f6c61559018036909fc1a755d886ff63c86d33a0de846fb3466b0aed51e7" Namespace="calico-apiserver" Pod="calico-apiserver-7c8f858dd5-kfnlz" WorkloadEndpoint="localhost-k8s-calico--apiserver--7c8f858dd5--kfnlz-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--7c8f858dd5--kfnlz-eth0", GenerateName:"calico-apiserver-7c8f858dd5-", Namespace:"calico-apiserver", SelfLink:"", UID:"c95fb5c4-34dc-4005-8a7c-cc82b8b444ba", ResourceVersion:"872", Generation:0, CreationTimestamp:time.Date(2025, time.September, 11, 0, 1, 42, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"7c8f858dd5", "projectcalico.org/namespace":"calico-apiserver", 
"projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-apiserver-7c8f858dd5-kfnlz", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali44d53aca661", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 11 00:02:10.043732 containerd[1489]: 2025-09-11 00:02:10.015 [INFO][4762] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.135/32] ContainerID="d206f6c61559018036909fc1a755d886ff63c86d33a0de846fb3466b0aed51e7" Namespace="calico-apiserver" Pod="calico-apiserver-7c8f858dd5-kfnlz" WorkloadEndpoint="localhost-k8s-calico--apiserver--7c8f858dd5--kfnlz-eth0" Sep 11 00:02:10.043732 containerd[1489]: 2025-09-11 00:02:10.015 [INFO][4762] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali44d53aca661 ContainerID="d206f6c61559018036909fc1a755d886ff63c86d33a0de846fb3466b0aed51e7" Namespace="calico-apiserver" Pod="calico-apiserver-7c8f858dd5-kfnlz" WorkloadEndpoint="localhost-k8s-calico--apiserver--7c8f858dd5--kfnlz-eth0" Sep 11 00:02:10.043732 containerd[1489]: 2025-09-11 00:02:10.023 [INFO][4762] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="d206f6c61559018036909fc1a755d886ff63c86d33a0de846fb3466b0aed51e7" Namespace="calico-apiserver" Pod="calico-apiserver-7c8f858dd5-kfnlz" WorkloadEndpoint="localhost-k8s-calico--apiserver--7c8f858dd5--kfnlz-eth0" Sep 11 00:02:10.043732 containerd[1489]: 2025-09-11 00:02:10.026 [INFO][4762] cni-plugin/k8s.go 446: Added 
Mac, interface name, and active container ID to endpoint ContainerID="d206f6c61559018036909fc1a755d886ff63c86d33a0de846fb3466b0aed51e7" Namespace="calico-apiserver" Pod="calico-apiserver-7c8f858dd5-kfnlz" WorkloadEndpoint="localhost-k8s-calico--apiserver--7c8f858dd5--kfnlz-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--7c8f858dd5--kfnlz-eth0", GenerateName:"calico-apiserver-7c8f858dd5-", Namespace:"calico-apiserver", SelfLink:"", UID:"c95fb5c4-34dc-4005-8a7c-cc82b8b444ba", ResourceVersion:"872", Generation:0, CreationTimestamp:time.Date(2025, time.September, 11, 0, 1, 42, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"7c8f858dd5", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"d206f6c61559018036909fc1a755d886ff63c86d33a0de846fb3466b0aed51e7", Pod:"calico-apiserver-7c8f858dd5-kfnlz", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali44d53aca661", MAC:"c6:0c:c4:b6:3f:e6", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 11 00:02:10.043732 containerd[1489]: 2025-09-11 00:02:10.036 [INFO][4762] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore 
ContainerID="d206f6c61559018036909fc1a755d886ff63c86d33a0de846fb3466b0aed51e7" Namespace="calico-apiserver" Pod="calico-apiserver-7c8f858dd5-kfnlz" WorkloadEndpoint="localhost-k8s-calico--apiserver--7c8f858dd5--kfnlz-eth0" Sep 11 00:02:10.061088 containerd[1489]: time="2025-09-11T00:02:10.059824379Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-c9nz7,Uid:3328d16f-81a3-4c76-9945-1acaf3146893,Namespace:calico-system,Attempt:0,} returns sandbox id \"88648360964224ca15a4d8b935b2f0d8c12c47c4ee5fdc4ed83d45838e995f02\"" Sep 11 00:02:10.068776 containerd[1489]: time="2025-09-11T00:02:10.068741351Z" level=info msg="connecting to shim d206f6c61559018036909fc1a755d886ff63c86d33a0de846fb3466b0aed51e7" address="unix:///run/containerd/s/9520f9067218cc787bfb31b24ddefeeb574749515bea653d8b638afcf42d2920" namespace=k8s.io protocol=ttrpc version=3 Sep 11 00:02:10.092502 systemd[1]: Started cri-containerd-d206f6c61559018036909fc1a755d886ff63c86d33a0de846fb3466b0aed51e7.scope - libcontainer container d206f6c61559018036909fc1a755d886ff63c86d33a0de846fb3466b0aed51e7. 
Sep 11 00:02:10.109862 systemd-resolved[1350]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Sep 11 00:02:10.133210 containerd[1489]: time="2025-09-11T00:02:10.133104642Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7c8f858dd5-kfnlz,Uid:c95fb5c4-34dc-4005-8a7c-cc82b8b444ba,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"d206f6c61559018036909fc1a755d886ff63c86d33a0de846fb3466b0aed51e7\"" Sep 11 00:02:10.430810 containerd[1489]: time="2025-09-11T00:02:10.430764079Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 11 00:02:10.431507 containerd[1489]: time="2025-09-11T00:02:10.431484576Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.3: active requests=0, bytes read=48134957" Sep 11 00:02:10.432152 containerd[1489]: time="2025-09-11T00:02:10.432132191Z" level=info msg="ImageCreate event name:\"sha256:34117caf92350e1565610f2254377d7455b11e36666b5ce11b4a13670720432a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 11 00:02:10.434625 containerd[1489]: time="2025-09-11T00:02:10.434582249Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers@sha256:27c4187717f08f0a5727019d8beb7597665eb47e69eaa1d7d091a7e28913e577\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 11 00:02:10.435495 containerd[1489]: time="2025-09-11T00:02:10.435460550Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.3\" with image id \"sha256:34117caf92350e1565610f2254377d7455b11e36666b5ce11b4a13670720432a\", repo tag \"ghcr.io/flatcar/calico/kube-controllers:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/kube-controllers@sha256:27c4187717f08f0a5727019d8beb7597665eb47e69eaa1d7d091a7e28913e577\", size \"49504166\" in 2.236596885s" Sep 11 00:02:10.435535 containerd[1489]: 
time="2025-09-11T00:02:10.435500431Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.3\" returns image reference \"sha256:34117caf92350e1565610f2254377d7455b11e36666b5ce11b4a13670720432a\"" Sep 11 00:02:10.436299 containerd[1489]: time="2025-09-11T00:02:10.436274810Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.3\"" Sep 11 00:02:10.444891 containerd[1489]: time="2025-09-11T00:02:10.444858054Z" level=info msg="CreateContainer within sandbox \"922b7aea57abde074bbd63f2931657a165991933f489ed79c2861dfad1f236ba\" for container &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,}" Sep 11 00:02:10.450953 containerd[1489]: time="2025-09-11T00:02:10.450911278Z" level=info msg="Container b2cb8f412e549568d281a1914211c481398cbc2fecf342338ac9743b88c05525: CDI devices from CRI Config.CDIDevices: []" Sep 11 00:02:10.460004 containerd[1489]: time="2025-09-11T00:02:10.459951253Z" level=info msg="CreateContainer within sandbox \"922b7aea57abde074bbd63f2931657a165991933f489ed79c2861dfad1f236ba\" for &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,} returns container id \"b2cb8f412e549568d281a1914211c481398cbc2fecf342338ac9743b88c05525\"" Sep 11 00:02:10.460594 containerd[1489]: time="2025-09-11T00:02:10.460513946Z" level=info msg="StartContainer for \"b2cb8f412e549568d281a1914211c481398cbc2fecf342338ac9743b88c05525\"" Sep 11 00:02:10.461899 containerd[1489]: time="2025-09-11T00:02:10.461861098Z" level=info msg="connecting to shim b2cb8f412e549568d281a1914211c481398cbc2fecf342338ac9743b88c05525" address="unix:///run/containerd/s/0fed4c79ad5b749e37418574c3388a18b96cbb241c5a2226dbbf07bb2b416458" protocol=ttrpc version=3 Sep 11 00:02:10.482524 systemd[1]: Started cri-containerd-b2cb8f412e549568d281a1914211c481398cbc2fecf342338ac9743b88c05525.scope - libcontainer container b2cb8f412e549568d281a1914211c481398cbc2fecf342338ac9743b88c05525. 
Sep 11 00:02:10.515448 containerd[1489]: time="2025-09-11T00:02:10.515373970Z" level=info msg="StartContainer for \"b2cb8f412e549568d281a1914211c481398cbc2fecf342338ac9743b88c05525\" returns successfully" Sep 11 00:02:10.555470 systemd-networkd[1423]: calic0338963403: Gained IPv6LL Sep 11 00:02:10.763927 containerd[1489]: time="2025-09-11T00:02:10.763818157Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7c8f858dd5-sqh2g,Uid:bbff6af1-093f-4daa-9016-0543d9ff1727,Namespace:calico-apiserver,Attempt:0,}" Sep 11 00:02:10.865584 systemd-networkd[1423]: cali31ae19d7ef4: Link UP Sep 11 00:02:10.865867 systemd-networkd[1423]: cali31ae19d7ef4: Gained carrier Sep 11 00:02:10.881247 containerd[1489]: 2025-09-11 00:02:10.800 [INFO][4952] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--apiserver--7c8f858dd5--sqh2g-eth0 calico-apiserver-7c8f858dd5- calico-apiserver bbff6af1-093f-4daa-9016-0543d9ff1727 875 0 2025-09-11 00:01:42 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:7c8f858dd5 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s localhost calico-apiserver-7c8f858dd5-sqh2g eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali31ae19d7ef4 [] [] }} ContainerID="6c09aad4bfbf2589455106eb5868a774a1cda89eea91d47776f4b900a96fd7c6" Namespace="calico-apiserver" Pod="calico-apiserver-7c8f858dd5-sqh2g" WorkloadEndpoint="localhost-k8s-calico--apiserver--7c8f858dd5--sqh2g-" Sep 11 00:02:10.881247 containerd[1489]: 2025-09-11 00:02:10.800 [INFO][4952] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="6c09aad4bfbf2589455106eb5868a774a1cda89eea91d47776f4b900a96fd7c6" Namespace="calico-apiserver" Pod="calico-apiserver-7c8f858dd5-sqh2g" 
WorkloadEndpoint="localhost-k8s-calico--apiserver--7c8f858dd5--sqh2g-eth0" Sep 11 00:02:10.881247 containerd[1489]: 2025-09-11 00:02:10.823 [INFO][4965] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="6c09aad4bfbf2589455106eb5868a774a1cda89eea91d47776f4b900a96fd7c6" HandleID="k8s-pod-network.6c09aad4bfbf2589455106eb5868a774a1cda89eea91d47776f4b900a96fd7c6" Workload="localhost-k8s-calico--apiserver--7c8f858dd5--sqh2g-eth0" Sep 11 00:02:10.881247 containerd[1489]: 2025-09-11 00:02:10.823 [INFO][4965] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="6c09aad4bfbf2589455106eb5868a774a1cda89eea91d47776f4b900a96fd7c6" HandleID="k8s-pod-network.6c09aad4bfbf2589455106eb5868a774a1cda89eea91d47776f4b900a96fd7c6" Workload="localhost-k8s-calico--apiserver--7c8f858dd5--sqh2g-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x4000590ad0), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"localhost", "pod":"calico-apiserver-7c8f858dd5-sqh2g", "timestamp":"2025-09-11 00:02:10.823086207 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 11 00:02:10.881247 containerd[1489]: 2025-09-11 00:02:10.823 [INFO][4965] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 11 00:02:10.881247 containerd[1489]: 2025-09-11 00:02:10.823 [INFO][4965] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Sep 11 00:02:10.881247 containerd[1489]: 2025-09-11 00:02:10.823 [INFO][4965] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Sep 11 00:02:10.881247 containerd[1489]: 2025-09-11 00:02:10.837 [INFO][4965] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.6c09aad4bfbf2589455106eb5868a774a1cda89eea91d47776f4b900a96fd7c6" host="localhost" Sep 11 00:02:10.881247 containerd[1489]: 2025-09-11 00:02:10.841 [INFO][4965] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Sep 11 00:02:10.881247 containerd[1489]: 2025-09-11 00:02:10.845 [INFO][4965] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Sep 11 00:02:10.881247 containerd[1489]: 2025-09-11 00:02:10.847 [INFO][4965] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Sep 11 00:02:10.881247 containerd[1489]: 2025-09-11 00:02:10.850 [INFO][4965] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Sep 11 00:02:10.881247 containerd[1489]: 2025-09-11 00:02:10.850 [INFO][4965] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.6c09aad4bfbf2589455106eb5868a774a1cda89eea91d47776f4b900a96fd7c6" host="localhost" Sep 11 00:02:10.881247 containerd[1489]: 2025-09-11 00:02:10.851 [INFO][4965] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.6c09aad4bfbf2589455106eb5868a774a1cda89eea91d47776f4b900a96fd7c6 Sep 11 00:02:10.881247 containerd[1489]: 2025-09-11 00:02:10.855 [INFO][4965] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.6c09aad4bfbf2589455106eb5868a774a1cda89eea91d47776f4b900a96fd7c6" host="localhost" Sep 11 00:02:10.881247 containerd[1489]: 2025-09-11 00:02:10.860 [INFO][4965] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.136/26] block=192.168.88.128/26 
handle="k8s-pod-network.6c09aad4bfbf2589455106eb5868a774a1cda89eea91d47776f4b900a96fd7c6" host="localhost" Sep 11 00:02:10.881247 containerd[1489]: 2025-09-11 00:02:10.860 [INFO][4965] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.136/26] handle="k8s-pod-network.6c09aad4bfbf2589455106eb5868a774a1cda89eea91d47776f4b900a96fd7c6" host="localhost" Sep 11 00:02:10.881247 containerd[1489]: 2025-09-11 00:02:10.860 [INFO][4965] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 11 00:02:10.881247 containerd[1489]: 2025-09-11 00:02:10.860 [INFO][4965] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.136/26] IPv6=[] ContainerID="6c09aad4bfbf2589455106eb5868a774a1cda89eea91d47776f4b900a96fd7c6" HandleID="k8s-pod-network.6c09aad4bfbf2589455106eb5868a774a1cda89eea91d47776f4b900a96fd7c6" Workload="localhost-k8s-calico--apiserver--7c8f858dd5--sqh2g-eth0" Sep 11 00:02:10.881845 containerd[1489]: 2025-09-11 00:02:10.863 [INFO][4952] cni-plugin/k8s.go 418: Populated endpoint ContainerID="6c09aad4bfbf2589455106eb5868a774a1cda89eea91d47776f4b900a96fd7c6" Namespace="calico-apiserver" Pod="calico-apiserver-7c8f858dd5-sqh2g" WorkloadEndpoint="localhost-k8s-calico--apiserver--7c8f858dd5--sqh2g-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--7c8f858dd5--sqh2g-eth0", GenerateName:"calico-apiserver-7c8f858dd5-", Namespace:"calico-apiserver", SelfLink:"", UID:"bbff6af1-093f-4daa-9016-0543d9ff1727", ResourceVersion:"875", Generation:0, CreationTimestamp:time.Date(2025, time.September, 11, 0, 1, 42, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"7c8f858dd5", "projectcalico.org/namespace":"calico-apiserver", 
"projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-apiserver-7c8f858dd5-sqh2g", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali31ae19d7ef4", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 11 00:02:10.881845 containerd[1489]: 2025-09-11 00:02:10.864 [INFO][4952] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.136/32] ContainerID="6c09aad4bfbf2589455106eb5868a774a1cda89eea91d47776f4b900a96fd7c6" Namespace="calico-apiserver" Pod="calico-apiserver-7c8f858dd5-sqh2g" WorkloadEndpoint="localhost-k8s-calico--apiserver--7c8f858dd5--sqh2g-eth0" Sep 11 00:02:10.881845 containerd[1489]: 2025-09-11 00:02:10.864 [INFO][4952] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali31ae19d7ef4 ContainerID="6c09aad4bfbf2589455106eb5868a774a1cda89eea91d47776f4b900a96fd7c6" Namespace="calico-apiserver" Pod="calico-apiserver-7c8f858dd5-sqh2g" WorkloadEndpoint="localhost-k8s-calico--apiserver--7c8f858dd5--sqh2g-eth0" Sep 11 00:02:10.881845 containerd[1489]: 2025-09-11 00:02:10.866 [INFO][4952] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="6c09aad4bfbf2589455106eb5868a774a1cda89eea91d47776f4b900a96fd7c6" Namespace="calico-apiserver" Pod="calico-apiserver-7c8f858dd5-sqh2g" WorkloadEndpoint="localhost-k8s-calico--apiserver--7c8f858dd5--sqh2g-eth0" Sep 11 00:02:10.881845 containerd[1489]: 2025-09-11 00:02:10.867 [INFO][4952] cni-plugin/k8s.go 446: Added 
Mac, interface name, and active container ID to endpoint ContainerID="6c09aad4bfbf2589455106eb5868a774a1cda89eea91d47776f4b900a96fd7c6" Namespace="calico-apiserver" Pod="calico-apiserver-7c8f858dd5-sqh2g" WorkloadEndpoint="localhost-k8s-calico--apiserver--7c8f858dd5--sqh2g-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--7c8f858dd5--sqh2g-eth0", GenerateName:"calico-apiserver-7c8f858dd5-", Namespace:"calico-apiserver", SelfLink:"", UID:"bbff6af1-093f-4daa-9016-0543d9ff1727", ResourceVersion:"875", Generation:0, CreationTimestamp:time.Date(2025, time.September, 11, 0, 1, 42, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"7c8f858dd5", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"6c09aad4bfbf2589455106eb5868a774a1cda89eea91d47776f4b900a96fd7c6", Pod:"calico-apiserver-7c8f858dd5-sqh2g", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali31ae19d7ef4", MAC:"6e:32:3d:9e:25:d5", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 11 00:02:10.881845 containerd[1489]: 2025-09-11 00:02:10.878 [INFO][4952] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore 
ContainerID="6c09aad4bfbf2589455106eb5868a774a1cda89eea91d47776f4b900a96fd7c6" Namespace="calico-apiserver" Pod="calico-apiserver-7c8f858dd5-sqh2g" WorkloadEndpoint="localhost-k8s-calico--apiserver--7c8f858dd5--sqh2g-eth0" Sep 11 00:02:10.903277 containerd[1489]: time="2025-09-11T00:02:10.903231672Z" level=info msg="connecting to shim 6c09aad4bfbf2589455106eb5868a774a1cda89eea91d47776f4b900a96fd7c6" address="unix:///run/containerd/s/2e43ff6f1aa44979240557f4f56a9ac84cfa25e0083206fd7d95058dba3274e3" namespace=k8s.io protocol=ttrpc version=3 Sep 11 00:02:10.925367 kubelet[2620]: E0911 00:02:10.925324 2620 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 11 00:02:10.931884 kubelet[2620]: E0911 00:02:10.931857 2620 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 11 00:02:10.932549 systemd[1]: Started cri-containerd-6c09aad4bfbf2589455106eb5868a774a1cda89eea91d47776f4b900a96fd7c6.scope - libcontainer container 6c09aad4bfbf2589455106eb5868a774a1cda89eea91d47776f4b900a96fd7c6. 
Sep 11 00:02:10.938338 kubelet[2620]: I0911 00:02:10.938264 2620 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-kube-controllers-76bf8dddf-rcn6l" podStartSLOduration=20.700338588 podStartE2EDuration="22.938113021s" podCreationTimestamp="2025-09-11 00:01:48 +0000 UTC" firstStartedPulling="2025-09-11 00:02:08.198399654 +0000 UTC m=+43.540871727" lastFinishedPulling="2025-09-11 00:02:10.436174127 +0000 UTC m=+45.778646160" observedRunningTime="2025-09-11 00:02:10.937905617 +0000 UTC m=+46.280377690" watchObservedRunningTime="2025-09-11 00:02:10.938113021 +0000 UTC m=+46.280585054" Sep 11 00:02:10.947470 systemd-resolved[1350]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Sep 11 00:02:10.969034 containerd[1489]: time="2025-09-11T00:02:10.968991676Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7c8f858dd5-sqh2g,Uid:bbff6af1-093f-4daa-9016-0543d9ff1727,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"6c09aad4bfbf2589455106eb5868a774a1cda89eea91d47776f4b900a96fd7c6\"" Sep 11 00:02:11.003750 systemd-networkd[1423]: cali56390ed783c: Gained IPv6LL Sep 11 00:02:11.744730 systemd[1]: Started sshd@8-10.0.0.103:22-10.0.0.1:58468.service - OpenSSH per-connection server daemon (10.0.0.1:58468). Sep 11 00:02:11.931200 sshd[5038]: Accepted publickey for core from 10.0.0.1 port 58468 ssh2: RSA SHA256:lsmhoLsJ6VkHSnmB7JrdlCWHjclEQMgNfFd+nspwIAE Sep 11 00:02:11.934117 sshd-session[5038]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 11 00:02:11.939477 systemd-logind[1467]: New session 9 of user core. Sep 11 00:02:11.947554 systemd[1]: Started session-9.scope - Session 9 of User core. 
Sep 11 00:02:11.964689 systemd-networkd[1423]: calife4a497397e: Gained IPv6LL Sep 11 00:02:11.980146 containerd[1489]: time="2025-09-11T00:02:11.980112007Z" level=info msg="TaskExit event in podsandbox handler container_id:\"b2cb8f412e549568d281a1914211c481398cbc2fecf342338ac9743b88c05525\" id:\"a7026dae01023e59165e492360af9f868eea711d6d08fda50834f93d32dff2ee\" pid:5057 exited_at:{seconds:1757548931 nanos:979320589}" Sep 11 00:02:12.027569 systemd-networkd[1423]: cali44d53aca661: Gained IPv6LL Sep 11 00:02:12.155679 systemd-networkd[1423]: cali31ae19d7ef4: Gained IPv6LL Sep 11 00:02:12.193161 sshd[5050]: Connection closed by 10.0.0.1 port 58468 Sep 11 00:02:12.193554 sshd-session[5038]: pam_unix(sshd:session): session closed for user core Sep 11 00:02:12.198975 systemd[1]: sshd@8-10.0.0.103:22-10.0.0.1:58468.service: Deactivated successfully. Sep 11 00:02:12.202524 systemd[1]: session-9.scope: Deactivated successfully. Sep 11 00:02:12.204186 systemd-logind[1467]: Session 9 logged out. Waiting for processes to exit. Sep 11 00:02:12.206466 systemd-logind[1467]: Removed session 9. Sep 11 00:02:12.289753 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount534112764.mount: Deactivated successfully. 
Sep 11 00:02:12.658004 containerd[1489]: time="2025-09-11T00:02:12.657779555Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/goldmane:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 11 00:02:12.659048 containerd[1489]: time="2025-09-11T00:02:12.658945141Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.3: active requests=0, bytes read=61845332" Sep 11 00:02:12.660107 containerd[1489]: time="2025-09-11T00:02:12.660065766Z" level=info msg="ImageCreate event name:\"sha256:14088376331a0622b7f6a2fbc2f2932806a6eafdd7b602f6139d3b985bf1e685\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 11 00:02:12.663103 containerd[1489]: time="2025-09-11T00:02:12.663069995Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/goldmane@sha256:46297703ab3739331a00a58f0d6a5498c8d3b6523ad947eed68592ee0f3e79f0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 11 00:02:12.663508 containerd[1489]: time="2025-09-11T00:02:12.663474764Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/goldmane:v3.30.3\" with image id \"sha256:14088376331a0622b7f6a2fbc2f2932806a6eafdd7b602f6139d3b985bf1e685\", repo tag \"ghcr.io/flatcar/calico/goldmane:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/goldmane@sha256:46297703ab3739331a00a58f0d6a5498c8d3b6523ad947eed68592ee0f3e79f0\", size \"61845178\" in 2.227170834s" Sep 11 00:02:12.663508 containerd[1489]: time="2025-09-11T00:02:12.663504524Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.3\" returns image reference \"sha256:14088376331a0622b7f6a2fbc2f2932806a6eafdd7b602f6139d3b985bf1e685\"" Sep 11 00:02:12.665274 containerd[1489]: time="2025-09-11T00:02:12.665205603Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.3\"" Sep 11 00:02:12.670003 containerd[1489]: time="2025-09-11T00:02:12.669967871Z" level=info msg="CreateContainer within sandbox \"d057fe2e0db094705555e066413e2f1f49d3bc61676ce8bf70ec91276c604200\" 
for container &ContainerMetadata{Name:goldmane,Attempt:0,}" Sep 11 00:02:12.677894 containerd[1489]: time="2025-09-11T00:02:12.677838490Z" level=info msg="Container e585cee19fd14772e40cc3e80d2efba6c25e4b39e633215d34d9f39856b0d026: CDI devices from CRI Config.CDIDevices: []" Sep 11 00:02:12.686835 containerd[1489]: time="2025-09-11T00:02:12.686711851Z" level=info msg="CreateContainer within sandbox \"d057fe2e0db094705555e066413e2f1f49d3bc61676ce8bf70ec91276c604200\" for &ContainerMetadata{Name:goldmane,Attempt:0,} returns container id \"e585cee19fd14772e40cc3e80d2efba6c25e4b39e633215d34d9f39856b0d026\"" Sep 11 00:02:12.687259 containerd[1489]: time="2025-09-11T00:02:12.687227543Z" level=info msg="StartContainer for \"e585cee19fd14772e40cc3e80d2efba6c25e4b39e633215d34d9f39856b0d026\"" Sep 11 00:02:12.691227 containerd[1489]: time="2025-09-11T00:02:12.691192313Z" level=info msg="connecting to shim e585cee19fd14772e40cc3e80d2efba6c25e4b39e633215d34d9f39856b0d026" address="unix:///run/containerd/s/7941f88bdc01c201488f5ec8d25f5be65fc23ecc80e1b38bf3e684378eac4e02" protocol=ttrpc version=3 Sep 11 00:02:12.709521 systemd[1]: Started cri-containerd-e585cee19fd14772e40cc3e80d2efba6c25e4b39e633215d34d9f39856b0d026.scope - libcontainer container e585cee19fd14772e40cc3e80d2efba6c25e4b39e633215d34d9f39856b0d026. 
Sep 11 00:02:12.743554 containerd[1489]: time="2025-09-11T00:02:12.743507220Z" level=info msg="StartContainer for \"e585cee19fd14772e40cc3e80d2efba6c25e4b39e633215d34d9f39856b0d026\" returns successfully" Sep 11 00:02:12.945004 kubelet[2620]: I0911 00:02:12.944945 2620 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/goldmane-54d579b49d-skx4x" podStartSLOduration=22.433315674 podStartE2EDuration="25.94492771s" podCreationTimestamp="2025-09-11 00:01:47 +0000 UTC" firstStartedPulling="2025-09-11 00:02:09.153433723 +0000 UTC m=+44.495905796" lastFinishedPulling="2025-09-11 00:02:12.665045799 +0000 UTC m=+48.007517832" observedRunningTime="2025-09-11 00:02:12.94447514 +0000 UTC m=+48.286947213" watchObservedRunningTime="2025-09-11 00:02:12.94492771 +0000 UTC m=+48.287399783" Sep 11 00:02:13.938947 containerd[1489]: time="2025-09-11T00:02:13.938902482Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 11 00:02:13.939867 containerd[1489]: time="2025-09-11T00:02:13.939842143Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.3: active requests=0, bytes read=8227489" Sep 11 00:02:13.940667 containerd[1489]: time="2025-09-11T00:02:13.940634560Z" level=info msg="ImageCreate event name:\"sha256:5e2b30128ce4b607acd97d3edef62ce1a90be0259903090a51c360adbe4a8f3b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 11 00:02:13.942561 containerd[1489]: time="2025-09-11T00:02:13.942521082Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi@sha256:f22c88018d8b58c4ef0052f594b216a13bd6852166ac131a538c5ab2fba23bb2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 11 00:02:13.943418 containerd[1489]: time="2025-09-11T00:02:13.943382421Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/csi:v3.30.3\" with image id \"sha256:5e2b30128ce4b607acd97d3edef62ce1a90be0259903090a51c360adbe4a8f3b\", 
repo tag \"ghcr.io/flatcar/calico/csi:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/csi@sha256:f22c88018d8b58c4ef0052f594b216a13bd6852166ac131a538c5ab2fba23bb2\", size \"9596730\" in 1.278136137s" Sep 11 00:02:13.943418 containerd[1489]: time="2025-09-11T00:02:13.943411582Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.3\" returns image reference \"sha256:5e2b30128ce4b607acd97d3edef62ce1a90be0259903090a51c360adbe4a8f3b\"" Sep 11 00:02:13.944424 containerd[1489]: time="2025-09-11T00:02:13.944394484Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.3\"" Sep 11 00:02:13.949810 containerd[1489]: time="2025-09-11T00:02:13.949782323Z" level=info msg="CreateContainer within sandbox \"88648360964224ca15a4d8b935b2f0d8c12c47c4ee5fdc4ed83d45838e995f02\" for container &ContainerMetadata{Name:calico-csi,Attempt:0,}" Sep 11 00:02:13.960715 containerd[1489]: time="2025-09-11T00:02:13.960618004Z" level=info msg="Container 95a06003dce6d6c2ce90582e0db9f18eb4880b8f255a08dd2b884f323682c024: CDI devices from CRI Config.CDIDevices: []" Sep 11 00:02:13.962783 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount551724291.mount: Deactivated successfully. 
Sep 11 00:02:13.969955 containerd[1489]: time="2025-09-11T00:02:13.969890970Z" level=info msg="CreateContainer within sandbox \"88648360964224ca15a4d8b935b2f0d8c12c47c4ee5fdc4ed83d45838e995f02\" for &ContainerMetadata{Name:calico-csi,Attempt:0,} returns container id \"95a06003dce6d6c2ce90582e0db9f18eb4880b8f255a08dd2b884f323682c024\"" Sep 11 00:02:13.970471 containerd[1489]: time="2025-09-11T00:02:13.970371540Z" level=info msg="StartContainer for \"95a06003dce6d6c2ce90582e0db9f18eb4880b8f255a08dd2b884f323682c024\"" Sep 11 00:02:13.973998 containerd[1489]: time="2025-09-11T00:02:13.973956860Z" level=info msg="connecting to shim 95a06003dce6d6c2ce90582e0db9f18eb4880b8f255a08dd2b884f323682c024" address="unix:///run/containerd/s/5b3f2afab9a326d91d3af3ba6c05e73ee307378abfa3d250e427c347d787370a" protocol=ttrpc version=3 Sep 11 00:02:14.055282 containerd[1489]: time="2025-09-11T00:02:14.055042835Z" level=info msg="TaskExit event in podsandbox handler container_id:\"e585cee19fd14772e40cc3e80d2efba6c25e4b39e633215d34d9f39856b0d026\" id:\"76021ac42a71207fc301a052dc926355e3cf6b8a161dfe37931e51113ec8f060\" pid:5137 exit_status:1 exited_at:{seconds:1757548934 nanos:54367940}" Sep 11 00:02:14.104592 systemd[1]: Started cri-containerd-95a06003dce6d6c2ce90582e0db9f18eb4880b8f255a08dd2b884f323682c024.scope - libcontainer container 95a06003dce6d6c2ce90582e0db9f18eb4880b8f255a08dd2b884f323682c024. 
Sep 11 00:02:14.140061 containerd[1489]: time="2025-09-11T00:02:14.139951880Z" level=info msg="StartContainer for \"95a06003dce6d6c2ce90582e0db9f18eb4880b8f255a08dd2b884f323682c024\" returns successfully" Sep 11 00:02:15.022587 containerd[1489]: time="2025-09-11T00:02:15.022507654Z" level=info msg="TaskExit event in podsandbox handler container_id:\"e585cee19fd14772e40cc3e80d2efba6c25e4b39e633215d34d9f39856b0d026\" id:\"3e0b6b8bd8a4ccc7543cf099111aab74b4eef69660a5b7f90e449d939b6fea27\" pid:5194 exit_status:1 exited_at:{seconds:1757548935 nanos:22163567}" Sep 11 00:02:15.849865 containerd[1489]: time="2025-09-11T00:02:15.849813519Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 11 00:02:15.850466 containerd[1489]: time="2025-09-11T00:02:15.850365531Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.3: active requests=0, bytes read=44530807" Sep 11 00:02:15.851208 containerd[1489]: time="2025-09-11T00:02:15.851175748Z" level=info msg="ImageCreate event name:\"sha256:632fbde00b1918016ac07458e79cc438ccda83cb762bfd5fc50a26721abced08\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 11 00:02:15.853727 containerd[1489]: time="2025-09-11T00:02:15.853430196Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver@sha256:6a24147f11c1edce9d6ba79bdb0c2beadec53853fb43438a287291e67b41e51b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 11 00:02:15.854282 containerd[1489]: time="2025-09-11T00:02:15.854251494Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.30.3\" with image id \"sha256:632fbde00b1918016ac07458e79cc438ccda83cb762bfd5fc50a26721abced08\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:6a24147f11c1edce9d6ba79bdb0c2beadec53853fb43438a287291e67b41e51b\", size \"45900064\" in 1.909821689s" Sep 11 00:02:15.854282 
containerd[1489]: time="2025-09-11T00:02:15.854280014Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.3\" returns image reference \"sha256:632fbde00b1918016ac07458e79cc438ccda83cb762bfd5fc50a26721abced08\"" Sep 11 00:02:15.863477 containerd[1489]: time="2025-09-11T00:02:15.863327527Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.3\"" Sep 11 00:02:15.867141 containerd[1489]: time="2025-09-11T00:02:15.867097087Z" level=info msg="CreateContainer within sandbox \"d206f6c61559018036909fc1a755d886ff63c86d33a0de846fb3466b0aed51e7\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Sep 11 00:02:15.875534 containerd[1489]: time="2025-09-11T00:02:15.875494186Z" level=info msg="Container 890e303842ed2caac6ce8470ee95f389980d076443f8f0b340a4bc9deeaf7611: CDI devices from CRI Config.CDIDevices: []" Sep 11 00:02:15.884905 containerd[1489]: time="2025-09-11T00:02:15.884852345Z" level=info msg="CreateContainer within sandbox \"d206f6c61559018036909fc1a755d886ff63c86d33a0de846fb3466b0aed51e7\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"890e303842ed2caac6ce8470ee95f389980d076443f8f0b340a4bc9deeaf7611\"" Sep 11 00:02:15.885448 containerd[1489]: time="2025-09-11T00:02:15.885421078Z" level=info msg="StartContainer for \"890e303842ed2caac6ce8470ee95f389980d076443f8f0b340a4bc9deeaf7611\"" Sep 11 00:02:15.886769 containerd[1489]: time="2025-09-11T00:02:15.886733506Z" level=info msg="connecting to shim 890e303842ed2caac6ce8470ee95f389980d076443f8f0b340a4bc9deeaf7611" address="unix:///run/containerd/s/9520f9067218cc787bfb31b24ddefeeb574749515bea653d8b638afcf42d2920" protocol=ttrpc version=3 Sep 11 00:02:15.908516 systemd[1]: Started cri-containerd-890e303842ed2caac6ce8470ee95f389980d076443f8f0b340a4bc9deeaf7611.scope - libcontainer container 890e303842ed2caac6ce8470ee95f389980d076443f8f0b340a4bc9deeaf7611. 
Sep 11 00:02:15.955910 containerd[1489]: time="2025-09-11T00:02:15.955872698Z" level=info msg="StartContainer for \"890e303842ed2caac6ce8470ee95f389980d076443f8f0b340a4bc9deeaf7611\" returns successfully" Sep 11 00:02:16.282246 containerd[1489]: time="2025-09-11T00:02:16.282196296Z" level=info msg="ImageUpdate event name:\"ghcr.io/flatcar/calico/apiserver:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 11 00:02:16.282886 containerd[1489]: time="2025-09-11T00:02:16.282841629Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.3: active requests=0, bytes read=77" Sep 11 00:02:16.284898 containerd[1489]: time="2025-09-11T00:02:16.284856272Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.30.3\" with image id \"sha256:632fbde00b1918016ac07458e79cc438ccda83cb762bfd5fc50a26721abced08\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:6a24147f11c1edce9d6ba79bdb0c2beadec53853fb43438a287291e67b41e51b\", size \"45900064\" in 421.461743ms" Sep 11 00:02:16.285072 containerd[1489]: time="2025-09-11T00:02:16.284884032Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.3\" returns image reference \"sha256:632fbde00b1918016ac07458e79cc438ccda83cb762bfd5fc50a26721abced08\"" Sep 11 00:02:16.285778 containerd[1489]: time="2025-09-11T00:02:16.285758650Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.3\"" Sep 11 00:02:16.289775 containerd[1489]: time="2025-09-11T00:02:16.289747414Z" level=info msg="CreateContainer within sandbox \"6c09aad4bfbf2589455106eb5868a774a1cda89eea91d47776f4b900a96fd7c6\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Sep 11 00:02:16.298230 containerd[1489]: time="2025-09-11T00:02:16.296826962Z" level=info msg="Container 9c84b9174d5a11514acd4dbd60badfd3521c028f428be3fa7380f100ce9e6f55: CDI devices from CRI Config.CDIDevices: []" Sep 11 00:02:16.306101 
containerd[1489]: time="2025-09-11T00:02:16.306066355Z" level=info msg="CreateContainer within sandbox \"6c09aad4bfbf2589455106eb5868a774a1cda89eea91d47776f4b900a96fd7c6\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"9c84b9174d5a11514acd4dbd60badfd3521c028f428be3fa7380f100ce9e6f55\"" Sep 11 00:02:16.306501 containerd[1489]: time="2025-09-11T00:02:16.306476483Z" level=info msg="StartContainer for \"9c84b9174d5a11514acd4dbd60badfd3521c028f428be3fa7380f100ce9e6f55\"" Sep 11 00:02:16.307756 containerd[1489]: time="2025-09-11T00:02:16.307707349Z" level=info msg="connecting to shim 9c84b9174d5a11514acd4dbd60badfd3521c028f428be3fa7380f100ce9e6f55" address="unix:///run/containerd/s/2e43ff6f1aa44979240557f4f56a9ac84cfa25e0083206fd7d95058dba3274e3" protocol=ttrpc version=3 Sep 11 00:02:16.329520 systemd[1]: Started cri-containerd-9c84b9174d5a11514acd4dbd60badfd3521c028f428be3fa7380f100ce9e6f55.scope - libcontainer container 9c84b9174d5a11514acd4dbd60badfd3521c028f428be3fa7380f100ce9e6f55. 
Sep 11 00:02:16.377335 containerd[1489]: time="2025-09-11T00:02:16.377215322Z" level=info msg="StartContainer for \"9c84b9174d5a11514acd4dbd60badfd3521c028f428be3fa7380f100ce9e6f55\" returns successfully"
Sep 11 00:02:17.000860 kubelet[2620]: I0911 00:02:17.000531 2620 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-7c8f858dd5-sqh2g" podStartSLOduration=29.686819885 podStartE2EDuration="35.000512027s" podCreationTimestamp="2025-09-11 00:01:42 +0000 UTC" firstStartedPulling="2025-09-11 00:02:10.971982227 +0000 UTC m=+46.314454300" lastFinishedPulling="2025-09-11 00:02:16.285674369 +0000 UTC m=+51.628146442" observedRunningTime="2025-09-11 00:02:16.98053625 +0000 UTC m=+52.323008323" watchObservedRunningTime="2025-09-11 00:02:17.000512027 +0000 UTC m=+52.342984100"
Sep 11 00:02:17.210745 systemd[1]: Started sshd@9-10.0.0.103:22-10.0.0.1:58482.service - OpenSSH per-connection server daemon (10.0.0.1:58482).
Sep 11 00:02:17.284102 sshd[5298]: Accepted publickey for core from 10.0.0.1 port 58482 ssh2: RSA SHA256:lsmhoLsJ6VkHSnmB7JrdlCWHjclEQMgNfFd+nspwIAE
Sep 11 00:02:17.286043 sshd-session[5298]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 11 00:02:17.291144 systemd-logind[1467]: New session 10 of user core.
Sep 11 00:02:17.296669 systemd[1]: Started session-10.scope - Session 10 of User core.
Sep 11 00:02:17.535291 sshd[5300]: Connection closed by 10.0.0.1 port 58482
Sep 11 00:02:17.535558 sshd-session[5298]: pam_unix(sshd:session): session closed for user core
Sep 11 00:02:17.555080 systemd[1]: sshd@9-10.0.0.103:22-10.0.0.1:58482.service: Deactivated successfully.
Sep 11 00:02:17.559661 systemd[1]: session-10.scope: Deactivated successfully.
Sep 11 00:02:17.560878 systemd-logind[1467]: Session 10 logged out. Waiting for processes to exit.
Sep 11 00:02:17.569858 systemd[1]: Started sshd@10-10.0.0.103:22-10.0.0.1:58490.service - OpenSSH per-connection server daemon (10.0.0.1:58490).
Sep 11 00:02:17.575221 systemd-logind[1467]: Removed session 10.
Sep 11 00:02:17.662798 sshd[5320]: Accepted publickey for core from 10.0.0.1 port 58490 ssh2: RSA SHA256:lsmhoLsJ6VkHSnmB7JrdlCWHjclEQMgNfFd+nspwIAE
Sep 11 00:02:17.664417 sshd-session[5320]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 11 00:02:17.668464 systemd-logind[1467]: New session 11 of user core.
Sep 11 00:02:17.676549 systemd[1]: Started session-11.scope - Session 11 of User core.
Sep 11 00:02:17.947649 sshd[5326]: Connection closed by 10.0.0.1 port 58490
Sep 11 00:02:17.948027 sshd-session[5320]: pam_unix(sshd:session): session closed for user core
Sep 11 00:02:17.959135 systemd[1]: sshd@10-10.0.0.103:22-10.0.0.1:58490.service: Deactivated successfully.
Sep 11 00:02:17.961997 systemd[1]: session-11.scope: Deactivated successfully.
Sep 11 00:02:17.963558 systemd-logind[1467]: Session 11 logged out. Waiting for processes to exit.
Sep 11 00:02:17.966309 systemd[1]: Started sshd@11-10.0.0.103:22-10.0.0.1:58506.service - OpenSSH per-connection server daemon (10.0.0.1:58506).
Sep 11 00:02:17.967203 systemd-logind[1467]: Removed session 11.
Sep 11 00:02:17.979980 containerd[1489]: time="2025-09-11T00:02:17.979850321Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 11 00:02:17.980877 containerd[1489]: time="2025-09-11T00:02:17.980836701Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.3: active requests=0, bytes read=13761208"
Sep 11 00:02:17.983366 containerd[1489]: time="2025-09-11T00:02:17.982283971Z" level=info msg="ImageCreate event name:\"sha256:a319b5bdc1001e98875b68e2943279adb74bcb19d09f1db857bc27959a078a65\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 11 00:02:17.991370 containerd[1489]: time="2025-09-11T00:02:17.990866307Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar@sha256:731ab232ca708102ab332340b1274d5cd656aa896ecc5368ee95850b811df86f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 11 00:02:17.992056 containerd[1489]: time="2025-09-11T00:02:17.992000450Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.3\" with image id \"sha256:a319b5bdc1001e98875b68e2943279adb74bcb19d09f1db857bc27959a078a65\", repo tag \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/node-driver-registrar@sha256:731ab232ca708102ab332340b1274d5cd656aa896ecc5368ee95850b811df86f\", size \"15130401\" in 1.706120717s"
Sep 11 00:02:17.992056 containerd[1489]: time="2025-09-11T00:02:17.992046531Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.3\" returns image reference \"sha256:a319b5bdc1001e98875b68e2943279adb74bcb19d09f1db857bc27959a078a65\""
Sep 11 00:02:18.000737 containerd[1489]: time="2025-09-11T00:02:18.000676028Z" level=info msg="CreateContainer within sandbox \"88648360964224ca15a4d8b935b2f0d8c12c47c4ee5fdc4ed83d45838e995f02\" for container &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,}"
Sep 11 00:02:18.020026 containerd[1489]: time="2025-09-11T00:02:18.019969297Z" level=info msg="Container da8b3fa0fb6b770c242af6790e5598b15f268b034d4fd95dcce21ba8f2c15a89: CDI devices from CRI Config.CDIDevices: []"
Sep 11 00:02:18.047156 containerd[1489]: time="2025-09-11T00:02:18.047106284Z" level=info msg="CreateContainer within sandbox \"88648360964224ca15a4d8b935b2f0d8c12c47c4ee5fdc4ed83d45838e995f02\" for &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,} returns container id \"da8b3fa0fb6b770c242af6790e5598b15f268b034d4fd95dcce21ba8f2c15a89\""
Sep 11 00:02:18.049156 containerd[1489]: time="2025-09-11T00:02:18.048374790Z" level=info msg="StartContainer for \"da8b3fa0fb6b770c242af6790e5598b15f268b034d4fd95dcce21ba8f2c15a89\""
Sep 11 00:02:18.051426 containerd[1489]: time="2025-09-11T00:02:18.051384970Z" level=info msg="connecting to shim da8b3fa0fb6b770c242af6790e5598b15f268b034d4fd95dcce21ba8f2c15a89" address="unix:///run/containerd/s/5b3f2afab9a326d91d3af3ba6c05e73ee307378abfa3d250e427c347d787370a" protocol=ttrpc version=3
Sep 11 00:02:18.051922 sshd[5340]: Accepted publickey for core from 10.0.0.1 port 58506 ssh2: RSA SHA256:lsmhoLsJ6VkHSnmB7JrdlCWHjclEQMgNfFd+nspwIAE
Sep 11 00:02:18.053719 sshd-session[5340]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 11 00:02:18.066051 systemd-logind[1467]: New session 12 of user core.
Sep 11 00:02:18.074622 systemd[1]: Started session-12.scope - Session 12 of User core.
Sep 11 00:02:18.102944 systemd[1]: Started cri-containerd-da8b3fa0fb6b770c242af6790e5598b15f268b034d4fd95dcce21ba8f2c15a89.scope - libcontainer container da8b3fa0fb6b770c242af6790e5598b15f268b034d4fd95dcce21ba8f2c15a89.
Sep 11 00:02:18.197171 containerd[1489]: time="2025-09-11T00:02:18.196998666Z" level=info msg="StartContainer for \"da8b3fa0fb6b770c242af6790e5598b15f268b034d4fd95dcce21ba8f2c15a89\" returns successfully"
Sep 11 00:02:18.274507 sshd[5350]: Connection closed by 10.0.0.1 port 58506
Sep 11 00:02:18.275891 sshd-session[5340]: pam_unix(sshd:session): session closed for user core
Sep 11 00:02:18.281424 systemd[1]: sshd@11-10.0.0.103:22-10.0.0.1:58506.service: Deactivated successfully.
Sep 11 00:02:18.284126 systemd[1]: session-12.scope: Deactivated successfully.
Sep 11 00:02:18.285633 systemd-logind[1467]: Session 12 logged out. Waiting for processes to exit.
Sep 11 00:02:18.287133 systemd-logind[1467]: Removed session 12.
Sep 11 00:02:18.385575 kubelet[2620]: I0911 00:02:18.385505 2620 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-7c8f858dd5-kfnlz" podStartSLOduration=30.656460131 podStartE2EDuration="36.384975216s" podCreationTimestamp="2025-09-11 00:01:42 +0000 UTC" firstStartedPulling="2025-09-11 00:02:10.134595277 +0000 UTC m=+45.477067350" lastFinishedPulling="2025-09-11 00:02:15.863110362 +0000 UTC m=+51.205582435" observedRunningTime="2025-09-11 00:02:17.002502348 +0000 UTC m=+52.344974461" watchObservedRunningTime="2025-09-11 00:02:18.384975216 +0000 UTC m=+53.727447289"
Sep 11 00:02:18.833090 kubelet[2620]: I0911 00:02:18.833054 2620 csi_plugin.go:106] kubernetes.io/csi: Trying to validate a new CSI Driver with name: csi.tigera.io endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock versions: 1.0.0
Sep 11 00:02:18.842768 kubelet[2620]: I0911 00:02:18.842731 2620 csi_plugin.go:119] kubernetes.io/csi: Register new plugin with name: csi.tigera.io at endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock
Sep 11 00:02:19.009596 kubelet[2620]: I0911 00:02:19.009521 2620 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/csi-node-driver-c9nz7" podStartSLOduration=24.080560854 podStartE2EDuration="32.009488203s" podCreationTimestamp="2025-09-11 00:01:47 +0000 UTC" firstStartedPulling="2025-09-11 00:02:10.064576172 +0000 UTC m=+45.407048245" lastFinishedPulling="2025-09-11 00:02:17.993503521 +0000 UTC m=+53.335975594" observedRunningTime="2025-09-11 00:02:19.008045455 +0000 UTC m=+54.350517528" watchObservedRunningTime="2025-09-11 00:02:19.009488203 +0000 UTC m=+54.351960276"
Sep 11 00:02:20.334693 containerd[1489]: time="2025-09-11T00:02:20.334631092Z" level=info msg="TaskExit event in podsandbox handler container_id:\"e585cee19fd14772e40cc3e80d2efba6c25e4b39e633215d34d9f39856b0d026\" id:\"ccce7e74bef6a5324e544bd39090a07070a82632e7643c32b741eb9f796c8f13\" pid:5405 exited_at:{seconds:1757548940 nanos:334248284}"
Sep 11 00:02:23.289943 systemd[1]: Started sshd@12-10.0.0.103:22-10.0.0.1:36266.service - OpenSSH per-connection server daemon (10.0.0.1:36266).
Sep 11 00:02:23.403732 sshd[5419]: Accepted publickey for core from 10.0.0.1 port 36266 ssh2: RSA SHA256:lsmhoLsJ6VkHSnmB7JrdlCWHjclEQMgNfFd+nspwIAE
Sep 11 00:02:23.405849 sshd-session[5419]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 11 00:02:23.412020 systemd-logind[1467]: New session 13 of user core.
Sep 11 00:02:23.423477 systemd[1]: Started session-13.scope - Session 13 of User core.
Sep 11 00:02:23.628449 sshd[5421]: Connection closed by 10.0.0.1 port 36266
Sep 11 00:02:23.628721 sshd-session[5419]: pam_unix(sshd:session): session closed for user core
Sep 11 00:02:23.644550 systemd[1]: sshd@12-10.0.0.103:22-10.0.0.1:36266.service: Deactivated successfully.
Sep 11 00:02:23.646185 systemd[1]: session-13.scope: Deactivated successfully.
Sep 11 00:02:23.646863 systemd-logind[1467]: Session 13 logged out. Waiting for processes to exit.
Sep 11 00:02:23.649573 systemd[1]: Started sshd@13-10.0.0.103:22-10.0.0.1:36268.service - OpenSSH per-connection server daemon (10.0.0.1:36268).
Sep 11 00:02:23.650227 systemd-logind[1467]: Removed session 13.
Sep 11 00:02:23.732902 sshd[5435]: Accepted publickey for core from 10.0.0.1 port 36268 ssh2: RSA SHA256:lsmhoLsJ6VkHSnmB7JrdlCWHjclEQMgNfFd+nspwIAE
Sep 11 00:02:23.736023 sshd-session[5435]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 11 00:02:23.741411 systemd-logind[1467]: New session 14 of user core.
Sep 11 00:02:23.749535 systemd[1]: Started session-14.scope - Session 14 of User core.
Sep 11 00:02:23.980720 sshd[5437]: Connection closed by 10.0.0.1 port 36268
Sep 11 00:02:23.981030 sshd-session[5435]: pam_unix(sshd:session): session closed for user core
Sep 11 00:02:23.988853 systemd[1]: sshd@13-10.0.0.103:22-10.0.0.1:36268.service: Deactivated successfully.
Sep 11 00:02:23.991766 systemd[1]: session-14.scope: Deactivated successfully.
Sep 11 00:02:23.992357 systemd-logind[1467]: Session 14 logged out. Waiting for processes to exit.
Sep 11 00:02:23.995243 systemd[1]: Started sshd@14-10.0.0.103:22-10.0.0.1:36282.service - OpenSSH per-connection server daemon (10.0.0.1:36282).
Sep 11 00:02:23.996434 systemd-logind[1467]: Removed session 14.
Sep 11 00:02:24.050361 sshd[5448]: Accepted publickey for core from 10.0.0.1 port 36282 ssh2: RSA SHA256:lsmhoLsJ6VkHSnmB7JrdlCWHjclEQMgNfFd+nspwIAE
Sep 11 00:02:24.051658 sshd-session[5448]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 11 00:02:24.056187 systemd-logind[1467]: New session 15 of user core.
Sep 11 00:02:24.065499 systemd[1]: Started session-15.scope - Session 15 of User core.
Sep 11 00:02:24.660692 sshd[5450]: Connection closed by 10.0.0.1 port 36282
Sep 11 00:02:24.661376 sshd-session[5448]: pam_unix(sshd:session): session closed for user core
Sep 11 00:02:24.669695 systemd[1]: sshd@14-10.0.0.103:22-10.0.0.1:36282.service: Deactivated successfully.
Sep 11 00:02:24.671700 systemd[1]: session-15.scope: Deactivated successfully.
Sep 11 00:02:24.673573 systemd-logind[1467]: Session 15 logged out. Waiting for processes to exit.
Sep 11 00:02:24.677810 systemd[1]: Started sshd@15-10.0.0.103:22-10.0.0.1:36294.service - OpenSSH per-connection server daemon (10.0.0.1:36294).
Sep 11 00:02:24.679398 systemd-logind[1467]: Removed session 15.
Sep 11 00:02:24.734317 sshd[5469]: Accepted publickey for core from 10.0.0.1 port 36294 ssh2: RSA SHA256:lsmhoLsJ6VkHSnmB7JrdlCWHjclEQMgNfFd+nspwIAE
Sep 11 00:02:24.735500 sshd-session[5469]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 11 00:02:24.742037 systemd-logind[1467]: New session 16 of user core.
Sep 11 00:02:24.749509 systemd[1]: Started session-16.scope - Session 16 of User core.
Sep 11 00:02:25.030176 sshd[5472]: Connection closed by 10.0.0.1 port 36294
Sep 11 00:02:25.030917 sshd-session[5469]: pam_unix(sshd:session): session closed for user core
Sep 11 00:02:25.041162 systemd[1]: sshd@15-10.0.0.103:22-10.0.0.1:36294.service: Deactivated successfully.
Sep 11 00:02:25.048087 systemd[1]: session-16.scope: Deactivated successfully.
Sep 11 00:02:25.051907 systemd-logind[1467]: Session 16 logged out. Waiting for processes to exit.
Sep 11 00:02:25.054032 systemd[1]: Started sshd@16-10.0.0.103:22-10.0.0.1:36310.service - OpenSSH per-connection server daemon (10.0.0.1:36310).
Sep 11 00:02:25.055883 systemd-logind[1467]: Removed session 16.
Sep 11 00:02:25.109082 sshd[5485]: Accepted publickey for core from 10.0.0.1 port 36310 ssh2: RSA SHA256:lsmhoLsJ6VkHSnmB7JrdlCWHjclEQMgNfFd+nspwIAE
Sep 11 00:02:25.110780 sshd-session[5485]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 11 00:02:25.115978 systemd-logind[1467]: New session 17 of user core.
Sep 11 00:02:25.128554 systemd[1]: Started session-17.scope - Session 17 of User core.
Sep 11 00:02:25.256072 sshd[5487]: Connection closed by 10.0.0.1 port 36310
Sep 11 00:02:25.256410 sshd-session[5485]: pam_unix(sshd:session): session closed for user core
Sep 11 00:02:25.259819 systemd[1]: sshd@16-10.0.0.103:22-10.0.0.1:36310.service: Deactivated successfully.
Sep 11 00:02:25.261581 systemd[1]: session-17.scope: Deactivated successfully.
Sep 11 00:02:25.262288 systemd-logind[1467]: Session 17 logged out. Waiting for processes to exit.
Sep 11 00:02:25.263824 systemd-logind[1467]: Removed session 17.
Sep 11 00:02:30.279232 systemd[1]: Started sshd@17-10.0.0.103:22-10.0.0.1:40290.service - OpenSSH per-connection server daemon (10.0.0.1:40290).
Sep 11 00:02:30.340594 sshd[5508]: Accepted publickey for core from 10.0.0.1 port 40290 ssh2: RSA SHA256:lsmhoLsJ6VkHSnmB7JrdlCWHjclEQMgNfFd+nspwIAE
Sep 11 00:02:30.341996 sshd-session[5508]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 11 00:02:30.351621 systemd-logind[1467]: New session 18 of user core.
Sep 11 00:02:30.355657 systemd[1]: Started session-18.scope - Session 18 of User core.
Sep 11 00:02:30.480266 sshd[5510]: Connection closed by 10.0.0.1 port 40290
Sep 11 00:02:30.480099 sshd-session[5508]: pam_unix(sshd:session): session closed for user core
Sep 11 00:02:30.483881 systemd[1]: sshd@17-10.0.0.103:22-10.0.0.1:40290.service: Deactivated successfully.
Sep 11 00:02:30.487058 systemd[1]: session-18.scope: Deactivated successfully.
Sep 11 00:02:30.487725 systemd-logind[1467]: Session 18 logged out. Waiting for processes to exit.
Sep 11 00:02:30.488996 systemd-logind[1467]: Removed session 18.
Sep 11 00:02:32.955146 containerd[1489]: time="2025-09-11T00:02:32.955023629Z" level=info msg="TaskExit event in podsandbox handler container_id:\"09ea663df9192c2db19e8800a47fa97000f057d9ea9cbd9613ea36beabc142a9\" id:\"6e4d49c506ab6c86c1f0d572de808f99aa37f09e60799dd3aa67c8dbcd0a966e\" pid:5540 exited_at:{seconds:1757548952 nanos:954738624}"
Sep 11 00:02:35.495714 systemd[1]: Started sshd@18-10.0.0.103:22-10.0.0.1:40294.service - OpenSSH per-connection server daemon (10.0.0.1:40294).
Sep 11 00:02:35.579129 sshd[5553]: Accepted publickey for core from 10.0.0.1 port 40294 ssh2: RSA SHA256:lsmhoLsJ6VkHSnmB7JrdlCWHjclEQMgNfFd+nspwIAE
Sep 11 00:02:35.580338 sshd-session[5553]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 11 00:02:35.584058 systemd-logind[1467]: New session 19 of user core.
Sep 11 00:02:35.593482 systemd[1]: Started session-19.scope - Session 19 of User core.
Sep 11 00:02:35.727218 sshd[5555]: Connection closed by 10.0.0.1 port 40294
Sep 11 00:02:35.726561 sshd-session[5553]: pam_unix(sshd:session): session closed for user core
Sep 11 00:02:35.730520 systemd[1]: sshd@18-10.0.0.103:22-10.0.0.1:40294.service: Deactivated successfully.
Sep 11 00:02:35.732339 systemd[1]: session-19.scope: Deactivated successfully.
Sep 11 00:02:35.733111 systemd-logind[1467]: Session 19 logged out. Waiting for processes to exit.
Sep 11 00:02:35.734163 systemd-logind[1467]: Removed session 19.