Sep 9 23:36:50.828405 kernel: Booting Linux on physical CPU 0x0000000000 [0x413fd0c1]
Sep 9 23:36:50.828427 kernel: Linux version 6.6.104-flatcar (build@pony-truck.infra.kinvolk.io) (aarch64-cros-linux-gnu-gcc (Gentoo Hardened 14.2.1_p20241221 p7) 14.2.1 20241221, GNU ld (Gentoo 2.43 p3) 2.43.1) #1 SMP PREEMPT Tue Sep 9 22:11:11 -00 2025
Sep 9 23:36:50.828437 kernel: KASLR enabled
Sep 9 23:36:50.828443 kernel: efi: EFI v2.7 by EDK II
Sep 9 23:36:50.828448 kernel: efi: SMBIOS 3.0=0xdced0000 MEMATTR=0xdbbae018 ACPI 2.0=0xd9b43018 RNG=0xd9b43a18 MEMRESERVE=0xd9b40218
Sep 9 23:36:50.828454 kernel: random: crng init done
Sep 9 23:36:50.828460 kernel: secureboot: Secure boot disabled
Sep 9 23:36:50.828466 kernel: ACPI: Early table checksum verification disabled
Sep 9 23:36:50.828472 kernel: ACPI: RSDP 0x00000000D9B43018 000024 (v02 BOCHS )
Sep 9 23:36:50.828480 kernel: ACPI: XSDT 0x00000000D9B43F18 000064 (v01 BOCHS BXPC 00000001 01000013)
Sep 9 23:36:50.828485 kernel: ACPI: FACP 0x00000000D9B43B18 000114 (v06 BOCHS BXPC 00000001 BXPC 00000001)
Sep 9 23:36:50.828491 kernel: ACPI: DSDT 0x00000000D9B41018 0014A2 (v02 BOCHS BXPC 00000001 BXPC 00000001)
Sep 9 23:36:50.828497 kernel: ACPI: APIC 0x00000000D9B43C98 0001A8 (v04 BOCHS BXPC 00000001 BXPC 00000001)
Sep 9 23:36:50.828503 kernel: ACPI: PPTT 0x00000000D9B43098 00009C (v02 BOCHS BXPC 00000001 BXPC 00000001)
Sep 9 23:36:50.828510 kernel: ACPI: GTDT 0x00000000D9B43818 000060 (v02 BOCHS BXPC 00000001 BXPC 00000001)
Sep 9 23:36:50.828517 kernel: ACPI: MCFG 0x00000000D9B43A98 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001)
Sep 9 23:36:50.828523 kernel: ACPI: SPCR 0x00000000D9B43918 000050 (v02 BOCHS BXPC 00000001 BXPC 00000001)
Sep 9 23:36:50.828530 kernel: ACPI: DBG2 0x00000000D9B43998 000057 (v00 BOCHS BXPC 00000001 BXPC 00000001)
Sep 9 23:36:50.828536 kernel: ACPI: IORT 0x00000000D9B43198 000080 (v03 BOCHS BXPC 00000001 BXPC 00000001)
Sep 9 23:36:50.828542 kernel: ACPI: SPCR: console: pl011,mmio,0x9000000,9600
Sep 9 23:36:50.828548 kernel: NUMA: Failed to initialise from firmware
Sep 9 23:36:50.828554 kernel: NUMA: Faking a node at [mem 0x0000000040000000-0x00000000dcffffff]
Sep 9 23:36:50.828560 kernel: NUMA: NODE_DATA [mem 0xdc958800-0xdc95dfff]
Sep 9 23:36:50.828566 kernel: Zone ranges:
Sep 9 23:36:50.828572 kernel: DMA [mem 0x0000000040000000-0x00000000dcffffff]
Sep 9 23:36:50.828579 kernel: DMA32 empty
Sep 9 23:36:50.828585 kernel: Normal empty
Sep 9 23:36:50.828591 kernel: Movable zone start for each node
Sep 9 23:36:50.828597 kernel: Early memory node ranges
Sep 9 23:36:50.828603 kernel: node 0: [mem 0x0000000040000000-0x00000000d967ffff]
Sep 9 23:36:50.828609 kernel: node 0: [mem 0x00000000d9680000-0x00000000d968ffff]
Sep 9 23:36:50.828615 kernel: node 0: [mem 0x00000000d9690000-0x00000000d976ffff]
Sep 9 23:36:50.828621 kernel: node 0: [mem 0x00000000d9770000-0x00000000d9b3ffff]
Sep 9 23:36:50.828627 kernel: node 0: [mem 0x00000000d9b40000-0x00000000dce1ffff]
Sep 9 23:36:50.828633 kernel: node 0: [mem 0x00000000dce20000-0x00000000dceaffff]
Sep 9 23:36:50.828639 kernel: node 0: [mem 0x00000000dceb0000-0x00000000dcebffff]
Sep 9 23:36:50.828646 kernel: node 0: [mem 0x00000000dcec0000-0x00000000dcfdffff]
Sep 9 23:36:50.828653 kernel: node 0: [mem 0x00000000dcfe0000-0x00000000dcffffff]
Sep 9 23:36:50.828660 kernel: Initmem setup node 0 [mem 0x0000000040000000-0x00000000dcffffff]
Sep 9 23:36:50.828666 kernel: On node 0, zone DMA: 12288 pages in unavailable ranges
Sep 9 23:36:50.828675 kernel: psci: probing for conduit method from ACPI.
Sep 9 23:36:50.828682 kernel: psci: PSCIv1.1 detected in firmware.
Sep 9 23:36:50.828688 kernel: psci: Using standard PSCI v0.2 function IDs
Sep 9 23:36:50.828696 kernel: psci: Trusted OS migration not required
Sep 9 23:36:50.828703 kernel: psci: SMC Calling Convention v1.1
Sep 9 23:36:50.828710 kernel: smccc: KVM: hypervisor services detected (0x00000000 0x00000000 0x00000000 0x00000003)
Sep 9 23:36:50.828717 kernel: percpu: Embedded 31 pages/cpu s86632 r8192 d32152 u126976
Sep 9 23:36:50.828723 kernel: pcpu-alloc: s86632 r8192 d32152 u126976 alloc=31*4096
Sep 9 23:36:50.828730 kernel: pcpu-alloc: [0] 0 [0] 1 [0] 2 [0] 3
Sep 9 23:36:50.828736 kernel: Detected PIPT I-cache on CPU0
Sep 9 23:36:50.828743 kernel: CPU features: detected: GIC system register CPU interface
Sep 9 23:36:50.828749 kernel: CPU features: detected: Hardware dirty bit management
Sep 9 23:36:50.828762 kernel: CPU features: detected: Spectre-v4
Sep 9 23:36:50.828771 kernel: CPU features: detected: Spectre-BHB
Sep 9 23:36:50.828778 kernel: CPU features: kernel page table isolation forced ON by KASLR
Sep 9 23:36:50.828785 kernel: CPU features: detected: Kernel page table isolation (KPTI)
Sep 9 23:36:50.828791 kernel: CPU features: detected: ARM erratum 1418040
Sep 9 23:36:50.828797 kernel: CPU features: detected: SSBS not fully self-synchronizing
Sep 9 23:36:50.828804 kernel: alternatives: applying boot alternatives
Sep 9 23:36:50.828811 kernel: Kernel command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected acpi=force verity.usrhash=21f768e38d6f559c285ae64c28cbdad2cb8e0d9191080506cf69923230b56ba0
Sep 9 23:36:50.828818 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Sep 9 23:36:50.828825 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Sep 9 23:36:50.828831 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Sep 9 23:36:50.828838 kernel: Fallback order for Node 0: 0
Sep 9 23:36:50.828845 kernel: Built 1 zonelists, mobility grouping on. Total pages: 633024
Sep 9 23:36:50.828852 kernel: Policy zone: DMA
Sep 9 23:36:50.828858 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Sep 9 23:36:50.828865 kernel: software IO TLB: area num 4.
Sep 9 23:36:50.828871 kernel: software IO TLB: mapped [mem 0x00000000d2e00000-0x00000000d6e00000] (64MB)
Sep 9 23:36:50.828878 kernel: Memory: 2387412K/2572288K available (10368K kernel code, 2186K rwdata, 8104K rodata, 38400K init, 897K bss, 184876K reserved, 0K cma-reserved)
Sep 9 23:36:50.828885 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1
Sep 9 23:36:50.828892 kernel: rcu: Preemptible hierarchical RCU implementation.
Sep 9 23:36:50.828898 kernel: rcu: RCU event tracing is enabled.
Sep 9 23:36:50.828905 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=4.
Sep 9 23:36:50.828912 kernel: Trampoline variant of Tasks RCU enabled.
Sep 9 23:36:50.828918 kernel: Tracing variant of Tasks RCU enabled.
Sep 9 23:36:50.828926 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Sep 9 23:36:50.828933 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4
Sep 9 23:36:50.828940 kernel: NR_IRQS: 64, nr_irqs: 64, preallocated irqs: 0
Sep 9 23:36:50.828946 kernel: GICv3: 256 SPIs implemented
Sep 9 23:36:50.828952 kernel: GICv3: 0 Extended SPIs implemented
Sep 9 23:36:50.828959 kernel: Root IRQ handler: gic_handle_irq
Sep 9 23:36:50.828965 kernel: GICv3: GICv3 features: 16 PPIs, DirectLPI
Sep 9 23:36:50.828972 kernel: GICv3: CPU0: found redistributor 0 region 0:0x00000000080a0000
Sep 9 23:36:50.828978 kernel: ITS [mem 0x08080000-0x0809ffff]
Sep 9 23:36:50.828985 kernel: ITS@0x0000000008080000: allocated 8192 Devices @400c0000 (indirect, esz 8, psz 64K, shr 1)
Sep 9 23:36:50.828992 kernel: ITS@0x0000000008080000: allocated 8192 Interrupt Collections @400d0000 (flat, esz 8, psz 64K, shr 1)
Sep 9 23:36:50.829000 kernel: GICv3: using LPI property table @0x00000000400f0000
Sep 9 23:36:50.829006 kernel: GICv3: CPU0: using allocated LPI pending table @0x0000000040100000
Sep 9 23:36:50.829013 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Sep 9 23:36:50.829019 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Sep 9 23:36:50.829026 kernel: arch_timer: cp15 timer(s) running at 25.00MHz (virt).
Sep 9 23:36:50.829032 kernel: clocksource: arch_sys_counter: mask: 0xffffffffffffff max_cycles: 0x5c40939b5, max_idle_ns: 440795202646 ns
Sep 9 23:36:50.829039 kernel: sched_clock: 56 bits at 25MHz, resolution 40ns, wraps every 4398046511100ns
Sep 9 23:36:50.829046 kernel: arm-pv: using stolen time PV
Sep 9 23:36:50.829052 kernel: Console: colour dummy device 80x25
Sep 9 23:36:50.829059 kernel: ACPI: Core revision 20230628
Sep 9 23:36:50.829066 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 50.00 BogoMIPS (lpj=25000)
Sep 9 23:36:50.829075 kernel: pid_max: default: 32768 minimum: 301
Sep 9 23:36:50.829081 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity
Sep 9 23:36:50.829107 kernel: landlock: Up and running.
Sep 9 23:36:50.829114 kernel: SELinux: Initializing.
Sep 9 23:36:50.829121 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Sep 9 23:36:50.829128 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Sep 9 23:36:50.829135 kernel: RCU Tasks: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Sep 9 23:36:50.829141 kernel: RCU Tasks Trace: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Sep 9 23:36:50.829148 kernel: rcu: Hierarchical SRCU implementation.
Sep 9 23:36:50.829157 kernel: rcu: Max phase no-delay instances is 400.
Sep 9 23:36:50.829164 kernel: Platform MSI: ITS@0x8080000 domain created
Sep 9 23:36:50.829170 kernel: PCI/MSI: ITS@0x8080000 domain created
Sep 9 23:36:50.829177 kernel: Remapping and enabling EFI services.
Sep 9 23:36:50.829184 kernel: smp: Bringing up secondary CPUs ...
Sep 9 23:36:50.829190 kernel: Detected PIPT I-cache on CPU1
Sep 9 23:36:50.829197 kernel: GICv3: CPU1: found redistributor 1 region 0:0x00000000080c0000
Sep 9 23:36:50.829204 kernel: GICv3: CPU1: using allocated LPI pending table @0x0000000040110000
Sep 9 23:36:50.829210 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Sep 9 23:36:50.829219 kernel: CPU1: Booted secondary processor 0x0000000001 [0x413fd0c1]
Sep 9 23:36:50.829226 kernel: Detected PIPT I-cache on CPU2
Sep 9 23:36:50.829243 kernel: GICv3: CPU2: found redistributor 2 region 0:0x00000000080e0000
Sep 9 23:36:50.829251 kernel: GICv3: CPU2: using allocated LPI pending table @0x0000000040120000
Sep 9 23:36:50.829258 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Sep 9 23:36:50.829265 kernel: CPU2: Booted secondary processor 0x0000000002 [0x413fd0c1]
Sep 9 23:36:50.829272 kernel: Detected PIPT I-cache on CPU3
Sep 9 23:36:50.829279 kernel: GICv3: CPU3: found redistributor 3 region 0:0x0000000008100000
Sep 9 23:36:50.829286 kernel: GICv3: CPU3: using allocated LPI pending table @0x0000000040130000
Sep 9 23:36:50.829295 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Sep 9 23:36:50.829302 kernel: CPU3: Booted secondary processor 0x0000000003 [0x413fd0c1]
Sep 9 23:36:50.829309 kernel: smp: Brought up 1 node, 4 CPUs
Sep 9 23:36:50.829316 kernel: SMP: Total of 4 processors activated.
Sep 9 23:36:50.829323 kernel: CPU features: detected: 32-bit EL0 Support
Sep 9 23:36:50.829330 kernel: CPU features: detected: Data cache clean to the PoU not required for I/D coherence
Sep 9 23:36:50.829337 kernel: CPU features: detected: Common not Private translations
Sep 9 23:36:50.829344 kernel: CPU features: detected: CRC32 instructions
Sep 9 23:36:50.829352 kernel: CPU features: detected: Enhanced Virtualization Traps
Sep 9 23:36:50.829359 kernel: CPU features: detected: RCpc load-acquire (LDAPR)
Sep 9 23:36:50.829366 kernel: CPU features: detected: LSE atomic instructions
Sep 9 23:36:50.829373 kernel: CPU features: detected: Privileged Access Never
Sep 9 23:36:50.829381 kernel: CPU features: detected: RAS Extension Support
Sep 9 23:36:50.829388 kernel: CPU features: detected: Speculative Store Bypassing Safe (SSBS)
Sep 9 23:36:50.829395 kernel: CPU: All CPU(s) started at EL1
Sep 9 23:36:50.829402 kernel: alternatives: applying system-wide alternatives
Sep 9 23:36:50.829408 kernel: devtmpfs: initialized
Sep 9 23:36:50.829416 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Sep 9 23:36:50.829424 kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear)
Sep 9 23:36:50.829431 kernel: pinctrl core: initialized pinctrl subsystem
Sep 9 23:36:50.829438 kernel: SMBIOS 3.0.0 present.
Sep 9 23:36:50.829445 kernel: DMI: QEMU KVM Virtual Machine, BIOS unknown 02/02/2022
Sep 9 23:36:50.829452 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Sep 9 23:36:50.829459 kernel: DMA: preallocated 512 KiB GFP_KERNEL pool for atomic allocations
Sep 9 23:36:50.829466 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations
Sep 9 23:36:50.829474 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations
Sep 9 23:36:50.829482 kernel: audit: initializing netlink subsys (disabled)
Sep 9 23:36:50.829489 kernel: audit: type=2000 audit(0.018:1): state=initialized audit_enabled=0 res=1
Sep 9 23:36:50.829496 kernel: thermal_sys: Registered thermal governor 'step_wise'
Sep 9 23:36:50.829503 kernel: cpuidle: using governor menu
Sep 9 23:36:50.829510 kernel: hw-breakpoint: found 6 breakpoint and 4 watchpoint registers.
Sep 9 23:36:50.829517 kernel: ASID allocator initialised with 32768 entries
Sep 9 23:36:50.829524 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Sep 9 23:36:50.829531 kernel: Serial: AMBA PL011 UART driver
Sep 9 23:36:50.829538 kernel: Modules: 2G module region forced by RANDOMIZE_MODULE_REGION_FULL
Sep 9 23:36:50.829547 kernel: Modules: 0 pages in range for non-PLT usage
Sep 9 23:36:50.829554 kernel: Modules: 509248 pages in range for PLT usage
Sep 9 23:36:50.829575 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Sep 9 23:36:50.829583 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 1.00 GiB page
Sep 9 23:36:50.829590 kernel: HugeTLB: registered 32.0 MiB page size, pre-allocated 0 pages
Sep 9 23:36:50.829597 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 32.0 MiB page
Sep 9 23:36:50.829605 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Sep 9 23:36:50.829611 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 2.00 MiB page
Sep 9 23:36:50.829618 kernel: HugeTLB: registered 64.0 KiB page size, pre-allocated 0 pages
Sep 9 23:36:50.829627 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 64.0 KiB page
Sep 9 23:36:50.829634 kernel: ACPI: Added _OSI(Module Device)
Sep 9 23:36:50.829641 kernel: ACPI: Added _OSI(Processor Device)
Sep 9 23:36:50.829648 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Sep 9 23:36:50.829655 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Sep 9 23:36:50.829662 kernel: ACPI: Interpreter enabled
Sep 9 23:36:50.829669 kernel: ACPI: Using GIC for interrupt routing
Sep 9 23:36:50.829676 kernel: ACPI: MCFG table detected, 1 entries
Sep 9 23:36:50.829683 kernel: ARMH0011:00: ttyAMA0 at MMIO 0x9000000 (irq = 12, base_baud = 0) is a SBSA
Sep 9 23:36:50.829691 kernel: printk: console [ttyAMA0] enabled
Sep 9 23:36:50.829699 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Sep 9 23:36:50.829882 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
Sep 9 23:36:50.829961 kernel: acpi PNP0A08:00: _OSC: platform does not support [LTR]
Sep 9 23:36:50.830025 kernel: acpi PNP0A08:00: _OSC: OS now controls [PCIeHotplug PME AER PCIeCapability]
Sep 9 23:36:50.830116 kernel: acpi PNP0A08:00: ECAM area [mem 0x4010000000-0x401fffffff] reserved by PNP0C02:00
Sep 9 23:36:50.830185 kernel: acpi PNP0A08:00: ECAM at [mem 0x4010000000-0x401fffffff] for [bus 00-ff]
Sep 9 23:36:50.830194 kernel: ACPI: Remapped I/O 0x000000003eff0000 to [io 0x0000-0xffff window]
Sep 9 23:36:50.830205 kernel: PCI host bridge to bus 0000:00
Sep 9 23:36:50.830274 kernel: pci_bus 0000:00: root bus resource [mem 0x10000000-0x3efeffff window]
Sep 9 23:36:50.830337 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0xffff window]
Sep 9 23:36:50.830394 kernel: pci_bus 0000:00: root bus resource [mem 0x8000000000-0xffffffffff window]
Sep 9 23:36:50.830451 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Sep 9 23:36:50.830532 kernel: pci 0000:00:00.0: [1b36:0008] type 00 class 0x060000
Sep 9 23:36:50.830611 kernel: pci 0000:00:01.0: [1af4:1005] type 00 class 0x00ff00
Sep 9 23:36:50.830680 kernel: pci 0000:00:01.0: reg 0x10: [io 0x0000-0x001f]
Sep 9 23:36:50.830747 kernel: pci 0000:00:01.0: reg 0x14: [mem 0x10000000-0x10000fff]
Sep 9 23:36:50.830832 kernel: pci 0000:00:01.0: reg 0x20: [mem 0x8000000000-0x8000003fff 64bit pref]
Sep 9 23:36:50.830900 kernel: pci 0000:00:01.0: BAR 4: assigned [mem 0x8000000000-0x8000003fff 64bit pref]
Sep 9 23:36:50.830965 kernel: pci 0000:00:01.0: BAR 1: assigned [mem 0x10000000-0x10000fff]
Sep 9 23:36:50.831030 kernel: pci 0000:00:01.0: BAR 0: assigned [io 0x1000-0x101f]
Sep 9 23:36:50.831107 kernel: pci_bus 0000:00: resource 4 [mem 0x10000000-0x3efeffff window]
Sep 9 23:36:50.831168 kernel: pci_bus 0000:00: resource 5 [io 0x0000-0xffff window]
Sep 9 23:36:50.831229 kernel: pci_bus 0000:00: resource 6 [mem 0x8000000000-0xffffffffff window]
Sep 9 23:36:50.831239 kernel: ACPI: PCI: Interrupt link GSI0 configured for IRQ 35
Sep 9 23:36:50.831246 kernel: ACPI: PCI: Interrupt link GSI1 configured for IRQ 36
Sep 9 23:36:50.831253 kernel: ACPI: PCI: Interrupt link GSI2 configured for IRQ 37
Sep 9 23:36:50.831260 kernel: ACPI: PCI: Interrupt link GSI3 configured for IRQ 38
Sep 9 23:36:50.831267 kernel: iommu: Default domain type: Translated
Sep 9 23:36:50.831277 kernel: iommu: DMA domain TLB invalidation policy: strict mode
Sep 9 23:36:50.831284 kernel: efivars: Registered efivars operations
Sep 9 23:36:50.831291 kernel: vgaarb: loaded
Sep 9 23:36:50.831298 kernel: clocksource: Switched to clocksource arch_sys_counter
Sep 9 23:36:50.831305 kernel: VFS: Disk quotas dquot_6.6.0
Sep 9 23:36:50.831312 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Sep 9 23:36:50.831319 kernel: pnp: PnP ACPI init
Sep 9 23:36:50.831395 kernel: system 00:00: [mem 0x4010000000-0x401fffffff window] could not be reserved
Sep 9 23:36:50.831407 kernel: pnp: PnP ACPI: found 1 devices
Sep 9 23:36:50.831414 kernel: NET: Registered PF_INET protocol family
Sep 9 23:36:50.831421 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
Sep 9 23:36:50.831428 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
Sep 9 23:36:50.831436 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Sep 9 23:36:50.831443 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
Sep 9 23:36:50.831450 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear)
Sep 9 23:36:50.831457 kernel: TCP: Hash tables configured (established 32768 bind 32768)
Sep 9 23:36:50.831464 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
Sep 9 23:36:50.831473 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
Sep 9 23:36:50.831480 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Sep 9 23:36:50.831487 kernel: PCI: CLS 0 bytes, default 64
Sep 9 23:36:50.831494 kernel: kvm [1]: HYP mode not available
Sep 9 23:36:50.831501 kernel: Initialise system trusted keyrings
Sep 9 23:36:50.831508 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
Sep 9 23:36:50.831515 kernel: Key type asymmetric registered
Sep 9 23:36:50.831522 kernel: Asymmetric key parser 'x509' registered
Sep 9 23:36:50.831529 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 250)
Sep 9 23:36:50.831538 kernel: io scheduler mq-deadline registered
Sep 9 23:36:50.831545 kernel: io scheduler kyber registered
Sep 9 23:36:50.831552 kernel: io scheduler bfq registered
Sep 9 23:36:50.831559 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXSYBUS:00/PNP0C0C:00/input/input0
Sep 9 23:36:50.831566 kernel: ACPI: button: Power Button [PWRB]
Sep 9 23:36:50.831573 kernel: ACPI: \_SB_.PCI0.GSI1: Enabled at IRQ 36
Sep 9 23:36:50.831638 kernel: virtio-pci 0000:00:01.0: enabling device (0005 -> 0007)
Sep 9 23:36:50.831647 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Sep 9 23:36:50.831654 kernel: thunder_xcv, ver 1.0
Sep 9 23:36:50.831661 kernel: thunder_bgx, ver 1.0
Sep 9 23:36:50.831670 kernel: nicpf, ver 1.0
Sep 9 23:36:50.831677 kernel: nicvf, ver 1.0
Sep 9 23:36:50.831747 kernel: rtc-efi rtc-efi.0: registered as rtc0
Sep 9 23:36:50.831818 kernel: rtc-efi rtc-efi.0: setting system clock to 2025-09-09T23:36:50 UTC (1757461010)
Sep 9 23:36:50.831828 kernel: hid: raw HID events driver (C) Jiri Kosina
Sep 9 23:36:50.831835 kernel: hw perfevents: enabled with armv8_pmuv3_0 PMU driver, 7 counters available
Sep 9 23:36:50.831842 kernel: watchdog: Delayed init of the lockup detector failed: -19
Sep 9 23:36:50.831852 kernel: watchdog: Hard watchdog permanently disabled
Sep 9 23:36:50.831859 kernel: NET: Registered PF_INET6 protocol family
Sep 9 23:36:50.831866 kernel: Segment Routing with IPv6
Sep 9 23:36:50.831873 kernel: In-situ OAM (IOAM) with IPv6
Sep 9 23:36:50.831880 kernel: NET: Registered PF_PACKET protocol family
Sep 9 23:36:50.831887 kernel: Key type dns_resolver registered
Sep 9 23:36:50.831894 kernel: registered taskstats version 1
Sep 9 23:36:50.831901 kernel: Loading compiled-in X.509 certificates
Sep 9 23:36:50.831908 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.104-flatcar: 3c4ba31f0a17c8a368cad32e74fc485e669c1e50'
Sep 9 23:36:50.831915 kernel: Key type .fscrypt registered
Sep 9 23:36:50.831924 kernel: Key type fscrypt-provisioning registered
Sep 9 23:36:50.831931 kernel: ima: No TPM chip found, activating TPM-bypass!
Sep 9 23:36:50.831938 kernel: ima: Allocated hash algorithm: sha1
Sep 9 23:36:50.831945 kernel: ima: No architecture policies found
Sep 9 23:36:50.831952 kernel: alg: No test for fips(ansi_cprng) (fips_ansi_cprng)
Sep 9 23:36:50.831959 kernel: clk: Disabling unused clocks
Sep 9 23:36:50.831966 kernel: Freeing unused kernel memory: 38400K
Sep 9 23:36:50.831973 kernel: Run /init as init process
Sep 9 23:36:50.831981 kernel: with arguments:
Sep 9 23:36:50.831988 kernel: /init
Sep 9 23:36:50.831995 kernel: with environment:
Sep 9 23:36:50.832002 kernel: HOME=/
Sep 9 23:36:50.832009 kernel: TERM=linux
Sep 9 23:36:50.832015 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a
Sep 9 23:36:50.832023 systemd[1]: Successfully made /usr/ read-only.
Sep 9 23:36:50.832033 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE)
Sep 9 23:36:50.832042 systemd[1]: Detected virtualization kvm.
Sep 9 23:36:50.832050 systemd[1]: Detected architecture arm64.
Sep 9 23:36:50.832057 systemd[1]: Running in initrd.
Sep 9 23:36:50.832065 systemd[1]: No hostname configured, using default hostname.
Sep 9 23:36:50.832072 systemd[1]: Hostname set to .
Sep 9 23:36:50.832080 systemd[1]: Initializing machine ID from VM UUID.
Sep 9 23:36:50.832105 systemd[1]: Queued start job for default target initrd.target.
Sep 9 23:36:50.832113 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Sep 9 23:36:50.832123 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Sep 9 23:36:50.832131 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Sep 9 23:36:50.832139 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Sep 9 23:36:50.832147 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Sep 9 23:36:50.832155 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Sep 9 23:36:50.832164 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Sep 9 23:36:50.832172 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Sep 9 23:36:50.832181 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Sep 9 23:36:50.832189 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Sep 9 23:36:50.832197 systemd[1]: Reached target paths.target - Path Units.
Sep 9 23:36:50.832204 systemd[1]: Reached target slices.target - Slice Units.
Sep 9 23:36:50.832212 systemd[1]: Reached target swap.target - Swaps.
Sep 9 23:36:50.832219 systemd[1]: Reached target timers.target - Timer Units.
Sep 9 23:36:50.832227 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Sep 9 23:36:50.832235 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Sep 9 23:36:50.832242 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Sep 9 23:36:50.832252 systemd[1]: Listening on systemd-journald.socket - Journal Sockets.
Sep 9 23:36:50.832259 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Sep 9 23:36:50.832267 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Sep 9 23:36:50.832275 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Sep 9 23:36:50.832282 systemd[1]: Reached target sockets.target - Socket Units.
Sep 9 23:36:50.832290 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Sep 9 23:36:50.832297 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Sep 9 23:36:50.832305 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Sep 9 23:36:50.832314 systemd[1]: Starting systemd-fsck-usr.service...
Sep 9 23:36:50.832322 systemd[1]: Starting systemd-journald.service - Journal Service...
Sep 9 23:36:50.832329 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Sep 9 23:36:50.832337 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Sep 9 23:36:50.832345 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Sep 9 23:36:50.832352 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Sep 9 23:36:50.832362 systemd[1]: Finished systemd-fsck-usr.service.
Sep 9 23:36:50.832369 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Sep 9 23:36:50.832395 systemd-journald[238]: Collecting audit messages is disabled.
Sep 9 23:36:50.832416 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Sep 9 23:36:50.832424 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Sep 9 23:36:50.832433 systemd-journald[238]: Journal started
Sep 9 23:36:50.832450 systemd-journald[238]: Runtime Journal (/run/log/journal/e4003da0e77840e6babf7ac31027f5a4) is 5.9M, max 47.3M, 41.4M free.
Sep 9 23:36:50.824845 systemd-modules-load[240]: Inserted module 'overlay'
Sep 9 23:36:50.836095 systemd[1]: Started systemd-journald.service - Journal Service.
Sep 9 23:36:50.836446 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Sep 9 23:36:50.839733 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Sep 9 23:36:50.842409 kernel: Bridge firewalling registered
Sep 9 23:36:50.840665 systemd-modules-load[240]: Inserted module 'br_netfilter'
Sep 9 23:36:50.841254 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Sep 9 23:36:50.842751 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Sep 9 23:36:50.845107 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Sep 9 23:36:50.848683 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Sep 9 23:36:50.852345 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Sep 9 23:36:50.856886 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Sep 9 23:36:50.859103 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Sep 9 23:36:50.860196 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Sep 9 23:36:50.875330 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Sep 9 23:36:50.877388 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Sep 9 23:36:50.885093 dracut-cmdline[279]: dracut-dracut-053
Sep 9 23:36:50.887503 dracut-cmdline[279]: Using kernel command line parameters: rd.driver.pre=btrfs BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected acpi=force verity.usrhash=21f768e38d6f559c285ae64c28cbdad2cb8e0d9191080506cf69923230b56ba0
Sep 9 23:36:50.906313 systemd-resolved[281]: Positive Trust Anchors:
Sep 9 23:36:50.906333 systemd-resolved[281]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Sep 9 23:36:50.906365 systemd-resolved[281]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Sep 9 23:36:50.911313 systemd-resolved[281]: Defaulting to hostname 'linux'.
Sep 9 23:36:50.913143 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Sep 9 23:36:50.914047 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Sep 9 23:36:50.950115 kernel: SCSI subsystem initialized
Sep 9 23:36:50.955102 kernel: Loading iSCSI transport class v2.0-870.
Sep 9 23:36:50.962111 kernel: iscsi: registered transport (tcp)
Sep 9 23:36:50.975103 kernel: iscsi: registered transport (qla4xxx)
Sep 9 23:36:50.975132 kernel: QLogic iSCSI HBA Driver
Sep 9 23:36:51.018556 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
Sep 9 23:36:51.032273 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
Sep 9 23:36:51.047904 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Sep 9 23:36:51.047970 kernel: device-mapper: uevent: version 1.0.3
Sep 9 23:36:51.047981 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com
Sep 9 23:36:51.094118 kernel: raid6: neonx8 gen() 15676 MB/s
Sep 9 23:36:51.111106 kernel: raid6: neonx4 gen() 15796 MB/s
Sep 9 23:36:51.128113 kernel: raid6: neonx2 gen() 13209 MB/s
Sep 9 23:36:51.145104 kernel: raid6: neonx1 gen() 10523 MB/s
Sep 9 23:36:51.162105 kernel: raid6: int64x8 gen() 6786 MB/s
Sep 9 23:36:51.179126 kernel: raid6: int64x4 gen() 7343 MB/s
Sep 9 23:36:51.196124 kernel: raid6: int64x2 gen() 6109 MB/s
Sep 9 23:36:51.213141 kernel: raid6: int64x1 gen() 5043 MB/s
Sep 9 23:36:51.213197 kernel: raid6: using algorithm neonx4 gen() 15796 MB/s
Sep 9 23:36:51.230133 kernel: raid6: .... xor() 12431 MB/s, rmw enabled
Sep 9 23:36:51.230176 kernel: raid6: using neon recovery algorithm
Sep 9 23:36:51.235110 kernel: xor: measuring software checksum speed
Sep 9 23:36:51.235144 kernel: 8regs : 21636 MB/sec
Sep 9 23:36:51.236120 kernel: 32regs : 19812 MB/sec
Sep 9 23:36:51.236148 kernel: arm64_neon : 27993 MB/sec
Sep 9 23:36:51.236158 kernel: xor: using function: arm64_neon (27993 MB/sec)
Sep 9 23:36:51.284126 kernel: Btrfs loaded, zoned=no, fsverity=no
Sep 9 23:36:51.294314 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
Sep 9 23:36:51.304269 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Sep 9 23:36:51.317677 systemd-udevd[463]: Using default interface naming scheme 'v255'.
Sep 9 23:36:51.321313 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Sep 9 23:36:51.325624 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
Sep 9 23:36:51.337810 dracut-pre-trigger[468]: rd.md=0: removing MD RAID activation
Sep 9 23:36:51.364679 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
Sep 9 23:36:51.374288 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Sep 9 23:36:51.415288 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Sep 9 23:36:51.426784 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
Sep 9 23:36:51.438345 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
Sep 9 23:36:51.439624 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
Sep 9 23:36:51.441055 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Sep 9 23:36:51.442809 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Sep 9 23:36:51.450313 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
Sep 9 23:36:51.461884 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
Sep 9 23:36:51.468101 kernel: virtio_blk virtio1: 1/0/0 default/read/poll queues
Sep 9 23:36:51.472904 kernel: virtio_blk virtio1: [vda] 19775488 512-byte logical blocks (10.1 GB/9.43 GiB)
Sep 9 23:36:51.479466 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
Sep 9 23:36:51.479518 kernel: GPT:9289727 != 19775487
Sep 9 23:36:51.479535 kernel: GPT:Alternate GPT header not at the end of the disk.
Sep 9 23:36:51.480513 kernel: GPT:9289727 != 19775487
Sep 9 23:36:51.480542 kernel: GPT: Use GNU Parted to correct GPT errors.
Sep 9 23:36:51.486363 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Sep 9 23:36:51.483688 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Sep 9 23:36:51.483774 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Sep 9 23:36:51.488024 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Sep 9 23:36:51.489953 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Sep 9 23:36:51.490023 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Sep 9 23:36:51.493324 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Sep 9 23:36:51.502258 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Sep 9 23:36:51.512635 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Sep 9 23:36:51.520246 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 scanned by (udev-worker) (507)
Sep 9 23:36:51.520284 kernel: BTRFS: device fsid 3ddee560-dcea-4f51-a281-f1376972e538 devid 1 transid 36 /dev/vda3 scanned by (udev-worker) (520)
Sep 9 23:36:51.534565 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT.
Sep 9 23:36:51.546309 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM.
Sep 9 23:36:51.553578 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
Sep 9 23:36:51.559548 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A.
Sep 9 23:36:51.560501 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132.
Sep 9 23:36:51.575248 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
Sep 9 23:36:51.577322 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Sep 9 23:36:51.581918 disk-uuid[552]: Primary Header is updated.
Sep 9 23:36:51.581918 disk-uuid[552]: Secondary Entries is updated.
Sep 9 23:36:51.581918 disk-uuid[552]: Secondary Header is updated.
Sep 9 23:36:51.584630 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Sep 9 23:36:51.601401 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Sep 9 23:36:52.593127 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Sep 9 23:36:52.593298 disk-uuid[553]: The operation has completed successfully.
Sep 9 23:36:52.618436 systemd[1]: disk-uuid.service: Deactivated successfully.
Sep 9 23:36:52.618548 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
Sep 9 23:36:52.654237 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr...
Sep 9 23:36:52.656968 sh[574]: Success
Sep 9 23:36:52.667111 kernel: device-mapper: verity: sha256 using implementation "sha256-ce"
Sep 9 23:36:52.695670 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr.
Sep 9 23:36:52.706497 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr...
Sep 9 23:36:52.707936 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr.
Sep 9 23:36:52.718422 kernel: BTRFS info (device dm-0): first mount of filesystem 3ddee560-dcea-4f51-a281-f1376972e538
Sep 9 23:36:52.718452 kernel: BTRFS info (device dm-0): using crc32c (crc32c-generic) checksum algorithm
Sep 9 23:36:52.718462 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead
Sep 9 23:36:52.719234 kernel: BTRFS info (device dm-0): disabling log replay at mount time
Sep 9 23:36:52.720303 kernel: BTRFS info (device dm-0): using free space tree
Sep 9 23:36:52.723427 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr.
Sep 9 23:36:52.724574 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
Sep 9 23:36:52.734221 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
Sep 9 23:36:52.735598 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
Sep 9 23:36:52.749738 kernel: BTRFS info (device vda6): first mount of filesystem 191f1648-95e8-4e77-9224-63d1cc235347
Sep 9 23:36:52.749791 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm
Sep 9 23:36:52.749802 kernel: BTRFS info (device vda6): using free space tree
Sep 9 23:36:52.752102 kernel: BTRFS info (device vda6): auto enabling async discard
Sep 9 23:36:52.756126 kernel: BTRFS info (device vda6): last unmount of filesystem 191f1648-95e8-4e77-9224-63d1cc235347
Sep 9 23:36:52.758472 systemd[1]: Finished ignition-setup.service - Ignition (setup).
Sep 9 23:36:52.766264 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
Sep 9 23:36:52.819635 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Sep 9 23:36:52.833283 systemd[1]: Starting systemd-networkd.service - Network Configuration
Sep 9 23:36:52.836519 ignition[662]: Ignition 2.20.0
Sep 9 23:36:52.836528 ignition[662]: Stage: fetch-offline
Sep 9 23:36:52.836561 ignition[662]: no configs at "/usr/lib/ignition/base.d"
Sep 9 23:36:52.836570 ignition[662]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Sep 9 23:36:52.836724 ignition[662]: parsed url from cmdline: ""
Sep 9 23:36:52.836727 ignition[662]: no config URL provided
Sep 9 23:36:52.836732 ignition[662]: reading system config file "/usr/lib/ignition/user.ign"
Sep 9 23:36:52.836739 ignition[662]: no config at "/usr/lib/ignition/user.ign"
Sep 9 23:36:52.836768 ignition[662]: op(1): [started] loading QEMU firmware config module
Sep 9 23:36:52.836772 ignition[662]: op(1): executing: "modprobe" "qemu_fw_cfg"
Sep 9 23:36:52.842197 ignition[662]: op(1): [finished] loading QEMU firmware config module
Sep 9 23:36:52.850231 ignition[662]: parsing config with SHA512: 1f96cb2353efa7ef62225a7410e052b0f595a4d1364dc73123330efa4113e63d81acb373163aafb3ff32d113a94a174572f1ccf543bdf35979eec178439eed01
Sep 9 23:36:52.853513 unknown[662]: fetched base config from "system"
Sep 9 23:36:52.853523 unknown[662]: fetched user config from "qemu"
Sep 9 23:36:52.853789 ignition[662]: fetch-offline: fetch-offline passed
Sep 9 23:36:52.853864 ignition[662]: Ignition finished successfully
Sep 9 23:36:52.857378 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
Sep 9 23:36:52.860492 systemd-networkd[761]: lo: Link UP
Sep 9 23:36:52.860501 systemd-networkd[761]: lo: Gained carrier
Sep 9 23:36:52.861348 systemd-networkd[761]: Enumeration completed
Sep 9 23:36:52.861446 systemd[1]: Started systemd-networkd.service - Network Configuration.
Sep 9 23:36:52.861731 systemd-networkd[761]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Sep 9 23:36:52.861734 systemd-networkd[761]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Sep 9 23:36:52.862823 systemd[1]: Reached target network.target - Network.
Sep 9 23:36:52.862847 systemd-networkd[761]: eth0: Link UP
Sep 9 23:36:52.862850 systemd-networkd[761]: eth0: Gained carrier
Sep 9 23:36:52.862857 systemd-networkd[761]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Sep 9 23:36:52.863947 systemd[1]: ignition-fetch.service - Ignition (fetch) was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json).
Sep 9 23:36:52.873236 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
Sep 9 23:36:52.885660 ignition[767]: Ignition 2.20.0
Sep 9 23:36:52.885669 ignition[767]: Stage: kargs
Sep 9 23:36:52.885856 ignition[767]: no configs at "/usr/lib/ignition/base.d"
Sep 9 23:36:52.885866 ignition[767]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Sep 9 23:36:52.888161 systemd-networkd[761]: eth0: DHCPv4 address 10.0.0.80/16, gateway 10.0.0.1 acquired from 10.0.0.1
Sep 9 23:36:52.886569 ignition[767]: kargs: kargs passed
Sep 9 23:36:52.889658 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
Sep 9 23:36:52.886615 ignition[767]: Ignition finished successfully
Sep 9 23:36:52.906276 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
Sep 9 23:36:52.916269 ignition[777]: Ignition 2.20.0
Sep 9 23:36:52.916280 ignition[777]: Stage: disks
Sep 9 23:36:52.916454 ignition[777]: no configs at "/usr/lib/ignition/base.d"
Sep 9 23:36:52.916464 ignition[777]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Sep 9 23:36:52.917262 ignition[777]: disks: disks passed
Sep 9 23:36:52.917308 ignition[777]: Ignition finished successfully
Sep 9 23:36:52.921158 systemd[1]: Finished ignition-disks.service - Ignition (disks).
Sep 9 23:36:52.922132 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
Sep 9 23:36:52.923401 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Sep 9 23:36:52.925030 systemd[1]: Reached target local-fs.target - Local File Systems.
Sep 9 23:36:52.926642 systemd[1]: Reached target sysinit.target - System Initialization.
Sep 9 23:36:52.927994 systemd[1]: Reached target basic.target - Basic System.
Sep 9 23:36:52.935244 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
Sep 9 23:36:52.944436 systemd-fsck[789]: ROOT: clean, 14/553520 files, 52654/553472 blocks
Sep 9 23:36:52.947424 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
Sep 9 23:36:52.957188 systemd[1]: Mounting sysroot.mount - /sysroot...
Sep 9 23:36:52.996107 kernel: EXT4-fs (vda9): mounted filesystem e3172dee-2277-4905-9eaa-a536ab409f20 r/w with ordered data mode. Quota mode: none.
Sep 9 23:36:52.996782 systemd[1]: Mounted sysroot.mount - /sysroot.
Sep 9 23:36:52.997937 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
Sep 9 23:36:53.008173 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Sep 9 23:36:53.009711 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
Sep 9 23:36:53.010965 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met.
Sep 9 23:36:53.011003 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Sep 9 23:36:53.016759 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/vda6 scanned by mount (799)
Sep 9 23:36:53.011027 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
Sep 9 23:36:53.020313 kernel: BTRFS info (device vda6): first mount of filesystem 191f1648-95e8-4e77-9224-63d1cc235347
Sep 9 23:36:53.020332 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm
Sep 9 23:36:53.020343 kernel: BTRFS info (device vda6): using free space tree
Sep 9 23:36:53.015419 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
Sep 9 23:36:53.018470 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
Sep 9 23:36:53.024103 kernel: BTRFS info (device vda6): auto enabling async discard
Sep 9 23:36:53.024623 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Sep 9 23:36:53.054337 initrd-setup-root[823]: cut: /sysroot/etc/passwd: No such file or directory
Sep 9 23:36:53.057577 initrd-setup-root[830]: cut: /sysroot/etc/group: No such file or directory
Sep 9 23:36:53.060595 initrd-setup-root[837]: cut: /sysroot/etc/shadow: No such file or directory
Sep 9 23:36:53.063359 initrd-setup-root[844]: cut: /sysroot/etc/gshadow: No such file or directory
Sep 9 23:36:53.130422 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
Sep 9 23:36:53.143194 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
Sep 9 23:36:53.145474 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
Sep 9 23:36:53.150100 kernel: BTRFS info (device vda6): last unmount of filesystem 191f1648-95e8-4e77-9224-63d1cc235347
Sep 9 23:36:53.164332 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
Sep 9 23:36:53.167677 ignition[911]: INFO : Ignition 2.20.0
Sep 9 23:36:53.167677 ignition[911]: INFO : Stage: mount
Sep 9 23:36:53.169697 ignition[911]: INFO : no configs at "/usr/lib/ignition/base.d"
Sep 9 23:36:53.169697 ignition[911]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Sep 9 23:36:53.169697 ignition[911]: INFO : mount: mount passed
Sep 9 23:36:53.169697 ignition[911]: INFO : Ignition finished successfully
Sep 9 23:36:53.170142 systemd[1]: Finished ignition-mount.service - Ignition (mount).
Sep 9 23:36:53.182181 systemd[1]: Starting ignition-files.service - Ignition (files)...
Sep 9 23:36:53.837306 systemd[1]: sysroot-oem.mount: Deactivated successfully.
Sep 9 23:36:53.846269 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Sep 9 23:36:53.852722 kernel: BTRFS: device label OEM devid 1 transid 13 /dev/vda6 scanned by mount (924)
Sep 9 23:36:53.852761 kernel: BTRFS info (device vda6): first mount of filesystem 191f1648-95e8-4e77-9224-63d1cc235347
Sep 9 23:36:53.852773 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm
Sep 9 23:36:53.854100 kernel: BTRFS info (device vda6): using free space tree
Sep 9 23:36:53.856106 kernel: BTRFS info (device vda6): auto enabling async discard
Sep 9 23:36:53.856970 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Sep 9 23:36:53.872109 ignition[941]: INFO : Ignition 2.20.0
Sep 9 23:36:53.872109 ignition[941]: INFO : Stage: files
Sep 9 23:36:53.873438 ignition[941]: INFO : no configs at "/usr/lib/ignition/base.d"
Sep 9 23:36:53.873438 ignition[941]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Sep 9 23:36:53.873438 ignition[941]: DEBUG : files: compiled without relabeling support, skipping
Sep 9 23:36:53.876148 ignition[941]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Sep 9 23:36:53.876148 ignition[941]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Sep 9 23:36:53.876148 ignition[941]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Sep 9 23:36:53.876148 ignition[941]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Sep 9 23:36:53.876148 ignition[941]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Sep 9 23:36:53.875989 unknown[941]: wrote ssh authorized keys file for user: core
Sep 9 23:36:53.881913 ignition[941]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/home/core/install.sh"
Sep 9 23:36:53.881913 ignition[941]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/home/core/install.sh"
Sep 9 23:36:53.881913 ignition[941]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/etc/flatcar/update.conf"
Sep 9 23:36:53.881913 ignition[941]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Sep 9 23:36:53.881913 ignition[941]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.33.0-arm64.raw"
Sep 9 23:36:53.881913 ignition[941]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.33.0-arm64.raw"
Sep 9 23:36:53.881913 ignition[941]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.33.0-arm64.raw"
Sep 9 23:36:53.881913 ignition[941]: INFO : files: createFilesystemsFiles: createFiles: op(6): GET https://extensions.flatcar.org/extensions/kubernetes-v1.33.0-arm64.raw: attempt #1
Sep 9 23:36:54.307415 ignition[941]: INFO : files: createFilesystemsFiles: createFiles: op(6): GET result: OK
Sep 9 23:36:54.714834 ignition[941]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.33.0-arm64.raw"
Sep 9 23:36:54.714834 ignition[941]: INFO : files: op(7): [started] processing unit "coreos-metadata.service"
Sep 9 23:36:54.717880 ignition[941]: INFO : files: op(7): op(8): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
Sep 9 23:36:54.717880 ignition[941]: INFO : files: op(7): op(8): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
Sep 9 23:36:54.717880 ignition[941]: INFO : files: op(7): [finished] processing unit "coreos-metadata.service"
Sep 9 23:36:54.717880 ignition[941]: INFO : files: op(9): [started] setting preset to disabled for "coreos-metadata.service"
Sep 9 23:36:54.728783 ignition[941]: INFO : files: op(9): op(a): [started] removing enablement symlink(s) for "coreos-metadata.service"
Sep 9 23:36:54.732081 ignition[941]: INFO : files: op(9): op(a): [finished] removing enablement symlink(s) for "coreos-metadata.service"
Sep 9 23:36:54.734248 ignition[941]: INFO : files: op(9): [finished] setting preset to disabled for "coreos-metadata.service"
Sep 9 23:36:54.734248 ignition[941]: INFO : files: createResultFile: createFiles: op(b): [started] writing file "/sysroot/etc/.ignition-result.json"
Sep 9 23:36:54.734248 ignition[941]: INFO : files: createResultFile: createFiles: op(b): [finished] writing file "/sysroot/etc/.ignition-result.json"
Sep 9 23:36:54.734248 ignition[941]: INFO : files: files passed
Sep 9 23:36:54.734248 ignition[941]: INFO : Ignition finished successfully
Sep 9 23:36:54.734635 systemd[1]: Finished ignition-files.service - Ignition (files).
Sep 9 23:36:54.743271 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
Sep 9 23:36:54.744879 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
Sep 9 23:36:54.747595 systemd[1]: ignition-quench.service: Deactivated successfully.
Sep 9 23:36:54.749130 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
Sep 9 23:36:54.751237 initrd-setup-root-after-ignition[970]: grep: /sysroot/oem/oem-release: No such file or directory
Sep 9 23:36:54.753317 initrd-setup-root-after-ignition[972]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Sep 9 23:36:54.753317 initrd-setup-root-after-ignition[972]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
Sep 9 23:36:54.755747 initrd-setup-root-after-ignition[976]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Sep 9 23:36:54.755458 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
Sep 9 23:36:54.756842 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
Sep 9 23:36:54.768239 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
Sep 9 23:36:54.787885 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Sep 9 23:36:54.787990 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
Sep 9 23:36:54.789891 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
Sep 9 23:36:54.791241 systemd[1]: Reached target initrd.target - Initrd Default Target.
Sep 9 23:36:54.792635 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
Sep 9 23:36:54.800213 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
Sep 9 23:36:54.811917 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Sep 9 23:36:54.814128 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
Sep 9 23:36:54.821180 systemd-networkd[761]: eth0: Gained IPv6LL
Sep 9 23:36:54.825557 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
Sep 9 23:36:54.826570 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
Sep 9 23:36:54.828122 systemd[1]: Stopped target timers.target - Timer Units.
Sep 9 23:36:54.829764 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Sep 9 23:36:54.829873 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Sep 9 23:36:54.831856 systemd[1]: Stopped target initrd.target - Initrd Default Target.
Sep 9 23:36:54.832730 systemd[1]: Stopped target basic.target - Basic System.
Sep 9 23:36:54.834226 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
Sep 9 23:36:54.835735 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup.
Sep 9 23:36:54.837173 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
Sep 9 23:36:54.838863 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
Sep 9 23:36:54.840442 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems.
Sep 9 23:36:54.842044 systemd[1]: Stopped target sysinit.target - System Initialization.
Sep 9 23:36:54.843558 systemd[1]: Stopped target local-fs.target - Local File Systems.
Sep 9 23:36:54.845115 systemd[1]: Stopped target swap.target - Swaps.
Sep 9 23:36:54.846454 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Sep 9 23:36:54.846565 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook.
Sep 9 23:36:54.848471 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes.
Sep 9 23:36:54.849356 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Sep 9 23:36:54.850886 systemd[1]: clevis-luks-askpass.path: Deactivated successfully.
Sep 9 23:36:54.850984 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Sep 9 23:36:54.852510 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Sep 9 23:36:54.852612 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook.
Sep 9 23:36:54.854779 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
Sep 9 23:36:54.854896 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline).
Sep 9 23:36:54.856520 systemd[1]: Stopped target paths.target - Path Units.
Sep 9 23:36:54.858032 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Sep 9 23:36:54.862155 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Sep 9 23:36:54.863365 systemd[1]: Stopped target slices.target - Slice Units.
Sep 9 23:36:54.865022 systemd[1]: Stopped target sockets.target - Socket Units.
Sep 9 23:36:54.866385 systemd[1]: iscsid.socket: Deactivated successfully.
Sep 9 23:36:54.866472 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket.
Sep 9 23:36:54.867640 systemd[1]: iscsiuio.socket: Deactivated successfully.
Sep 9 23:36:54.867715 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Sep 9 23:36:54.868946 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
Sep 9 23:36:54.869058 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion.
Sep 9 23:36:54.870389 systemd[1]: ignition-files.service: Deactivated successfully.
Sep 9 23:36:54.870491 systemd[1]: Stopped ignition-files.service - Ignition (files).
Sep 9 23:36:54.884247 systemd[1]: Stopping ignition-mount.service - Ignition (mount)...
Sep 9 23:36:54.884939 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Sep 9 23:36:54.885059 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes.
Sep 9 23:36:54.887336 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot...
Sep 9 23:36:54.888033 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Sep 9 23:36:54.888191 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices.
Sep 9 23:36:54.889158 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Sep 9 23:36:54.889262 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook.
Sep 9 23:36:54.893922 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Sep 9 23:36:54.895192 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons.
Sep 9 23:36:54.898988 ignition[996]: INFO : Ignition 2.20.0
Sep 9 23:36:54.898988 ignition[996]: INFO : Stage: umount
Sep 9 23:36:54.898988 ignition[996]: INFO : no configs at "/usr/lib/ignition/base.d"
Sep 9 23:36:54.898988 ignition[996]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Sep 9 23:36:54.902234 ignition[996]: INFO : umount: umount passed
Sep 9 23:36:54.902234 ignition[996]: INFO : Ignition finished successfully
Sep 9 23:36:54.901032 systemd[1]: ignition-mount.service: Deactivated successfully.
Sep 9 23:36:54.903110 systemd[1]: Stopped ignition-mount.service - Ignition (mount).
Sep 9 23:36:54.905230 systemd[1]: sysroot-boot.mount: Deactivated successfully.
Sep 9 23:36:54.905549 systemd[1]: Stopped target network.target - Network.
Sep 9 23:36:54.906783 systemd[1]: ignition-disks.service: Deactivated successfully.
Sep 9 23:36:54.906839 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Sep 9 23:36:54.908124 systemd[1]: ignition-kargs.service: Deactivated successfully. Sep 9 23:36:54.908166 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Sep 9 23:36:54.909645 systemd[1]: ignition-setup.service: Deactivated successfully. Sep 9 23:36:54.909688 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Sep 9 23:36:54.910919 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Sep 9 23:36:54.910957 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Sep 9 23:36:54.912416 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Sep 9 23:36:54.913824 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Sep 9 23:36:54.920531 systemd[1]: systemd-resolved.service: Deactivated successfully. Sep 9 23:36:54.921258 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Sep 9 23:36:54.923988 systemd[1]: run-credentials-systemd\x2dresolved.service.mount: Deactivated successfully. Sep 9 23:36:54.924247 systemd[1]: systemd-networkd.service: Deactivated successfully. Sep 9 23:36:54.924362 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Sep 9 23:36:54.927866 systemd[1]: run-credentials-systemd\x2dnetworkd.service.mount: Deactivated successfully. Sep 9 23:36:54.928457 systemd[1]: systemd-networkd.socket: Deactivated successfully. Sep 9 23:36:54.928510 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Sep 9 23:36:54.938203 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Sep 9 23:36:54.938902 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Sep 9 23:36:54.938959 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Sep 9 23:36:54.940577 systemd[1]: systemd-sysctl.service: Deactivated successfully. Sep 9 23:36:54.940620 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Sep 9 23:36:54.942950 systemd[1]: systemd-modules-load.service: Deactivated successfully. Sep 9 23:36:54.942991 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Sep 9 23:36:54.944505 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Sep 9 23:36:54.944544 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. Sep 9 23:36:54.946895 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Sep 9 23:36:54.949750 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully. Sep 9 23:36:54.949812 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup.service.mount: Deactivated successfully. Sep 9 23:36:54.955787 systemd[1]: network-cleanup.service: Deactivated successfully. Sep 9 23:36:54.955898 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Sep 9 23:36:54.968769 systemd[1]: systemd-udevd.service: Deactivated successfully. Sep 9 23:36:54.968920 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Sep 9 23:36:54.970839 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Sep 9 23:36:54.970879 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Sep 9 23:36:54.971798 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Sep 9 23:36:54.971829 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. 
Sep 9 23:36:54.972636 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Sep 9 23:36:54.972677 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Sep 9 23:36:54.974921 systemd[1]: dracut-cmdline.service: Deactivated successfully. Sep 9 23:36:54.974964 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Sep 9 23:36:54.976820 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Sep 9 23:36:54.976866 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Sep 9 23:36:54.990270 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Sep 9 23:36:54.992072 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Sep 9 23:36:54.992141 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Sep 9 23:36:54.994546 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Sep 9 23:36:54.994594 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Sep 9 23:36:54.997354 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup\x2ddev.service.mount: Deactivated successfully. Sep 9 23:36:54.997405 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully. Sep 9 23:36:54.997672 systemd[1]: sysroot-boot.service: Deactivated successfully. Sep 9 23:36:54.997764 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Sep 9 23:36:54.999670 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Sep 9 23:36:54.999767 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Sep 9 23:36:55.001265 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Sep 9 23:36:55.002411 systemd[1]: initrd-setup-root.service: Deactivated successfully. Sep 9 23:36:55.002478 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Sep 9 23:36:55.004662 systemd[1]: Starting initrd-switch-root.service - Switch Root... Sep 9 23:36:55.014233 systemd[1]: Switching root. Sep 9 23:36:55.040928 systemd-journald[238]: Journal stopped Sep 9 23:36:55.744977 systemd-journald[238]: Received SIGTERM from PID 1 (systemd). Sep 9 23:36:55.745030 kernel: SELinux: policy capability network_peer_controls=1 Sep 9 23:36:55.745046 kernel: SELinux: policy capability open_perms=1 Sep 9 23:36:55.745056 kernel: SELinux: policy capability extended_socket_class=1 Sep 9 23:36:55.745069 kernel: SELinux: policy capability always_check_network=0 Sep 9 23:36:55.745078 kernel: SELinux: policy capability cgroup_seclabel=1 Sep 9 23:36:55.745109 kernel: SELinux: policy capability nnp_nosuid_transition=1 Sep 9 23:36:55.745121 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Sep 9 23:36:55.745131 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Sep 9 23:36:55.745141 kernel: audit: type=1403 audit(1757461015.169:2): auid=4294967295 ses=4294967295 lsm=selinux res=1 Sep 9 23:36:55.745175 systemd[1]: Successfully loaded SELinux policy in 31.644ms. Sep 9 23:36:55.745195 systemd[1]: Relabeled /dev/, /dev/shm/, /run/ in 9.864ms. 
Sep 9 23:36:55.745206 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE) Sep 9 23:36:55.745217 systemd[1]: Detected virtualization kvm. Sep 9 23:36:55.745229 systemd[1]: Detected architecture arm64. Sep 9 23:36:55.745239 systemd[1]: Detected first boot. Sep 9 23:36:55.745250 systemd[1]: Initializing machine ID from VM UUID. Sep 9 23:36:55.745260 kernel: NET: Registered PF_VSOCK protocol family Sep 9 23:36:55.745269 zram_generator::config[1042]: No configuration found. Sep 9 23:36:55.745281 systemd[1]: Populated /etc with preset unit settings. Sep 9 23:36:55.745292 systemd[1]: run-credentials-systemd\x2djournald.service.mount: Deactivated successfully. Sep 9 23:36:55.745302 systemd[1]: initrd-switch-root.service: Deactivated successfully. Sep 9 23:36:55.745312 systemd[1]: Stopped initrd-switch-root.service - Switch Root. Sep 9 23:36:55.745322 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. Sep 9 23:36:55.745334 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. Sep 9 23:36:55.745344 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. Sep 9 23:36:55.745354 systemd[1]: Created slice system-getty.slice - Slice /system/getty. Sep 9 23:36:55.745365 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. Sep 9 23:36:55.745376 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. Sep 9 23:36:55.745386 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. Sep 9 23:36:55.745396 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. Sep 9 23:36:55.745406 systemd[1]: Created slice user.slice - User and Session Slice. Sep 9 23:36:55.745418 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Sep 9 23:36:55.745429 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Sep 9 23:36:55.745439 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. Sep 9 23:36:55.745449 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. Sep 9 23:36:55.745460 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. Sep 9 23:36:55.745470 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Sep 9 23:36:55.745480 systemd[1]: Expecting device dev-ttyAMA0.device - /dev/ttyAMA0... Sep 9 23:36:55.745490 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Sep 9 23:36:55.745500 systemd[1]: Stopped target initrd-switch-root.target - Switch Root. Sep 9 23:36:55.745512 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems. Sep 9 23:36:55.745522 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System. Sep 9 23:36:55.745532 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. Sep 9 23:36:55.745542 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. 
Sep 9 23:36:55.745553 systemd[1]: Reached target remote-fs.target - Remote File Systems. Sep 9 23:36:55.745563 systemd[1]: Reached target slices.target - Slice Units. Sep 9 23:36:55.745574 systemd[1]: Reached target swap.target - Swaps. Sep 9 23:36:55.745583 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. Sep 9 23:36:55.745595 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. Sep 9 23:36:55.745605 systemd[1]: Listening on systemd-creds.socket - Credential Encryption/Decryption. Sep 9 23:36:55.745615 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Sep 9 23:36:55.745625 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Sep 9 23:36:55.745636 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Sep 9 23:36:55.745646 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. Sep 9 23:36:55.745658 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... Sep 9 23:36:55.745668 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... Sep 9 23:36:55.745678 systemd[1]: Mounting media.mount - External Media Directory... Sep 9 23:36:55.745690 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... Sep 9 23:36:55.745700 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... Sep 9 23:36:55.745710 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... Sep 9 23:36:55.745721 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Sep 9 23:36:55.745732 systemd[1]: Reached target machines.target - Containers. Sep 9 23:36:55.745750 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... Sep 9 23:36:55.745762 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Sep 9 23:36:55.745773 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Sep 9 23:36:55.745783 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... Sep 9 23:36:55.745796 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Sep 9 23:36:55.745806 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Sep 9 23:36:55.745816 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Sep 9 23:36:55.745828 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... Sep 9 23:36:55.745838 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Sep 9 23:36:55.745848 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Sep 9 23:36:55.745858 systemd[1]: systemd-fsck-root.service: Deactivated successfully. Sep 9 23:36:55.745869 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device. Sep 9 23:36:55.745881 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. Sep 9 23:36:55.745891 systemd[1]: Stopped systemd-fsck-usr.service. Sep 9 23:36:55.745902 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). 
Sep 9 23:36:55.745912 kernel: fuse: init (API version 7.39) Sep 9 23:36:55.745922 kernel: loop: module loaded Sep 9 23:36:55.745931 kernel: ACPI: bus type drm_connector registered Sep 9 23:36:55.745940 systemd[1]: Starting systemd-journald.service - Journal Service... Sep 9 23:36:55.745955 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Sep 9 23:36:55.745967 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Sep 9 23:36:55.745980 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... Sep 9 23:36:55.745990 systemd[1]: Starting systemd-udev-load-credentials.service - Load udev Rules from Credentials... Sep 9 23:36:55.746021 systemd-journald[1117]: Collecting audit messages is disabled. Sep 9 23:36:55.746044 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Sep 9 23:36:55.746055 systemd-journald[1117]: Journal started Sep 9 23:36:55.746076 systemd-journald[1117]: Runtime Journal (/run/log/journal/e4003da0e77840e6babf7ac31027f5a4) is 5.9M, max 47.3M, 41.4M free. Sep 9 23:36:55.558335 systemd[1]: Queued start job for default target multi-user.target. Sep 9 23:36:55.569186 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6. Sep 9 23:36:55.569565 systemd[1]: systemd-journald.service: Deactivated successfully. Sep 9 23:36:55.748277 systemd[1]: verity-setup.service: Deactivated successfully. Sep 9 23:36:55.748315 systemd[1]: Stopped verity-setup.service. Sep 9 23:36:55.752101 systemd[1]: Started systemd-journald.service - Journal Service. Sep 9 23:36:55.753532 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. Sep 9 23:36:55.754476 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. Sep 9 23:36:55.755492 systemd[1]: Mounted media.mount - External Media Directory. Sep 9 23:36:55.756395 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. Sep 9 23:36:55.757365 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. Sep 9 23:36:55.758305 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. Sep 9 23:36:55.760151 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. Sep 9 23:36:55.761423 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Sep 9 23:36:55.762677 systemd[1]: modprobe@configfs.service: Deactivated successfully. Sep 9 23:36:55.762856 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. Sep 9 23:36:55.764119 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Sep 9 23:36:55.764284 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Sep 9 23:36:55.765422 systemd[1]: modprobe@drm.service: Deactivated successfully. Sep 9 23:36:55.765598 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Sep 9 23:36:55.766725 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Sep 9 23:36:55.766914 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Sep 9 23:36:55.768246 systemd[1]: modprobe@fuse.service: Deactivated successfully. Sep 9 23:36:55.768413 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. Sep 9 23:36:55.769589 systemd[1]: modprobe@loop.service: Deactivated successfully. Sep 9 23:36:55.769792 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Sep 9 23:36:55.770962 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. 
Sep 9 23:36:55.772479 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Sep 9 23:36:55.773731 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. Sep 9 23:36:55.775104 systemd[1]: Finished systemd-udev-load-credentials.service - Load udev Rules from Credentials. Sep 9 23:36:55.787433 systemd[1]: Reached target network-pre.target - Preparation for Network. Sep 9 23:36:55.799238 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... Sep 9 23:36:55.801304 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... Sep 9 23:36:55.802165 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Sep 9 23:36:55.802209 systemd[1]: Reached target local-fs.target - Local File Systems. Sep 9 23:36:55.803898 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management. Sep 9 23:36:55.806113 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... Sep 9 23:36:55.807969 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... Sep 9 23:36:55.808966 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Sep 9 23:36:55.810153 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... Sep 9 23:36:55.811853 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... Sep 9 23:36:55.812950 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Sep 9 23:36:55.813859 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... Sep 9 23:36:55.814896 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Sep 9 23:36:55.816914 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Sep 9 23:36:55.821923 systemd-journald[1117]: Time spent on flushing to /var/log/journal/e4003da0e77840e6babf7ac31027f5a4 is 11.195ms for 849 entries. Sep 9 23:36:55.821923 systemd-journald[1117]: System Journal (/var/log/journal/e4003da0e77840e6babf7ac31027f5a4) is 8M, max 195.6M, 187.6M free. Sep 9 23:36:55.849399 systemd-journald[1117]: Received client request to flush runtime journal. Sep 9 23:36:55.849610 kernel: loop0: detected capacity change from 0 to 123192 Sep 9 23:36:55.824303 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... Sep 9 23:36:55.826506 systemd[1]: Starting systemd-sysusers.service - Create System Users... Sep 9 23:36:55.832132 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Sep 9 23:36:55.833255 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. Sep 9 23:36:55.834206 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. Sep 9 23:36:55.839409 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Sep 9 23:36:55.843065 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. Sep 9 23:36:55.844452 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Sep 9 23:36:55.850496 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. 
Sep 9 23:36:55.859363 systemd[1]: Starting systemd-machine-id-commit.service - Save Transient machine-id to Disk... Sep 9 23:36:55.864336 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization... Sep 9 23:36:55.866919 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. Sep 9 23:36:55.876134 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Sep 9 23:36:55.881832 udevadm[1172]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation-early.service, lvm2-activation.service not to pull it in. Sep 9 23:36:55.891689 systemd[1]: Finished systemd-sysusers.service - Create System Users. Sep 9 23:36:55.902300 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Sep 9 23:36:55.904146 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Sep 9 23:36:55.906127 systemd[1]: Finished systemd-machine-id-commit.service - Save Transient machine-id to Disk. Sep 9 23:36:55.908110 kernel: loop1: detected capacity change from 0 to 113512 Sep 9 23:36:55.927742 systemd-tmpfiles[1178]: ACLs are not supported, ignoring. Sep 9 23:36:55.927762 systemd-tmpfiles[1178]: ACLs are not supported, ignoring. Sep 9 23:36:55.932318 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Sep 9 23:36:55.936104 kernel: loop2: detected capacity change from 0 to 211168 Sep 9 23:36:55.980120 kernel: loop3: detected capacity change from 0 to 123192 Sep 9 23:36:55.991126 kernel: loop4: detected capacity change from 0 to 113512 Sep 9 23:36:56.001136 kernel: loop5: detected capacity change from 0 to 211168 Sep 9 23:36:56.014744 (sd-merge)[1184]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes'. Sep 9 23:36:56.015199 (sd-merge)[1184]: Merged extensions into '/usr'. Sep 9 23:36:56.019435 systemd[1]: Reload requested from client PID 1159 ('systemd-sysext') (unit systemd-sysext.service)... Sep 9 23:36:56.019452 systemd[1]: Reloading... Sep 9 23:36:56.074115 zram_generator::config[1215]: No configuration found. Sep 9 23:36:56.175161 ldconfig[1154]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Sep 9 23:36:56.179396 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Sep 9 23:36:56.228946 systemd[1]: Reloading finished in 209 ms. Sep 9 23:36:56.245849 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. Sep 9 23:36:56.247126 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. Sep 9 23:36:56.259535 systemd[1]: Starting ensure-sysext.service... Sep 9 23:36:56.261258 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Sep 9 23:36:56.276940 systemd[1]: Reload requested from client PID 1246 ('systemctl') (unit ensure-sysext.service)... Sep 9 23:36:56.276962 systemd[1]: Reloading... Sep 9 23:36:56.280121 systemd-tmpfiles[1247]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Sep 9 23:36:56.280324 systemd-tmpfiles[1247]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. Sep 9 23:36:56.280957 systemd-tmpfiles[1247]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. 
Sep 9 23:36:56.281169 systemd-tmpfiles[1247]: ACLs are not supported, ignoring. Sep 9 23:36:56.281216 systemd-tmpfiles[1247]: ACLs are not supported, ignoring. Sep 9 23:36:56.284215 systemd-tmpfiles[1247]: Detected autofs mount point /boot during canonicalization of boot. Sep 9 23:36:56.284317 systemd-tmpfiles[1247]: Skipping /boot Sep 9 23:36:56.293128 systemd-tmpfiles[1247]: Detected autofs mount point /boot during canonicalization of boot. Sep 9 23:36:56.293267 systemd-tmpfiles[1247]: Skipping /boot Sep 9 23:36:56.328150 zram_generator::config[1276]: No configuration found. Sep 9 23:36:56.410443 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Sep 9 23:36:56.459881 systemd[1]: Reloading finished in 182 ms. Sep 9 23:36:56.472994 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. Sep 9 23:36:56.489425 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Sep 9 23:36:56.497004 systemd[1]: Starting audit-rules.service - Load Audit Rules... Sep 9 23:36:56.499881 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... Sep 9 23:36:56.502080 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... Sep 9 23:36:56.506406 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Sep 9 23:36:56.510334 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Sep 9 23:36:56.515408 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... Sep 9 23:36:56.520279 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Sep 9 23:36:56.521934 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Sep 9 23:36:56.524649 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Sep 9 23:36:56.531161 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Sep 9 23:36:56.532167 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Sep 9 23:36:56.532278 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Sep 9 23:36:56.534312 systemd[1]: Starting systemd-userdbd.service - User Database Manager... Sep 9 23:36:56.537080 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Sep 9 23:36:56.537817 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Sep 9 23:36:56.541549 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Sep 9 23:36:56.541705 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Sep 9 23:36:56.543373 systemd[1]: modprobe@loop.service: Deactivated successfully. Sep 9 23:36:56.543524 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Sep 9 23:36:56.550251 systemd-udevd[1317]: Using default interface naming scheme 'v255'. Sep 9 23:36:56.552731 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. 
Sep 9 23:36:56.558403 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Sep 9 23:36:56.560809 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Sep 9 23:36:56.563379 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Sep 9 23:36:56.567307 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Sep 9 23:36:56.567484 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Sep 9 23:36:56.568841 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Sep 9 23:36:56.572356 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Sep 9 23:36:56.574145 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Sep 9 23:36:56.574323 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Sep 9 23:36:56.575431 augenrules[1347]: No rules Sep 9 23:36:56.575915 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Sep 9 23:36:56.576097 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Sep 9 23:36:56.577896 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Sep 9 23:36:56.584457 systemd[1]: audit-rules.service: Deactivated successfully. Sep 9 23:36:56.584648 systemd[1]: Finished audit-rules.service - Load Audit Rules. Sep 9 23:36:56.586057 systemd[1]: Started systemd-userdbd.service - User Database Manager. Sep 9 23:36:56.587539 systemd[1]: modprobe@loop.service: Deactivated successfully. Sep 9 23:36:56.587688 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Sep 9 23:36:56.593911 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. Sep 9 23:36:56.611180 systemd[1]: Finished ensure-sysext.service. Sep 9 23:36:56.626301 systemd[1]: Starting audit-rules.service - Load Audit Rules... Sep 9 23:36:56.627194 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Sep 9 23:36:56.630123 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Sep 9 23:36:56.632264 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Sep 9 23:36:56.636259 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Sep 9 23:36:56.640287 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Sep 9 23:36:56.641342 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Sep 9 23:36:56.641388 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Sep 9 23:36:56.643297 systemd[1]: Starting systemd-networkd.service - Network Configuration... Sep 9 23:36:56.647664 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization... Sep 9 23:36:56.660284 systemd[1]: Starting systemd-update-done.service - Update is Completed... 
Sep 9 23:36:56.661776 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Sep 9 23:36:56.662387 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Sep 9 23:36:56.664243 augenrules[1386]: /sbin/augenrules: No change Sep 9 23:36:56.664459 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Sep 9 23:36:56.665573 systemd-resolved[1315]: Positive Trust Anchors: Sep 9 23:36:56.665604 systemd-resolved[1315]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Sep 9 23:36:56.665637 systemd-resolved[1315]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Sep 9 23:36:56.666626 systemd[1]: modprobe@drm.service: Deactivated successfully. Sep 9 23:36:56.666808 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Sep 9 23:36:56.667994 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Sep 9 23:36:56.668159 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Sep 9 23:36:56.669796 systemd[1]: modprobe@loop.service: Deactivated successfully. Sep 9 23:36:56.669942 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Sep 9 23:36:56.673695 systemd[1]: Finished systemd-update-done.service - Update is Completed. Sep 9 23:36:56.677937 augenrules[1414]: No rules Sep 9 23:36:56.678932 systemd[1]: audit-rules.service: Deactivated successfully. Sep 9 23:36:56.679171 systemd[1]: Finished audit-rules.service - Load Audit Rules. Sep 9 23:36:56.686977 systemd-resolved[1315]: Defaulting to hostname 'linux'. Sep 9 23:36:56.688056 systemd[1]: Condition check resulted in dev-ttyAMA0.device - /dev/ttyAMA0 being skipped. Sep 9 23:36:56.689722 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Sep 9 23:36:56.689797 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Sep 9 23:36:56.691611 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Sep 9 23:36:56.692137 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 36 scanned by (udev-worker) (1369) Sep 9 23:36:56.692856 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Sep 9 23:36:56.740650 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. Sep 9 23:36:56.747312 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... Sep 9 23:36:56.748262 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization. Sep 9 23:36:56.748372 systemd-networkd[1394]: lo: Link UP Sep 9 23:36:56.748376 systemd-networkd[1394]: lo: Gained carrier Sep 9 23:36:56.749263 systemd[1]: Reached target time-set.target - System Time Set. 
Sep 9 23:36:56.749265 systemd-networkd[1394]: Enumeration completed Sep 9 23:36:56.749690 systemd-networkd[1394]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Sep 9 23:36:56.749694 systemd-networkd[1394]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Sep 9 23:36:56.750186 systemd[1]: Started systemd-networkd.service - Network Configuration. Sep 9 23:36:56.750714 systemd-networkd[1394]: eth0: Link UP Sep 9 23:36:56.750718 systemd-networkd[1394]: eth0: Gained carrier Sep 9 23:36:56.750740 systemd-networkd[1394]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Sep 9 23:36:56.751456 systemd[1]: Reached target network.target - Network. Sep 9 23:36:56.754155 systemd[1]: Starting systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd... Sep 9 23:36:56.757277 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Sep 9 23:36:56.763953 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Sep 9 23:36:56.770374 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Sep 9 23:36:56.774226 systemd-networkd[1394]: eth0: DHCPv4 address 10.0.0.80/16, gateway 10.0.0.1 acquired from 10.0.0.1 Sep 9 23:36:56.777412 systemd[1]: Finished systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd. Sep 9 23:36:56.778043 systemd-timesyncd[1398]: Network configuration changed, trying to establish connection. Sep 9 23:36:56.778960 systemd-timesyncd[1398]: Contacted time server 10.0.0.1:123 (10.0.0.1). Sep 9 23:36:56.779004 systemd-timesyncd[1398]: Initial clock synchronization to Tue 2025-09-09 23:36:56.525527 UTC. Sep 9 23:36:56.790354 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization. Sep 9 23:36:56.805322 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes... Sep 9 23:36:56.806800 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Sep 9 23:36:56.816248 lvm[1440]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Sep 9 23:36:56.864632 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes. Sep 9 23:36:56.865906 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Sep 9 23:36:56.866852 systemd[1]: Reached target sysinit.target - System Initialization. Sep 9 23:36:56.867777 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Sep 9 23:36:56.868784 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Sep 9 23:36:56.869967 systemd[1]: Started logrotate.timer - Daily rotation of log files. Sep 9 23:36:56.870957 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Sep 9 23:36:56.871984 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Sep 9 23:36:56.872987 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Sep 9 23:36:56.873023 systemd[1]: Reached target paths.target - Path Units. Sep 9 23:36:56.873761 systemd[1]: Reached target timers.target - Timer Units. 
Sep 9 23:36:56.875609 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Sep 9 23:36:56.877756 systemd[1]: Starting docker.socket - Docker Socket for the API... Sep 9 23:36:56.880633 systemd[1]: Listening on sshd-unix-local.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_UNIX Local). Sep 9 23:36:56.881778 systemd[1]: Listening on sshd-vsock.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_VSOCK). Sep 9 23:36:56.882819 systemd[1]: Reached target ssh-access.target - SSH Access Available. Sep 9 23:36:56.885570 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Sep 9 23:36:56.887037 systemd[1]: Listening on systemd-hostnamed.socket - Hostname Service Socket. Sep 9 23:36:56.889108 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes... Sep 9 23:36:56.890445 systemd[1]: Listening on docker.socket - Docker Socket for the API. Sep 9 23:36:56.891385 systemd[1]: Reached target sockets.target - Socket Units. Sep 9 23:36:56.892122 systemd[1]: Reached target basic.target - Basic System. Sep 9 23:36:56.892803 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Sep 9 23:36:56.892831 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Sep 9 23:36:56.893706 systemd[1]: Starting containerd.service - containerd container runtime... Sep 9 23:36:56.895569 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Sep 9 23:36:56.898237 lvm[1448]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Sep 9 23:36:56.898209 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Sep 9 23:36:56.902224 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Sep 9 23:36:56.903658 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Sep 9 23:36:56.907289 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Sep 9 23:36:56.908038 jq[1451]: false Sep 9 23:36:56.910311 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Sep 9 23:36:56.914897 extend-filesystems[1452]: Found loop3 Sep 9 23:36:56.918211 extend-filesystems[1452]: Found loop4 Sep 9 23:36:56.918211 extend-filesystems[1452]: Found loop5 Sep 9 23:36:56.918211 extend-filesystems[1452]: Found vda Sep 9 23:36:56.918211 extend-filesystems[1452]: Found vda1 Sep 9 23:36:56.918211 extend-filesystems[1452]: Found vda2 Sep 9 23:36:56.918211 extend-filesystems[1452]: Found vda3 Sep 9 23:36:56.918211 extend-filesystems[1452]: Found usr Sep 9 23:36:56.918211 extend-filesystems[1452]: Found vda4 Sep 9 23:36:56.918211 extend-filesystems[1452]: Found vda6 Sep 9 23:36:56.918211 extend-filesystems[1452]: Found vda7 Sep 9 23:36:56.918211 extend-filesystems[1452]: Found vda9 Sep 9 23:36:56.918211 extend-filesystems[1452]: Checking size of /dev/vda9 Sep 9 23:36:56.915356 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Sep 9 23:36:56.929264 extend-filesystems[1452]: Resized partition /dev/vda9 Sep 9 23:36:56.924611 dbus-daemon[1450]: [system] SELinux support is enabled Sep 9 23:36:56.927642 systemd[1]: Starting systemd-logind.service - User Login Management... 
Sep 9 23:36:56.932705 extend-filesystems[1467]: resize2fs 1.47.1 (20-May-2024) Sep 9 23:36:56.935879 kernel: EXT4-fs (vda9): resizing filesystem from 553472 to 1864699 blocks Sep 9 23:36:56.930775 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Sep 9 23:36:56.931302 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Sep 9 23:36:56.933451 systemd[1]: Starting update-engine.service - Update Engine... Sep 9 23:36:56.936142 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Sep 9 23:36:56.939901 systemd[1]: Started dbus.service - D-Bus System Message Bus. Sep 9 23:36:56.947218 jq[1470]: true Sep 9 23:36:56.947577 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes. Sep 9 23:36:56.953907 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Sep 9 23:36:56.954193 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 36 scanned by (udev-worker) (1353) Sep 9 23:36:56.956136 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Sep 9 23:36:56.956431 systemd[1]: motdgen.service: Deactivated successfully. Sep 9 23:36:56.956604 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Sep 9 23:36:56.958162 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Sep 9 23:36:56.958329 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. Sep 9 23:36:56.971821 update_engine[1469]: I20250909 23:36:56.970242 1469 main.cc:92] Flatcar Update Engine starting Sep 9 23:36:56.975208 update_engine[1469]: I20250909 23:36:56.975158 1469 update_check_scheduler.cc:74] Next update check in 8m30s Sep 9 23:36:56.978112 kernel: EXT4-fs (vda9): resized filesystem to 1864699 Sep 9 23:36:56.984426 jq[1475]: true Sep 9 23:36:56.989461 (ntainerd)[1483]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Sep 9 23:36:56.998109 extend-filesystems[1467]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required Sep 9 23:36:56.998109 extend-filesystems[1467]: old_desc_blocks = 1, new_desc_blocks = 1 Sep 9 23:36:56.998109 extend-filesystems[1467]: The filesystem on /dev/vda9 is now 1864699 (4k) blocks long. Sep 9 23:36:56.997742 systemd[1]: extend-filesystems.service: Deactivated successfully. Sep 9 23:36:57.003225 extend-filesystems[1452]: Resized filesystem in /dev/vda9 Sep 9 23:36:56.999177 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Sep 9 23:36:57.006898 systemd[1]: Started update-engine.service - Update Engine. Sep 9 23:36:57.008250 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Sep 9 23:36:57.008280 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Sep 9 23:36:57.009962 systemd-logind[1460]: Watching system buttons on /dev/input/event0 (Power Button) Sep 9 23:36:57.009963 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). 
Sep 9 23:36:57.010205 systemd-logind[1460]: New seat seat0. Sep 9 23:36:57.011427 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Sep 9 23:36:57.023319 systemd[1]: Started locksmithd.service - Cluster reboot manager. Sep 9 23:36:57.024382 systemd[1]: Started systemd-logind.service - User Login Management. Sep 9 23:36:57.028667 bash[1502]: Updated "/home/core/.ssh/authorized_keys" Sep 9 23:36:57.034132 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Sep 9 23:36:57.036183 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met. Sep 9 23:36:57.055516 locksmithd[1498]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Sep 9 23:36:57.139668 containerd[1483]: time="2025-09-09T23:36:57.139585106Z" level=info msg="starting containerd" revision=9b2ad7760328148397346d10c7b2004271249db4 version=v1.7.23 Sep 9 23:36:57.164715 containerd[1483]: time="2025-09-09T23:36:57.164659431Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 Sep 9 23:36:57.167410 containerd[1483]: time="2025-09-09T23:36:57.166168980Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.104-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Sep 9 23:36:57.167410 containerd[1483]: time="2025-09-09T23:36:57.166204266Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 Sep 9 23:36:57.167410 containerd[1483]: time="2025-09-09T23:36:57.166222432Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 Sep 9 23:36:57.167410 containerd[1483]: time="2025-09-09T23:36:57.166403316Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1 Sep 9 23:36:57.167410 containerd[1483]: time="2025-09-09T23:36:57.166421598Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1 Sep 9 23:36:57.167410 containerd[1483]: time="2025-09-09T23:36:57.166475127Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1 Sep 9 23:36:57.167410 containerd[1483]: time="2025-09-09T23:36:57.166486592Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 Sep 9 23:36:57.167410 containerd[1483]: time="2025-09-09T23:36:57.166675881Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Sep 9 23:36:57.167410 containerd[1483]: time="2025-09-09T23:36:57.166691336Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 Sep 9 23:36:57.167410 containerd[1483]: time="2025-09-09T23:36:57.166704815Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." 
error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1 Sep 9 23:36:57.167410 containerd[1483]: time="2025-09-09T23:36:57.166713840Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 Sep 9 23:36:57.167665 containerd[1483]: time="2025-09-09T23:36:57.166784218Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 Sep 9 23:36:57.167665 containerd[1483]: time="2025-09-09T23:36:57.166964831Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 Sep 9 23:36:57.167665 containerd[1483]: time="2025-09-09T23:36:57.167113992Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Sep 9 23:36:57.167665 containerd[1483]: time="2025-09-09T23:36:57.167129292Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 Sep 9 23:36:57.167665 containerd[1483]: time="2025-09-09T23:36:57.167209508Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1 Sep 9 23:36:57.167665 containerd[1483]: time="2025-09-09T23:36:57.167250798Z" level=info msg="metadata content store policy set" policy=shared Sep 9 23:36:57.173984 containerd[1483]: time="2025-09-09T23:36:57.173928828Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 Sep 9 23:36:57.174150 containerd[1483]: time="2025-09-09T23:36:57.174131983Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 Sep 9 23:36:57.174243 containerd[1483]: time="2025-09-09T23:36:57.174228080Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1 Sep 9 23:36:57.174300 containerd[1483]: time="2025-09-09T23:36:57.174289240Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1 Sep 9 23:36:57.174351 containerd[1483]: time="2025-09-09T23:36:57.174340252Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 Sep 9 23:36:57.174547 containerd[1483]: time="2025-09-09T23:36:57.174525551Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 Sep 9 23:36:57.174896 containerd[1483]: time="2025-09-09T23:36:57.174865551Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 Sep 9 23:36:57.175073 containerd[1483]: time="2025-09-09T23:36:57.175021995Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2 Sep 9 23:36:57.175073 containerd[1483]: time="2025-09-09T23:36:57.175059798Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1 Sep 9 23:36:57.175134 containerd[1483]: time="2025-09-09T23:36:57.175095123Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1 Sep 9 23:36:57.175134 containerd[1483]: time="2025-09-09T23:36:57.175110926Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." 
type=io.containerd.service.v1 Sep 9 23:36:57.175134 containerd[1483]: time="2025-09-09T23:36:57.175124948Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 Sep 9 23:36:57.175204 containerd[1483]: time="2025-09-09T23:36:57.175137110Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 Sep 9 23:36:57.175204 containerd[1483]: time="2025-09-09T23:36:57.175151364Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 Sep 9 23:36:57.175204 containerd[1483]: time="2025-09-09T23:36:57.175166857Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 Sep 9 23:36:57.175204 containerd[1483]: time="2025-09-09T23:36:57.175179329Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 Sep 9 23:36:57.175204 containerd[1483]: time="2025-09-09T23:36:57.175190872Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 Sep 9 23:36:57.175204 containerd[1483]: time="2025-09-09T23:36:57.175209463Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 Sep 9 23:36:57.175303 containerd[1483]: time="2025-09-09T23:36:57.175239985Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 Sep 9 23:36:57.175303 containerd[1483]: time="2025-09-09T23:36:57.175253619Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 Sep 9 23:36:57.175303 containerd[1483]: time="2025-09-09T23:36:57.175266634Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 Sep 9 23:36:57.175303 containerd[1483]: time="2025-09-09T23:36:57.175278990Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 Sep 9 23:36:57.175303 containerd[1483]: time="2025-09-09T23:36:57.175290610Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 Sep 9 23:36:57.175303 containerd[1483]: time="2025-09-09T23:36:57.175301803Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 Sep 9 23:36:57.175399 containerd[1483]: time="2025-09-09T23:36:57.175312997Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 Sep 9 23:36:57.175399 containerd[1483]: time="2025-09-09T23:36:57.175326012Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 Sep 9 23:36:57.175399 containerd[1483]: time="2025-09-09T23:36:57.175352273Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1 Sep 9 23:36:57.175399 containerd[1483]: time="2025-09-09T23:36:57.175368037Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1 Sep 9 23:36:57.175399 containerd[1483]: time="2025-09-09T23:36:57.175379115Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 Sep 9 23:36:57.175399 containerd[1483]: time="2025-09-09T23:36:57.175390386Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." 
type=io.containerd.grpc.v1 Sep 9 23:36:57.175493 containerd[1483]: time="2025-09-09T23:36:57.175402045Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 Sep 9 23:36:57.175493 containerd[1483]: time="2025-09-09T23:36:57.175417345Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1 Sep 9 23:36:57.175493 containerd[1483]: time="2025-09-09T23:36:57.175438338Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1 Sep 9 23:36:57.175493 containerd[1483]: time="2025-09-09T23:36:57.175451895Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 Sep 9 23:36:57.175493 containerd[1483]: time="2025-09-09T23:36:57.175463708Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Sep 9 23:36:57.175651 containerd[1483]: time="2025-09-09T23:36:57.175637001Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 Sep 9 23:36:57.175676 containerd[1483]: time="2025-09-09T23:36:57.175657878Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1 Sep 9 23:36:57.175676 containerd[1483]: time="2025-09-09T23:36:57.175667329Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Sep 9 23:36:57.175718 containerd[1483]: time="2025-09-09T23:36:57.175677748Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1 Sep 9 23:36:57.175718 containerd[1483]: time="2025-09-09T23:36:57.175686773Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 Sep 9 23:36:57.175718 containerd[1483]: time="2025-09-09T23:36:57.175699245Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1 Sep 9 23:36:57.175718 containerd[1483]: time="2025-09-09T23:36:57.175708076Z" level=info msg="NRI interface is disabled by configuration." Sep 9 23:36:57.175718 containerd[1483]: time="2025-09-09T23:36:57.175717062Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." 
type=io.containerd.grpc.v1 Sep 9 23:36:57.176024 containerd[1483]: time="2025-09-09T23:36:57.175968634Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Sep 9 23:36:57.176024 containerd[1483]: time="2025-09-09T23:36:57.176027354Z" level=info msg="Connect containerd service" Sep 9 23:36:57.176186 containerd[1483]: time="2025-09-09T23:36:57.176071006Z" level=info msg="using legacy CRI server" Sep 9 23:36:57.176186 containerd[1483]: time="2025-09-09T23:36:57.176078636Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Sep 9 23:36:57.176340 containerd[1483]: time="2025-09-09T23:36:57.176323043Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Sep 9 23:36:57.177062 containerd[1483]: time="2025-09-09T23:36:57.177023068Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Sep 9 23:36:57.177746 
containerd[1483]: time="2025-09-09T23:36:57.177356019Z" level=info msg="Start subscribing containerd event" Sep 9 23:36:57.177746 containerd[1483]: time="2025-09-09T23:36:57.177410555Z" level=info msg="Start recovering state" Sep 9 23:36:57.177746 containerd[1483]: time="2025-09-09T23:36:57.177476169Z" level=info msg="Start event monitor" Sep 9 23:36:57.177746 containerd[1483]: time="2025-09-09T23:36:57.177486278Z" level=info msg="Start snapshots syncer" Sep 9 23:36:57.177746 containerd[1483]: time="2025-09-09T23:36:57.177495846Z" level=info msg="Start cni network conf syncer for default" Sep 9 23:36:57.177746 containerd[1483]: time="2025-09-09T23:36:57.177508976Z" level=info msg="Start streaming server" Sep 9 23:36:57.177746 containerd[1483]: time="2025-09-09T23:36:57.177720111Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Sep 9 23:36:57.177911 containerd[1483]: time="2025-09-09T23:36:57.177760819Z" level=info msg=serving... address=/run/containerd/containerd.sock Sep 9 23:36:57.177904 systemd[1]: Started containerd.service - containerd container runtime. Sep 9 23:36:57.179156 containerd[1483]: time="2025-09-09T23:36:57.179121400Z" level=info msg="containerd successfully booted in 0.040729s" Sep 9 23:36:57.653225 sshd_keygen[1474]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Sep 9 23:36:57.671707 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Sep 9 23:36:57.686417 systemd[1]: Starting issuegen.service - Generate /run/issue... Sep 9 23:36:57.691661 systemd[1]: issuegen.service: Deactivated successfully. Sep 9 23:36:57.691876 systemd[1]: Finished issuegen.service - Generate /run/issue. Sep 9 23:36:57.694602 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Sep 9 23:36:57.705561 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Sep 9 23:36:57.708363 systemd[1]: Started getty@tty1.service - Getty on tty1. Sep 9 23:36:57.710498 systemd[1]: Started serial-getty@ttyAMA0.service - Serial Getty on ttyAMA0. Sep 9 23:36:57.711984 systemd[1]: Reached target getty.target - Login Prompts. Sep 9 23:36:57.828754 systemd-networkd[1394]: eth0: Gained IPv6LL Sep 9 23:36:57.834669 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Sep 9 23:36:57.839887 systemd[1]: Reached target network-online.target - Network is Online. Sep 9 23:36:57.856010 systemd[1]: Starting coreos-metadata.service - QEMU metadata agent... Sep 9 23:36:57.861252 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 9 23:36:57.863393 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Sep 9 23:36:57.881649 systemd[1]: coreos-metadata.service: Deactivated successfully. Sep 9 23:36:57.881993 systemd[1]: Finished coreos-metadata.service - QEMU metadata agent. Sep 9 23:36:57.884551 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Sep 9 23:36:57.887458 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Sep 9 23:36:58.432533 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Sep 9 23:36:58.436323 systemd[1]: Reached target multi-user.target - Multi-User System. Sep 9 23:36:58.437207 (kubelet)[1559]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Sep 9 23:36:58.439482 systemd[1]: Startup finished in 522ms (kernel) + 4.497s (initrd) + 3.302s (userspace) = 8.322s. 
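[editor's note] containerd reports above that CNI setup was skipped because no network config was found in /etc/cni/net.d. The following is a minimal, hypothetical Go sketch of that kind of check, not containerd's own code; the directory and file extensions are taken from the log message and the conventions commonly used for CNI conf files.

```go
package main

import (
	"fmt"
	"path/filepath"
)

func main() {
	// Look for CNI network configs the way the log describes: if nothing is
	// present under /etc/cni/net.d, report the "not initialized" condition.
	dir := "/etc/cni/net.d"
	var found []string
	for _, pattern := range []string{"*.conf", "*.conflist", "*.json"} {
		matches, err := filepath.Glob(filepath.Join(dir, pattern))
		if err != nil {
			continue // Glob only errors on a malformed pattern
		}
		found = append(found, matches...)
	}
	if len(found) == 0 {
		fmt.Printf("no network config found in %s: cni plugin not initialized\n", dir)
		return
	}
	fmt.Println("CNI configs:", found)
}
```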
Sep 9 23:36:58.867184 kubelet[1559]: E0909 23:36:58.867133 1559 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Sep 9 23:36:58.869782 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Sep 9 23:36:58.869920 systemd[1]: kubelet.service: Failed with result 'exit-code'. Sep 9 23:36:58.870216 systemd[1]: kubelet.service: Consumed 767ms CPU time, 261.8M memory peak. Sep 9 23:37:03.007521 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Sep 9 23:37:03.008682 systemd[1]: Started sshd@0-10.0.0.80:22-10.0.0.1:49082.service - OpenSSH per-connection server daemon (10.0.0.1:49082). Sep 9 23:37:03.065594 sshd[1572]: Accepted publickey for core from 10.0.0.1 port 49082 ssh2: RSA SHA256:JTwbHKgnxC/1WG4HHOKqnbtsiVhIIcUc9S0pdkPDSJk Sep 9 23:37:03.067804 sshd-session[1572]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 9 23:37:03.079641 systemd-logind[1460]: New session 1 of user core. Sep 9 23:37:03.080684 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Sep 9 23:37:03.095374 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Sep 9 23:37:03.105140 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Sep 9 23:37:03.107151 systemd[1]: Starting user@500.service - User Manager for UID 500... Sep 9 23:37:03.113405 (systemd)[1576]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Sep 9 23:37:03.115747 systemd-logind[1460]: New session c1 of user core. Sep 9 23:37:03.215066 systemd[1576]: Queued start job for default target default.target. Sep 9 23:37:03.226177 systemd[1576]: Created slice app.slice - User Application Slice. Sep 9 23:37:03.226207 systemd[1576]: Reached target paths.target - Paths. Sep 9 23:37:03.226240 systemd[1576]: Reached target timers.target - Timers. Sep 9 23:37:03.227521 systemd[1576]: Starting dbus.socket - D-Bus User Message Bus Socket... Sep 9 23:37:03.237845 systemd[1576]: Listening on dbus.socket - D-Bus User Message Bus Socket. Sep 9 23:37:03.237913 systemd[1576]: Reached target sockets.target - Sockets. Sep 9 23:37:03.237950 systemd[1576]: Reached target basic.target - Basic System. Sep 9 23:37:03.237977 systemd[1576]: Reached target default.target - Main User Target. Sep 9 23:37:03.238003 systemd[1576]: Startup finished in 116ms. Sep 9 23:37:03.238203 systemd[1]: Started user@500.service - User Manager for UID 500. Sep 9 23:37:03.239565 systemd[1]: Started session-1.scope - Session 1 of User core. Sep 9 23:37:03.299276 systemd[1]: Started sshd@1-10.0.0.80:22-10.0.0.1:49088.service - OpenSSH per-connection server daemon (10.0.0.1:49088). Sep 9 23:37:03.341009 sshd[1587]: Accepted publickey for core from 10.0.0.1 port 49088 ssh2: RSA SHA256:JTwbHKgnxC/1WG4HHOKqnbtsiVhIIcUc9S0pdkPDSJk Sep 9 23:37:03.342410 sshd-session[1587]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 9 23:37:03.346163 systemd-logind[1460]: New session 2 of user core. Sep 9 23:37:03.355245 systemd[1]: Started session-2.scope - Session 2 of User core. 
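[editor's note] The kubelet exit logged at the start of this boot attempt is caused by a missing /var/lib/kubelet/config.yaml (typically written later, e.g. by kubeadm). A minimal stdlib-only sketch, not kubelet code, that reproduces the same "no such file or directory" failure mode:

```go
package main

import (
	"fmt"
	"os"
)

func main() {
	// Mirror the check that fails in the log: the kubelet cannot start
	// without its config file at this path.
	const path = "/var/lib/kubelet/config.yaml"
	if _, err := os.Stat(path); err != nil {
		fmt.Printf("failed to load kubelet config file, path: %s, error: %v\n", path, err)
		os.Exit(1)
	}
	fmt.Println("kubelet config present at", path)
}
```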
Sep 9 23:37:03.405733 sshd[1589]: Connection closed by 10.0.0.1 port 49088 Sep 9 23:37:03.406204 sshd-session[1587]: pam_unix(sshd:session): session closed for user core Sep 9 23:37:03.415010 systemd[1]: sshd@1-10.0.0.80:22-10.0.0.1:49088.service: Deactivated successfully. Sep 9 23:37:03.416392 systemd[1]: session-2.scope: Deactivated successfully. Sep 9 23:37:03.417037 systemd-logind[1460]: Session 2 logged out. Waiting for processes to exit. Sep 9 23:37:03.418746 systemd[1]: Started sshd@2-10.0.0.80:22-10.0.0.1:49096.service - OpenSSH per-connection server daemon (10.0.0.1:49096). Sep 9 23:37:03.420430 systemd-logind[1460]: Removed session 2. Sep 9 23:37:03.457369 sshd[1594]: Accepted publickey for core from 10.0.0.1 port 49096 ssh2: RSA SHA256:JTwbHKgnxC/1WG4HHOKqnbtsiVhIIcUc9S0pdkPDSJk Sep 9 23:37:03.458597 sshd-session[1594]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 9 23:37:03.462775 systemd-logind[1460]: New session 3 of user core. Sep 9 23:37:03.477279 systemd[1]: Started session-3.scope - Session 3 of User core. Sep 9 23:37:03.525701 sshd[1597]: Connection closed by 10.0.0.1 port 49096 Sep 9 23:37:03.525569 sshd-session[1594]: pam_unix(sshd:session): session closed for user core Sep 9 23:37:03.535768 systemd[1]: sshd@2-10.0.0.80:22-10.0.0.1:49096.service: Deactivated successfully. Sep 9 23:37:03.538135 systemd[1]: session-3.scope: Deactivated successfully. Sep 9 23:37:03.538859 systemd-logind[1460]: Session 3 logged out. Waiting for processes to exit. Sep 9 23:37:03.549374 systemd[1]: Started sshd@3-10.0.0.80:22-10.0.0.1:49102.service - OpenSSH per-connection server daemon (10.0.0.1:49102). Sep 9 23:37:03.551490 systemd-logind[1460]: Removed session 3. Sep 9 23:37:03.583946 sshd[1602]: Accepted publickey for core from 10.0.0.1 port 49102 ssh2: RSA SHA256:JTwbHKgnxC/1WG4HHOKqnbtsiVhIIcUc9S0pdkPDSJk Sep 9 23:37:03.585064 sshd-session[1602]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 9 23:37:03.589600 systemd-logind[1460]: New session 4 of user core. Sep 9 23:37:03.597240 systemd[1]: Started session-4.scope - Session 4 of User core. Sep 9 23:37:03.647867 sshd[1605]: Connection closed by 10.0.0.1 port 49102 Sep 9 23:37:03.648207 sshd-session[1602]: pam_unix(sshd:session): session closed for user core Sep 9 23:37:03.657520 systemd[1]: sshd@3-10.0.0.80:22-10.0.0.1:49102.service: Deactivated successfully. Sep 9 23:37:03.660581 systemd[1]: session-4.scope: Deactivated successfully. Sep 9 23:37:03.661370 systemd-logind[1460]: Session 4 logged out. Waiting for processes to exit. Sep 9 23:37:03.670435 systemd[1]: Started sshd@4-10.0.0.80:22-10.0.0.1:49114.service - OpenSSH per-connection server daemon (10.0.0.1:49114). Sep 9 23:37:03.671473 systemd-logind[1460]: Removed session 4. Sep 9 23:37:03.705714 sshd[1610]: Accepted publickey for core from 10.0.0.1 port 49114 ssh2: RSA SHA256:JTwbHKgnxC/1WG4HHOKqnbtsiVhIIcUc9S0pdkPDSJk Sep 9 23:37:03.706838 sshd-session[1610]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 9 23:37:03.710980 systemd-logind[1460]: New session 5 of user core. Sep 9 23:37:03.717230 systemd[1]: Started session-5.scope - Session 5 of User core. 
Sep 9 23:37:03.775815 sudo[1615]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Sep 9 23:37:03.776121 sudo[1615]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Sep 9 23:37:03.794971 sudo[1615]: pam_unix(sudo:session): session closed for user root Sep 9 23:37:03.796532 sshd[1614]: Connection closed by 10.0.0.1 port 49114 Sep 9 23:37:03.797035 sshd-session[1610]: pam_unix(sshd:session): session closed for user core Sep 9 23:37:03.816421 systemd[1]: Started sshd@5-10.0.0.80:22-10.0.0.1:49118.service - OpenSSH per-connection server daemon (10.0.0.1:49118). Sep 9 23:37:03.816818 systemd[1]: sshd@4-10.0.0.80:22-10.0.0.1:49114.service: Deactivated successfully. Sep 9 23:37:03.819349 systemd[1]: session-5.scope: Deactivated successfully. Sep 9 23:37:03.821134 systemd-logind[1460]: Session 5 logged out. Waiting for processes to exit. Sep 9 23:37:03.822294 systemd-logind[1460]: Removed session 5. Sep 9 23:37:03.855574 sshd[1618]: Accepted publickey for core from 10.0.0.1 port 49118 ssh2: RSA SHA256:JTwbHKgnxC/1WG4HHOKqnbtsiVhIIcUc9S0pdkPDSJk Sep 9 23:37:03.856868 sshd-session[1618]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 9 23:37:03.863137 systemd-logind[1460]: New session 6 of user core. Sep 9 23:37:03.876271 systemd[1]: Started session-6.scope - Session 6 of User core. Sep 9 23:37:03.927396 sudo[1625]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Sep 9 23:37:03.927673 sudo[1625]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Sep 9 23:37:03.931261 sudo[1625]: pam_unix(sudo:session): session closed for user root Sep 9 23:37:03.936772 sudo[1624]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules Sep 9 23:37:03.937039 sudo[1624]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Sep 9 23:37:03.951404 systemd[1]: Starting audit-rules.service - Load Audit Rules... Sep 9 23:37:03.975926 augenrules[1647]: No rules Sep 9 23:37:03.977169 systemd[1]: audit-rules.service: Deactivated successfully. Sep 9 23:37:03.977377 systemd[1]: Finished audit-rules.service - Load Audit Rules. Sep 9 23:37:03.980817 sudo[1624]: pam_unix(sudo:session): session closed for user root Sep 9 23:37:03.983153 sshd[1623]: Connection closed by 10.0.0.1 port 49118 Sep 9 23:37:03.983587 sshd-session[1618]: pam_unix(sshd:session): session closed for user core Sep 9 23:37:03.993105 systemd[1]: sshd@5-10.0.0.80:22-10.0.0.1:49118.service: Deactivated successfully. Sep 9 23:37:03.994454 systemd[1]: session-6.scope: Deactivated successfully. Sep 9 23:37:03.995000 systemd-logind[1460]: Session 6 logged out. Waiting for processes to exit. Sep 9 23:37:04.007521 systemd[1]: Started sshd@6-10.0.0.80:22-10.0.0.1:49124.service - OpenSSH per-connection server daemon (10.0.0.1:49124). Sep 9 23:37:04.008620 systemd-logind[1460]: Removed session 6. Sep 9 23:37:04.042469 sshd[1655]: Accepted publickey for core from 10.0.0.1 port 49124 ssh2: RSA SHA256:JTwbHKgnxC/1WG4HHOKqnbtsiVhIIcUc9S0pdkPDSJk Sep 9 23:37:04.043733 sshd-session[1655]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 9 23:37:04.047502 systemd-logind[1460]: New session 7 of user core. Sep 9 23:37:04.062283 systemd[1]: Started session-7.scope - Session 7 of User core. 
Sep 9 23:37:04.112302 sudo[1659]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Sep 9 23:37:04.112569 sudo[1659]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Sep 9 23:37:04.130374 systemd[1]: Starting coreos-metadata.service - QEMU metadata agent... Sep 9 23:37:04.144245 systemd[1]: coreos-metadata.service: Deactivated successfully. Sep 9 23:37:04.145143 systemd[1]: Finished coreos-metadata.service - QEMU metadata agent. Sep 9 23:37:04.528331 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Sep 9 23:37:04.528599 systemd[1]: kubelet.service: Consumed 767ms CPU time, 261.8M memory peak. Sep 9 23:37:04.545335 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 9 23:37:04.568207 systemd[1]: Reload requested from client PID 1703 ('systemctl') (unit session-7.scope)... Sep 9 23:37:04.568326 systemd[1]: Reloading... Sep 9 23:37:04.643739 zram_generator::config[1746]: No configuration found. Sep 9 23:37:04.765417 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Sep 9 23:37:04.838296 systemd[1]: Reloading finished in 269 ms. Sep 9 23:37:04.895204 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Sep 9 23:37:04.897477 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Sep 9 23:37:04.899052 systemd[1]: kubelet.service: Deactivated successfully. Sep 9 23:37:04.899281 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Sep 9 23:37:04.899321 systemd[1]: kubelet.service: Consumed 86ms CPU time, 95M memory peak. Sep 9 23:37:04.901117 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 9 23:37:04.999439 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Sep 9 23:37:05.003106 (kubelet)[1793]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Sep 9 23:37:05.034356 kubelet[1793]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Sep 9 23:37:05.034356 kubelet[1793]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Sep 9 23:37:05.034356 kubelet[1793]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Sep 9 23:37:05.034658 kubelet[1793]: I0909 23:37:05.034394 1793 server.go:212] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Sep 9 23:37:05.528754 kubelet[1793]: I0909 23:37:05.528702 1793 server.go:530] "Kubelet version" kubeletVersion="v1.33.0" Sep 9 23:37:05.528754 kubelet[1793]: I0909 23:37:05.528742 1793 server.go:532] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Sep 9 23:37:05.530670 kubelet[1793]: I0909 23:37:05.529242 1793 server.go:956] "Client rotation is on, will bootstrap in background" Sep 9 23:37:05.550695 kubelet[1793]: I0909 23:37:05.550548 1793 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Sep 9 23:37:05.558107 kubelet[1793]: E0909 23:37:05.558048 1793 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Sep 9 23:37:05.558107 kubelet[1793]: I0909 23:37:05.558110 1793 server.go:1423] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." Sep 9 23:37:05.561918 kubelet[1793]: I0909 23:37:05.561885 1793 server.go:782] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" Sep 9 23:37:05.562261 kubelet[1793]: I0909 23:37:05.562225 1793 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Sep 9 23:37:05.562400 kubelet[1793]: I0909 23:37:05.562254 1793 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"10.0.0.80","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Sep 9 23:37:05.562497 kubelet[1793]: I0909 23:37:05.562473 1793 topology_manager.go:138] "Creating topology manager with none policy" Sep 9 23:37:05.562497 kubelet[1793]: I0909 23:37:05.562483 1793 container_manager_linux.go:303] "Creating device plugin manager" Sep 9 23:37:05.562970 kubelet[1793]: I0909 
23:37:05.562687 1793 state_mem.go:36] "Initialized new in-memory state store" Sep 9 23:37:05.565366 kubelet[1793]: I0909 23:37:05.565340 1793 kubelet.go:480] "Attempting to sync node with API server" Sep 9 23:37:05.565366 kubelet[1793]: I0909 23:37:05.565366 1793 kubelet.go:375] "Adding static pod path" path="/etc/kubernetes/manifests" Sep 9 23:37:05.565444 kubelet[1793]: I0909 23:37:05.565388 1793 kubelet.go:386] "Adding apiserver pod source" Sep 9 23:37:05.566402 kubelet[1793]: I0909 23:37:05.566382 1793 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Sep 9 23:37:05.566466 kubelet[1793]: E0909 23:37:05.566392 1793 file.go:98] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Sep 9 23:37:05.566914 kubelet[1793]: E0909 23:37:05.566893 1793 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Sep 9 23:37:05.567435 kubelet[1793]: I0909 23:37:05.567413 1793 kuberuntime_manager.go:279] "Container runtime initialized" containerRuntime="containerd" version="v1.7.23" apiVersion="v1" Sep 9 23:37:05.568579 kubelet[1793]: I0909 23:37:05.568239 1793 kubelet.go:935] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled" Sep 9 23:37:05.568579 kubelet[1793]: W0909 23:37:05.568375 1793 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Sep 9 23:37:05.571362 kubelet[1793]: I0909 23:37:05.571340 1793 watchdog_linux.go:99] "Systemd watchdog is not enabled" Sep 9 23:37:05.571415 kubelet[1793]: I0909 23:37:05.571391 1793 server.go:1289] "Started kubelet" Sep 9 23:37:05.572308 kubelet[1793]: I0909 23:37:05.571503 1793 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Sep 9 23:37:05.572308 kubelet[1793]: I0909 23:37:05.571809 1793 server.go:255] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Sep 9 23:37:05.572308 kubelet[1793]: I0909 23:37:05.571854 1793 server.go:180] "Starting to listen" address="0.0.0.0" port=10250 Sep 9 23:37:05.575893 kubelet[1793]: I0909 23:37:05.574388 1793 server.go:317] "Adding debug handlers to kubelet server" Sep 9 23:37:05.575893 kubelet[1793]: I0909 23:37:05.574724 1793 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Sep 9 23:37:05.579116 kubelet[1793]: I0909 23:37:05.576467 1793 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Sep 9 23:37:05.579116 kubelet[1793]: E0909 23:37:05.576708 1793 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"10.0.0.80\" not found" Sep 9 23:37:05.579116 kubelet[1793]: I0909 23:37:05.576732 1793 volume_manager.go:297] "Starting Kubelet Volume Manager" Sep 9 23:37:05.579116 kubelet[1793]: I0909 23:37:05.576942 1793 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Sep 9 23:37:05.579116 kubelet[1793]: I0909 23:37:05.576996 1793 reconciler.go:26] "Reconciler: start to sync state" Sep 9 23:37:05.579886 kubelet[1793]: E0909 23:37:05.579841 1793 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: nodes \"10.0.0.80\" is forbidden: User \"system:anonymous\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" 
reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Sep 9 23:37:05.579967 kubelet[1793]: E0909 23:37:05.579950 1793 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:anonymous\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Sep 9 23:37:05.580997 kubelet[1793]: I0909 23:37:05.580575 1793 factory.go:223] Registration of the systemd container factory successfully Sep 9 23:37:05.580997 kubelet[1793]: I0909 23:37:05.580686 1793 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Sep 9 23:37:05.583637 kubelet[1793]: E0909 23:37:05.583603 1793 kubelet.go:1600] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Sep 9 23:37:05.584099 kubelet[1793]: I0909 23:37:05.584061 1793 factory.go:223] Registration of the containerd container factory successfully Sep 9 23:37:05.586437 kubelet[1793]: E0909 23:37:05.586394 1793 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:anonymous\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Sep 9 23:37:05.586505 kubelet[1793]: E0909 23:37:05.586445 1793 controller.go:145] "Failed to ensure lease exists, will retry" err="leases.coordination.k8s.io \"10.0.0.80\" is forbidden: User \"system:anonymous\" cannot get resource \"leases\" in API group \"coordination.k8s.io\" in the namespace \"kube-node-lease\"" interval="200ms" Sep 9 23:37:05.587773 kubelet[1793]: E0909 23:37:05.586585 1793 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{10.0.0.80.1863c17dc7c64d02 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:10.0.0.80,UID:10.0.0.80,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:10.0.0.80,},FirstTimestamp:2025-09-09 23:37:05.571360002 +0000 UTC m=+0.565009350,LastTimestamp:2025-09-09 23:37:05.571360002 +0000 UTC m=+0.565009350,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:10.0.0.80,}" Sep 9 23:37:05.590476 kubelet[1793]: E0909 23:37:05.590375 1793 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{10.0.0.80.1863c17dc880dc1a default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:10.0.0.80,UID:10.0.0.80,APIVersion:,ResourceVersion:,FieldPath:,},Reason:InvalidDiskCapacity,Message:invalid capacity 0 on image filesystem,Source:EventSource{Component:kubelet,Host:10.0.0.80,},FirstTimestamp:2025-09-09 23:37:05.58358633 +0000 UTC m=+0.577235678,LastTimestamp:2025-09-09 23:37:05.58358633 +0000 UTC 
m=+0.577235678,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:10.0.0.80,}" Sep 9 23:37:05.594340 kubelet[1793]: I0909 23:37:05.594320 1793 cpu_manager.go:221] "Starting CPU manager" policy="none" Sep 9 23:37:05.594423 kubelet[1793]: I0909 23:37:05.594411 1793 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Sep 9 23:37:05.594472 kubelet[1793]: I0909 23:37:05.594465 1793 state_mem.go:36] "Initialized new in-memory state store" Sep 9 23:37:05.671289 kubelet[1793]: I0909 23:37:05.671254 1793 policy_none.go:49] "None policy: Start" Sep 9 23:37:05.671432 kubelet[1793]: I0909 23:37:05.671420 1793 memory_manager.go:186] "Starting memorymanager" policy="None" Sep 9 23:37:05.671485 kubelet[1793]: I0909 23:37:05.671478 1793 state_mem.go:35] "Initializing new in-memory state store" Sep 9 23:37:05.676908 kubelet[1793]: E0909 23:37:05.676869 1793 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"10.0.0.80\" not found" Sep 9 23:37:05.677045 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Sep 9 23:37:05.688296 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Sep 9 23:37:05.691601 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. Sep 9 23:37:05.700252 kubelet[1793]: E0909 23:37:05.700019 1793 manager.go:517] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" Sep 9 23:37:05.700252 kubelet[1793]: I0909 23:37:05.700247 1793 eviction_manager.go:189] "Eviction manager: starting control loop" Sep 9 23:37:05.700365 kubelet[1793]: I0909 23:37:05.700259 1793 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Sep 9 23:37:05.700525 kubelet[1793]: I0909 23:37:05.700503 1793 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Sep 9 23:37:05.702234 kubelet[1793]: I0909 23:37:05.702202 1793 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv4" Sep 9 23:37:05.702345 kubelet[1793]: E0909 23:37:05.702328 1793 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" Sep 9 23:37:05.702376 kubelet[1793]: E0909 23:37:05.702370 1793 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"10.0.0.80\" not found" Sep 9 23:37:05.704026 kubelet[1793]: I0909 23:37:05.703331 1793 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv6" Sep 9 23:37:05.704026 kubelet[1793]: I0909 23:37:05.703360 1793 status_manager.go:230] "Starting to sync pod status with apiserver" Sep 9 23:37:05.704026 kubelet[1793]: I0909 23:37:05.703378 1793 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." 
Sep 9 23:37:05.704026 kubelet[1793]: I0909 23:37:05.703385 1793 kubelet.go:2436] "Starting kubelet main sync loop" Sep 9 23:37:05.704026 kubelet[1793]: E0909 23:37:05.703422 1793 kubelet.go:2460] "Skipping pod synchronization" err="PLEG is not healthy: pleg has yet to be successful" Sep 9 23:37:05.792240 kubelet[1793]: E0909 23:37:05.792123 1793 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"10.0.0.80\" not found" node="10.0.0.80" Sep 9 23:37:05.801168 kubelet[1793]: I0909 23:37:05.801143 1793 kubelet_node_status.go:75] "Attempting to register node" node="10.0.0.80" Sep 9 23:37:05.811150 kubelet[1793]: I0909 23:37:05.811112 1793 kubelet_node_status.go:78] "Successfully registered node" node="10.0.0.80" Sep 9 23:37:05.811150 kubelet[1793]: E0909 23:37:05.811144 1793 kubelet_node_status.go:548] "Error updating node status, will retry" err="error getting node \"10.0.0.80\": node \"10.0.0.80\" not found" Sep 9 23:37:05.885804 kubelet[1793]: E0909 23:37:05.885773 1793 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"10.0.0.80\" not found" Sep 9 23:37:05.986022 kubelet[1793]: E0909 23:37:05.985987 1793 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"10.0.0.80\" not found" Sep 9 23:37:06.027587 sudo[1659]: pam_unix(sudo:session): session closed for user root Sep 9 23:37:06.028710 sshd[1658]: Connection closed by 10.0.0.1 port 49124 Sep 9 23:37:06.029033 sshd-session[1655]: pam_unix(sshd:session): session closed for user core Sep 9 23:37:06.031996 systemd[1]: sshd@6-10.0.0.80:22-10.0.0.1:49124.service: Deactivated successfully. Sep 9 23:37:06.033734 systemd[1]: session-7.scope: Deactivated successfully. Sep 9 23:37:06.035148 systemd[1]: session-7.scope: Consumed 402ms CPU time, 72.7M memory peak. Sep 9 23:37:06.036201 systemd-logind[1460]: Session 7 logged out. Waiting for processes to exit. Sep 9 23:37:06.037360 systemd-logind[1460]: Removed session 7. Sep 9 23:37:06.087197 kubelet[1793]: E0909 23:37:06.087071 1793 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"10.0.0.80\" not found" Sep 9 23:37:06.187586 kubelet[1793]: E0909 23:37:06.187535 1793 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"10.0.0.80\" not found" Sep 9 23:37:06.288008 kubelet[1793]: E0909 23:37:06.287964 1793 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"10.0.0.80\" not found" Sep 9 23:37:06.388572 kubelet[1793]: E0909 23:37:06.388536 1793 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"10.0.0.80\" not found" Sep 9 23:37:06.489941 kubelet[1793]: I0909 23:37:06.489865 1793 kuberuntime_manager.go:1746] "Updating runtime config through cri with podcidr" CIDR="192.168.1.0/24" Sep 9 23:37:06.490322 containerd[1483]: time="2025-09-09T23:37:06.490234640Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." 
Sep 9 23:37:06.490733 kubelet[1793]: I0909 23:37:06.490711 1793 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.1.0/24" Sep 9 23:37:06.531482 kubelet[1793]: I0909 23:37:06.531439 1793 transport.go:147] "Certificate rotation detected, shutting down client connections to start using new credentials" Sep 9 23:37:06.531761 kubelet[1793]: I0909 23:37:06.531654 1793 reflector.go:556] "Warning: watch ended with error" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" err="very short watch: k8s.io/client-go/informers/factory.go:160: Unexpected watch close - watch lasted less than a second and no items received" Sep 9 23:37:06.531761 kubelet[1793]: I0909 23:37:06.531713 1793 reflector.go:556] "Warning: watch ended with error" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" err="very short watch: k8s.io/client-go/informers/factory.go:160: Unexpected watch close - watch lasted less than a second and no items received" Sep 9 23:37:06.567009 kubelet[1793]: I0909 23:37:06.566966 1793 apiserver.go:52] "Watching apiserver" Sep 9 23:37:06.567334 kubelet[1793]: E0909 23:37:06.567290 1793 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Sep 9 23:37:06.592665 kubelet[1793]: E0909 23:37:06.592525 1793 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-nhggd" podUID="efe790cf-787d-4a22-b1c4-a59cfa68b55a" Sep 9 23:37:06.601186 systemd[1]: Created slice kubepods-besteffort-pod28f298ba_9efa_4084_9943_ac175300896f.slice - libcontainer container kubepods-besteffort-pod28f298ba_9efa_4084_9943_ac175300896f.slice. Sep 9 23:37:06.623401 systemd[1]: Created slice kubepods-besteffort-podb39210ed_5916_417e_b205_d6abb56d3e84.slice - libcontainer container kubepods-besteffort-podb39210ed_5916_417e_b205_d6abb56d3e84.slice. 
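[editor's note] The runtime-config update above pushes the pod CIDR 192.168.1.0/24 down to the container runtime. A small illustrative sketch (stdlib only, not kubelet code) of validating such a CIDR before it is applied:

```go
package main

import (
	"fmt"
	"net"
)

func main() {
	// The log shows newPodCIDR="192.168.1.0/24"; ParseCIDR rejects anything
	// that is not a well-formed prefix.
	podCIDR := "192.168.1.0/24"
	ip, ipnet, err := net.ParseCIDR(podCIDR)
	if err != nil {
		fmt.Println("invalid pod CIDR:", err)
		return
	}
	fmt.Printf("pod CIDR %s parsed: base IP %s, mask %s\n", podCIDR, ip, ipnet.Mask)
}
```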
Sep 9 23:37:06.678356 kubelet[1793]: I0909 23:37:06.678246 1793 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Sep 9 23:37:06.682476 kubelet[1793]: I0909 23:37:06.682420 1793 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mb2j8\" (UniqueName: \"kubernetes.io/projected/b39210ed-5916-417e-b205-d6abb56d3e84-kube-api-access-mb2j8\") pod \"calico-node-8tdsn\" (UID: \"b39210ed-5916-417e-b205-d6abb56d3e84\") " pod="calico-system/calico-node-8tdsn" Sep 9 23:37:06.682476 kubelet[1793]: I0909 23:37:06.682459 1793 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/efe790cf-787d-4a22-b1c4-a59cfa68b55a-kubelet-dir\") pod \"csi-node-driver-nhggd\" (UID: \"efe790cf-787d-4a22-b1c4-a59cfa68b55a\") " pod="calico-system/csi-node-driver-nhggd" Sep 9 23:37:06.682476 kubelet[1793]: I0909 23:37:06.682484 1793 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/efe790cf-787d-4a22-b1c4-a59cfa68b55a-registration-dir\") pod \"csi-node-driver-nhggd\" (UID: \"efe790cf-787d-4a22-b1c4-a59cfa68b55a\") " pod="calico-system/csi-node-driver-nhggd" Sep 9 23:37:06.682645 kubelet[1793]: I0909 23:37:06.682507 1793 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/efe790cf-787d-4a22-b1c4-a59cfa68b55a-varrun\") pod \"csi-node-driver-nhggd\" (UID: \"efe790cf-787d-4a22-b1c4-a59cfa68b55a\") " pod="calico-system/csi-node-driver-nhggd" Sep 9 23:37:06.682645 kubelet[1793]: I0909 23:37:06.682524 1793 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wsvd2\" (UniqueName: \"kubernetes.io/projected/efe790cf-787d-4a22-b1c4-a59cfa68b55a-kube-api-access-wsvd2\") pod \"csi-node-driver-nhggd\" (UID: \"efe790cf-787d-4a22-b1c4-a59cfa68b55a\") " pod="calico-system/csi-node-driver-nhggd" Sep 9 23:37:06.682645 kubelet[1793]: I0909 23:37:06.682539 1793 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9692v\" (UniqueName: \"kubernetes.io/projected/28f298ba-9efa-4084-9943-ac175300896f-kube-api-access-9692v\") pod \"kube-proxy-9kk74\" (UID: \"28f298ba-9efa-4084-9943-ac175300896f\") " pod="kube-system/kube-proxy-9kk74" Sep 9 23:37:06.682645 kubelet[1793]: I0909 23:37:06.682554 1793 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/b39210ed-5916-417e-b205-d6abb56d3e84-cni-bin-dir\") pod \"calico-node-8tdsn\" (UID: \"b39210ed-5916-417e-b205-d6abb56d3e84\") " pod="calico-system/calico-node-8tdsn" Sep 9 23:37:06.682645 kubelet[1793]: I0909 23:37:06.682569 1793 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/b39210ed-5916-417e-b205-d6abb56d3e84-cni-net-dir\") pod \"calico-node-8tdsn\" (UID: \"b39210ed-5916-417e-b205-d6abb56d3e84\") " pod="calico-system/calico-node-8tdsn" Sep 9 23:37:06.682748 kubelet[1793]: I0909 23:37:06.682588 1793 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/b39210ed-5916-417e-b205-d6abb56d3e84-node-certs\") pod 
\"calico-node-8tdsn\" (UID: \"b39210ed-5916-417e-b205-d6abb56d3e84\") " pod="calico-system/calico-node-8tdsn" Sep 9 23:37:06.682748 kubelet[1793]: I0909 23:37:06.682604 1793 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/efe790cf-787d-4a22-b1c4-a59cfa68b55a-socket-dir\") pod \"csi-node-driver-nhggd\" (UID: \"efe790cf-787d-4a22-b1c4-a59cfa68b55a\") " pod="calico-system/csi-node-driver-nhggd" Sep 9 23:37:06.682748 kubelet[1793]: I0909 23:37:06.682618 1793 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/28f298ba-9efa-4084-9943-ac175300896f-xtables-lock\") pod \"kube-proxy-9kk74\" (UID: \"28f298ba-9efa-4084-9943-ac175300896f\") " pod="kube-system/kube-proxy-9kk74" Sep 9 23:37:06.682748 kubelet[1793]: I0909 23:37:06.682631 1793 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/28f298ba-9efa-4084-9943-ac175300896f-lib-modules\") pod \"kube-proxy-9kk74\" (UID: \"28f298ba-9efa-4084-9943-ac175300896f\") " pod="kube-system/kube-proxy-9kk74" Sep 9 23:37:06.682748 kubelet[1793]: I0909 23:37:06.682644 1793 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/b39210ed-5916-417e-b205-d6abb56d3e84-flexvol-driver-host\") pod \"calico-node-8tdsn\" (UID: \"b39210ed-5916-417e-b205-d6abb56d3e84\") " pod="calico-system/calico-node-8tdsn" Sep 9 23:37:06.682849 kubelet[1793]: I0909 23:37:06.682671 1793 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/b39210ed-5916-417e-b205-d6abb56d3e84-lib-modules\") pod \"calico-node-8tdsn\" (UID: \"b39210ed-5916-417e-b205-d6abb56d3e84\") " pod="calico-system/calico-node-8tdsn" Sep 9 23:37:06.682849 kubelet[1793]: I0909 23:37:06.682685 1793 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/b39210ed-5916-417e-b205-d6abb56d3e84-tigera-ca-bundle\") pod \"calico-node-8tdsn\" (UID: \"b39210ed-5916-417e-b205-d6abb56d3e84\") " pod="calico-system/calico-node-8tdsn" Sep 9 23:37:06.682849 kubelet[1793]: I0909 23:37:06.682702 1793 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/b39210ed-5916-417e-b205-d6abb56d3e84-var-lib-calico\") pod \"calico-node-8tdsn\" (UID: \"b39210ed-5916-417e-b205-d6abb56d3e84\") " pod="calico-system/calico-node-8tdsn" Sep 9 23:37:06.682849 kubelet[1793]: I0909 23:37:06.682720 1793 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/b39210ed-5916-417e-b205-d6abb56d3e84-xtables-lock\") pod \"calico-node-8tdsn\" (UID: \"b39210ed-5916-417e-b205-d6abb56d3e84\") " pod="calico-system/calico-node-8tdsn" Sep 9 23:37:06.682849 kubelet[1793]: I0909 23:37:06.682735 1793 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/b39210ed-5916-417e-b205-d6abb56d3e84-var-run-calico\") pod \"calico-node-8tdsn\" (UID: \"b39210ed-5916-417e-b205-d6abb56d3e84\") " 
pod="calico-system/calico-node-8tdsn" Sep 9 23:37:06.682948 kubelet[1793]: I0909 23:37:06.682749 1793 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/28f298ba-9efa-4084-9943-ac175300896f-kube-proxy\") pod \"kube-proxy-9kk74\" (UID: \"28f298ba-9efa-4084-9943-ac175300896f\") " pod="kube-system/kube-proxy-9kk74" Sep 9 23:37:06.682948 kubelet[1793]: I0909 23:37:06.682763 1793 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/b39210ed-5916-417e-b205-d6abb56d3e84-cni-log-dir\") pod \"calico-node-8tdsn\" (UID: \"b39210ed-5916-417e-b205-d6abb56d3e84\") " pod="calico-system/calico-node-8tdsn" Sep 9 23:37:06.682948 kubelet[1793]: I0909 23:37:06.682777 1793 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/b39210ed-5916-417e-b205-d6abb56d3e84-policysync\") pod \"calico-node-8tdsn\" (UID: \"b39210ed-5916-417e-b205-d6abb56d3e84\") " pod="calico-system/calico-node-8tdsn" Sep 9 23:37:06.785211 kubelet[1793]: E0909 23:37:06.785175 1793 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 9 23:37:06.785211 kubelet[1793]: W0909 23:37:06.785201 1793 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 9 23:37:06.785348 kubelet[1793]: E0909 23:37:06.785221 1793 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 9 23:37:06.785516 kubelet[1793]: E0909 23:37:06.785496 1793 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 9 23:37:06.785516 kubelet[1793]: W0909 23:37:06.785508 1793 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 9 23:37:06.785516 kubelet[1793]: E0909 23:37:06.785518 1793 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 9 23:37:06.785839 kubelet[1793]: E0909 23:37:06.785821 1793 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 9 23:37:06.785839 kubelet[1793]: W0909 23:37:06.785833 1793 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 9 23:37:06.785917 kubelet[1793]: E0909 23:37:06.785842 1793 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 9 23:37:06.786130 kubelet[1793]: E0909 23:37:06.786107 1793 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 9 23:37:06.786130 kubelet[1793]: W0909 23:37:06.786125 1793 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 9 23:37:06.786259 kubelet[1793]: E0909 23:37:06.786149 1793 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 9 23:37:06.786582 kubelet[1793]: E0909 23:37:06.786374 1793 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 9 23:37:06.786582 kubelet[1793]: W0909 23:37:06.786387 1793 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 9 23:37:06.786582 kubelet[1793]: E0909 23:37:06.786396 1793 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 9 23:37:06.786675 kubelet[1793]: E0909 23:37:06.786623 1793 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 9 23:37:06.786675 kubelet[1793]: W0909 23:37:06.786634 1793 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 9 23:37:06.786675 kubelet[1793]: E0909 23:37:06.786644 1793 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 9 23:37:06.786888 kubelet[1793]: E0909 23:37:06.786849 1793 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 9 23:37:06.786888 kubelet[1793]: W0909 23:37:06.786878 1793 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 9 23:37:06.786888 kubelet[1793]: E0909 23:37:06.786888 1793 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 9 23:37:06.787123 kubelet[1793]: E0909 23:37:06.787105 1793 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 9 23:37:06.787123 kubelet[1793]: W0909 23:37:06.787118 1793 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 9 23:37:06.787204 kubelet[1793]: E0909 23:37:06.787127 1793 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 9 23:37:06.787798 kubelet[1793]: E0909 23:37:06.787312 1793 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 9 23:37:06.787798 kubelet[1793]: W0909 23:37:06.787324 1793 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 9 23:37:06.787798 kubelet[1793]: E0909 23:37:06.787333 1793 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 9 23:37:06.789192 kubelet[1793]: E0909 23:37:06.789165 1793 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 9 23:37:06.789192 kubelet[1793]: W0909 23:37:06.789185 1793 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 9 23:37:06.789279 kubelet[1793]: E0909 23:37:06.789202 1793 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 9 23:37:06.796322 kubelet[1793]: E0909 23:37:06.796294 1793 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 9 23:37:06.796322 kubelet[1793]: W0909 23:37:06.796316 1793 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 9 23:37:06.796421 kubelet[1793]: E0909 23:37:06.796334 1793 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 9 23:37:06.797328 kubelet[1793]: E0909 23:37:06.797306 1793 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 9 23:37:06.797328 kubelet[1793]: W0909 23:37:06.797321 1793 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 9 23:37:06.797328 kubelet[1793]: E0909 23:37:06.797333 1793 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 9 23:37:06.799032 kubelet[1793]: E0909 23:37:06.798976 1793 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 9 23:37:06.800103 kubelet[1793]: W0909 23:37:06.799098 1793 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 9 23:37:06.800103 kubelet[1793]: E0909 23:37:06.799151 1793 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 9 23:37:06.923211 containerd[1483]: time="2025-09-09T23:37:06.923163653Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-9kk74,Uid:28f298ba-9efa-4084-9943-ac175300896f,Namespace:kube-system,Attempt:0,}" Sep 9 23:37:06.926683 containerd[1483]: time="2025-09-09T23:37:06.926645580Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-8tdsn,Uid:b39210ed-5916-417e-b205-d6abb56d3e84,Namespace:calico-system,Attempt:0,}" Sep 9 23:37:07.438676 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2826524168.mount: Deactivated successfully. Sep 9 23:37:07.447286 containerd[1483]: time="2025-09-09T23:37:07.447230746Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Sep 9 23:37:07.450371 containerd[1483]: time="2025-09-09T23:37:07.450279651Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=269175" Sep 9 23:37:07.451121 containerd[1483]: time="2025-09-09T23:37:07.451081990Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Sep 9 23:37:07.452344 containerd[1483]: time="2025-09-09T23:37:07.452156641Z" level=info msg="ImageCreate event name:\"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Sep 9 23:37:07.452344 containerd[1483]: time="2025-09-09T23:37:07.452251246Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Sep 9 23:37:07.454633 containerd[1483]: time="2025-09-09T23:37:07.454601897Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Sep 9 23:37:07.457124 containerd[1483]: time="2025-09-09T23:37:07.457080393Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 530.353223ms" Sep 9 23:37:07.459668 containerd[1483]: time="2025-09-09T23:37:07.459632393Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 536.379031ms" Sep 9 23:37:07.542892 containerd[1483]: time="2025-09-09T23:37:07.542678393Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 9 23:37:07.542892 containerd[1483]: time="2025-09-09T23:37:07.542745469Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 9 23:37:07.542892 containerd[1483]: time="2025-09-09T23:37:07.542756576Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 9 23:37:07.542892 containerd[1483]: time="2025-09-09T23:37:07.542838409Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 9 23:37:07.544558 containerd[1483]: time="2025-09-09T23:37:07.544302627Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 9 23:37:07.544558 containerd[1483]: time="2025-09-09T23:37:07.544354789Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 9 23:37:07.544558 containerd[1483]: time="2025-09-09T23:37:07.544370655Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 9 23:37:07.544558 containerd[1483]: time="2025-09-09T23:37:07.544436542Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 9 23:37:07.567832 kubelet[1793]: E0909 23:37:07.567758 1793 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Sep 9 23:37:07.618271 systemd[1]: Started cri-containerd-3ed3aab207d1a5d4d48fae3e7909bdb5a20d80096f6bed680d23740a8422cd7e.scope - libcontainer container 3ed3aab207d1a5d4d48fae3e7909bdb5a20d80096f6bed680d23740a8422cd7e. Sep 9 23:37:07.621228 systemd[1]: Started cri-containerd-11ddc133d52b0e8d63c577943b9521ca982f63bf67af3b57ffa91a677b549265.scope - libcontainer container 11ddc133d52b0e8d63c577943b9521ca982f63bf67af3b57ffa91a677b549265. Sep 9 23:37:07.641604 containerd[1483]: time="2025-09-09T23:37:07.641559592Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-8tdsn,Uid:b39210ed-5916-417e-b205-d6abb56d3e84,Namespace:calico-system,Attempt:0,} returns sandbox id \"3ed3aab207d1a5d4d48fae3e7909bdb5a20d80096f6bed680d23740a8422cd7e\"" Sep 9 23:37:07.643975 containerd[1483]: time="2025-09-09T23:37:07.643945666Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.3\"" Sep 9 23:37:07.647219 containerd[1483]: time="2025-09-09T23:37:07.647185922Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-9kk74,Uid:28f298ba-9efa-4084-9943-ac175300896f,Namespace:kube-system,Attempt:0,} returns sandbox id \"11ddc133d52b0e8d63c577943b9521ca982f63bf67af3b57ffa91a677b549265\"" Sep 9 23:37:07.704815 kubelet[1793]: E0909 23:37:07.704390 1793 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-nhggd" podUID="efe790cf-787d-4a22-b1c4-a59cfa68b55a" Sep 9 23:37:08.508101 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1651034690.mount: Deactivated successfully. 
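Annotation: the repeated driver-call failures earlier in this log come from the kubelet's FlexVolume probe exec'ing /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds with the argument init and reading no output (the binary is not present), so the JSON decode fails with "unexpected end of JSON input". Below is a minimal, hypothetical sketch of the JSON handshake a FlexVolume driver is expected to perform on init; it is not the missing uds binary, only an illustration of the protocol.

// flexvolume_stub.go: illustrative FlexVolume driver stub (assumed placement under
// /opt/libexec/kubernetes/kubelet-plugins/volume/exec/<vendor~driver>/<driver>).
package main

import (
	"encoding/json"
	"fmt"
	"os"
)

// driverStatus mirrors the fields the kubelet parses from a FlexVolume call.
type driverStatus struct {
	Status       string          `json:"status"`
	Message      string          `json:"message,omitempty"`
	Capabilities map[string]bool `json:"capabilities,omitempty"`
}

func main() {
	if len(os.Args) > 1 && os.Args[1] == "init" {
		// The kubelet expects valid JSON on stdout; an empty reply produces the
		// "unexpected end of JSON input" errors seen above.
		out, _ := json.Marshal(driverStatus{
			Status:       "Success",
			Capabilities: map[string]bool{"attach": false},
		})
		fmt.Println(string(out))
		return
	}
	// Verbs this stub does not implement are reported as unsupported.
	out, _ := json.Marshal(driverStatus{Status: "Not supported"})
	fmt.Println(string(out))
}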
Sep 9 23:37:08.569230 kubelet[1793]: E0909 23:37:08.569190 1793 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Sep 9 23:37:08.592122 containerd[1483]: time="2025-09-09T23:37:08.591849371Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 9 23:37:08.592628 containerd[1483]: time="2025-09-09T23:37:08.592575558Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.3: active requests=0, bytes read=5636193" Sep 9 23:37:08.593173 containerd[1483]: time="2025-09-09T23:37:08.593144460Z" level=info msg="ImageCreate event name:\"sha256:29e6f31ad72882b1b817dd257df6b7981e4d7d31d872b7fe2cf102c6e2af27a5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 9 23:37:08.596296 containerd[1483]: time="2025-09-09T23:37:08.596244256Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:81bdfcd9dbd36624dc35354e8c181c75631ba40e6c7df5820f5f56cea36f0ef9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 9 23:37:08.597341 containerd[1483]: time="2025-09-09T23:37:08.597304311Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.3\" with image id \"sha256:29e6f31ad72882b1b817dd257df6b7981e4d7d31d872b7fe2cf102c6e2af27a5\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:81bdfcd9dbd36624dc35354e8c181c75631ba40e6c7df5820f5f56cea36f0ef9\", size \"5636015\" in 953.321229ms" Sep 9 23:37:08.597400 containerd[1483]: time="2025-09-09T23:37:08.597350214Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.3\" returns image reference \"sha256:29e6f31ad72882b1b817dd257df6b7981e4d7d31d872b7fe2cf102c6e2af27a5\"" Sep 9 23:37:08.598298 containerd[1483]: time="2025-09-09T23:37:08.598256479Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.33.4\"" Sep 9 23:37:08.603190 containerd[1483]: time="2025-09-09T23:37:08.603150181Z" level=info msg="CreateContainer within sandbox \"3ed3aab207d1a5d4d48fae3e7909bdb5a20d80096f6bed680d23740a8422cd7e\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}" Sep 9 23:37:08.617279 containerd[1483]: time="2025-09-09T23:37:08.617220096Z" level=info msg="CreateContainer within sandbox \"3ed3aab207d1a5d4d48fae3e7909bdb5a20d80096f6bed680d23740a8422cd7e\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"5eab8e63f120d7d4c2f13174a25ade365221a479a05adfe7360c089bcf6e103a\"" Sep 9 23:37:08.618153 containerd[1483]: time="2025-09-09T23:37:08.618121675Z" level=info msg="StartContainer for \"5eab8e63f120d7d4c2f13174a25ade365221a479a05adfe7360c089bcf6e103a\"" Sep 9 23:37:08.646265 systemd[1]: Started cri-containerd-5eab8e63f120d7d4c2f13174a25ade365221a479a05adfe7360c089bcf6e103a.scope - libcontainer container 5eab8e63f120d7d4c2f13174a25ade365221a479a05adfe7360c089bcf6e103a. Sep 9 23:37:08.675069 containerd[1483]: time="2025-09-09T23:37:08.674925763Z" level=info msg="StartContainer for \"5eab8e63f120d7d4c2f13174a25ade365221a479a05adfe7360c089bcf6e103a\" returns successfully" Sep 9 23:37:08.689462 systemd[1]: cri-containerd-5eab8e63f120d7d4c2f13174a25ade365221a479a05adfe7360c089bcf6e103a.scope: Deactivated successfully. 
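Annotation: the recurring "Unable to read config path ... /etc/kubernetes/manifests" entries are the kubelet's file-based static-pod source noticing that its configured directory does not exist; on a node that runs no static pods this is harmless. A small illustrative sketch of that check (assumed semantics, standard library only):

package main

import (
	"fmt"
	"os"
	"path/filepath"
)

func main() {
	// A missing static-pod directory is treated as "no static pods", not as a fatal error.
	dir := "/etc/kubernetes/manifests"
	entries, err := os.ReadDir(dir)
	if os.IsNotExist(err) {
		fmt.Printf("path does not exist, ignoring: %s\n", dir)
		return
	} else if err != nil {
		fmt.Printf("error reading %s: %v\n", dir, err)
		return
	}
	for _, e := range entries {
		ext := filepath.Ext(e.Name())
		if ext == ".yaml" || ext == ".yml" || ext == ".json" {
			fmt.Println("static pod manifest:", filepath.Join(dir, e.Name()))
		}
	}
}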
Sep 9 23:37:08.735076 containerd[1483]: time="2025-09-09T23:37:08.734887940Z" level=info msg="shim disconnected" id=5eab8e63f120d7d4c2f13174a25ade365221a479a05adfe7360c089bcf6e103a namespace=k8s.io Sep 9 23:37:08.735076 containerd[1483]: time="2025-09-09T23:37:08.734935550Z" level=warning msg="cleaning up after shim disconnected" id=5eab8e63f120d7d4c2f13174a25ade365221a479a05adfe7360c089bcf6e103a namespace=k8s.io Sep 9 23:37:08.735076 containerd[1483]: time="2025-09-09T23:37:08.734943492Z" level=info msg="cleaning up dead shim" namespace=k8s.io Sep 9 23:37:09.569410 kubelet[1793]: E0909 23:37:09.569358 1793 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Sep 9 23:37:09.610887 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount488685027.mount: Deactivated successfully. Sep 9 23:37:09.704241 kubelet[1793]: E0909 23:37:09.703888 1793 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-nhggd" podUID="efe790cf-787d-4a22-b1c4-a59cfa68b55a" Sep 9 23:37:09.866441 containerd[1483]: time="2025-09-09T23:37:09.866388112Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.33.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 9 23:37:09.867264 containerd[1483]: time="2025-09-09T23:37:09.867222157Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.33.4: active requests=0, bytes read=28199961" Sep 9 23:37:09.867915 containerd[1483]: time="2025-09-09T23:37:09.867867613Z" level=info msg="ImageCreate event name:\"sha256:e19c0cda155dad39120317830ddb8b2bc22070f2c6a97973e96fb09ef504ee64\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 9 23:37:09.869934 containerd[1483]: time="2025-09-09T23:37:09.869670718Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:bb04e9247da3aaeb96406b4d530a79fc865695b6807353dd1a28871df0d7f837\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 9 23:37:09.870675 containerd[1483]: time="2025-09-09T23:37:09.870633576Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.33.4\" with image id \"sha256:e19c0cda155dad39120317830ddb8b2bc22070f2c6a97973e96fb09ef504ee64\", repo tag \"registry.k8s.io/kube-proxy:v1.33.4\", repo digest \"registry.k8s.io/kube-proxy@sha256:bb04e9247da3aaeb96406b4d530a79fc865695b6807353dd1a28871df0d7f837\", size \"28198978\" in 1.272333887s" Sep 9 23:37:09.870675 containerd[1483]: time="2025-09-09T23:37:09.870667519Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.33.4\" returns image reference \"sha256:e19c0cda155dad39120317830ddb8b2bc22070f2c6a97973e96fb09ef504ee64\"" Sep 9 23:37:09.871751 containerd[1483]: time="2025-09-09T23:37:09.871720956Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.3\"" Sep 9 23:37:09.874235 containerd[1483]: time="2025-09-09T23:37:09.874204015Z" level=info msg="CreateContainer within sandbox \"11ddc133d52b0e8d63c577943b9521ca982f63bf67af3b57ffa91a677b549265\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Sep 9 23:37:09.887330 containerd[1483]: time="2025-09-09T23:37:09.887284162Z" level=info msg="CreateContainer within sandbox \"11ddc133d52b0e8d63c577943b9521ca982f63bf67af3b57ffa91a677b549265\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id 
\"0b9ccca6c4230d50365afee64a619239e5b24fe0601b7621552c0db619b96c31\"" Sep 9 23:37:09.887999 containerd[1483]: time="2025-09-09T23:37:09.887974888Z" level=info msg="StartContainer for \"0b9ccca6c4230d50365afee64a619239e5b24fe0601b7621552c0db619b96c31\"" Sep 9 23:37:09.914239 systemd[1]: Started cri-containerd-0b9ccca6c4230d50365afee64a619239e5b24fe0601b7621552c0db619b96c31.scope - libcontainer container 0b9ccca6c4230d50365afee64a619239e5b24fe0601b7621552c0db619b96c31. Sep 9 23:37:09.936773 containerd[1483]: time="2025-09-09T23:37:09.936729650Z" level=info msg="StartContainer for \"0b9ccca6c4230d50365afee64a619239e5b24fe0601b7621552c0db619b96c31\" returns successfully" Sep 9 23:37:10.570114 kubelet[1793]: E0909 23:37:10.570066 1793 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Sep 9 23:37:10.733186 kubelet[1793]: I0909 23:37:10.733112 1793 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-9kk74" podStartSLOduration=3.510067424 podStartE2EDuration="5.733096071s" podCreationTimestamp="2025-09-09 23:37:05 +0000 UTC" firstStartedPulling="2025-09-09 23:37:07.648569776 +0000 UTC m=+2.642219124" lastFinishedPulling="2025-09-09 23:37:09.871598422 +0000 UTC m=+4.865247771" observedRunningTime="2025-09-09 23:37:10.732216926 +0000 UTC m=+5.725866235" watchObservedRunningTime="2025-09-09 23:37:10.733096071 +0000 UTC m=+5.726745460" Sep 9 23:37:11.570562 kubelet[1793]: E0909 23:37:11.570509 1793 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Sep 9 23:37:11.615511 containerd[1483]: time="2025-09-09T23:37:11.615467224Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 9 23:37:11.616228 containerd[1483]: time="2025-09-09T23:37:11.616193857Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.30.3: active requests=0, bytes read=65913477" Sep 9 23:37:11.616731 containerd[1483]: time="2025-09-09T23:37:11.616703357Z" level=info msg="ImageCreate event name:\"sha256:7077a1dc632ee598cbfa626f9e3c9bca5b20c0d1e1e557995890125b2e8d2e23\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 9 23:37:11.619525 containerd[1483]: time="2025-09-09T23:37:11.619486179Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:73d1e391050490d54e5bee8ff2b1a50a8be1746c98dc530361b00e8c0ab63f87\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 9 23:37:11.620425 containerd[1483]: time="2025-09-09T23:37:11.620031782Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.30.3\" with image id \"sha256:7077a1dc632ee598cbfa626f9e3c9bca5b20c0d1e1e557995890125b2e8d2e23\", repo tag \"ghcr.io/flatcar/calico/cni:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:73d1e391050490d54e5bee8ff2b1a50a8be1746c98dc530361b00e8c0ab63f87\", size \"67282718\" in 1.748275123s" Sep 9 23:37:11.620425 containerd[1483]: time="2025-09-09T23:37:11.620061914Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.3\" returns image reference \"sha256:7077a1dc632ee598cbfa626f9e3c9bca5b20c0d1e1e557995890125b2e8d2e23\"" Sep 9 23:37:11.623509 containerd[1483]: time="2025-09-09T23:37:11.623480456Z" level=info msg="CreateContainer within sandbox \"3ed3aab207d1a5d4d48fae3e7909bdb5a20d80096f6bed680d23740a8422cd7e\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" Sep 9 23:37:11.634832 containerd[1483]: 
time="2025-09-09T23:37:11.634796121Z" level=info msg="CreateContainer within sandbox \"3ed3aab207d1a5d4d48fae3e7909bdb5a20d80096f6bed680d23740a8422cd7e\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"d4d83e657c513b72cf546c2ab15082e058af54e4cd404a17b6c22c76962d69e1\"" Sep 9 23:37:11.635320 containerd[1483]: time="2025-09-09T23:37:11.635266692Z" level=info msg="StartContainer for \"d4d83e657c513b72cf546c2ab15082e058af54e4cd404a17b6c22c76962d69e1\"" Sep 9 23:37:11.662280 systemd[1]: Started cri-containerd-d4d83e657c513b72cf546c2ab15082e058af54e4cd404a17b6c22c76962d69e1.scope - libcontainer container d4d83e657c513b72cf546c2ab15082e058af54e4cd404a17b6c22c76962d69e1. Sep 9 23:37:11.687776 containerd[1483]: time="2025-09-09T23:37:11.687637665Z" level=info msg="StartContainer for \"d4d83e657c513b72cf546c2ab15082e058af54e4cd404a17b6c22c76962d69e1\" returns successfully" Sep 9 23:37:11.705470 kubelet[1793]: E0909 23:37:11.705414 1793 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-nhggd" podUID="efe790cf-787d-4a22-b1c4-a59cfa68b55a" Sep 9 23:37:12.185422 containerd[1483]: time="2025-09-09T23:37:12.185377798Z" level=error msg="failed to reload cni configuration after receiving fs change event(WRITE \"/etc/cni/net.d/calico-kubeconfig\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Sep 9 23:37:12.187491 systemd[1]: cri-containerd-d4d83e657c513b72cf546c2ab15082e058af54e4cd404a17b6c22c76962d69e1.scope: Deactivated successfully. Sep 9 23:37:12.187892 systemd[1]: cri-containerd-d4d83e657c513b72cf546c2ab15082e058af54e4cd404a17b6c22c76962d69e1.scope: Consumed 437ms CPU time, 187.2M memory peak, 165.8M written to disk. Sep 9 23:37:12.200490 kubelet[1793]: I0909 23:37:12.200187 1793 kubelet_node_status.go:501] "Fast updating node status as it just became ready" Sep 9 23:37:12.205711 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-d4d83e657c513b72cf546c2ab15082e058af54e4cd404a17b6c22c76962d69e1-rootfs.mount: Deactivated successfully. Sep 9 23:37:12.325338 containerd[1483]: time="2025-09-09T23:37:12.325277670Z" level=info msg="shim disconnected" id=d4d83e657c513b72cf546c2ab15082e058af54e4cd404a17b6c22c76962d69e1 namespace=k8s.io Sep 9 23:37:12.325338 containerd[1483]: time="2025-09-09T23:37:12.325331878Z" level=warning msg="cleaning up after shim disconnected" id=d4d83e657c513b72cf546c2ab15082e058af54e4cd404a17b6c22c76962d69e1 namespace=k8s.io Sep 9 23:37:12.325338 containerd[1483]: time="2025-09-09T23:37:12.325340202Z" level=info msg="cleaning up dead shim" namespace=k8s.io Sep 9 23:37:12.571011 kubelet[1793]: E0909 23:37:12.570874 1793 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Sep 9 23:37:12.728513 containerd[1483]: time="2025-09-09T23:37:12.728477062Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.3\"" Sep 9 23:37:13.571815 kubelet[1793]: E0909 23:37:13.571757 1793 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Sep 9 23:37:13.711668 systemd[1]: Created slice kubepods-besteffort-podefe790cf_787d_4a22_b1c4_a59cfa68b55a.slice - libcontainer container kubepods-besteffort-podefe790cf_787d_4a22_b1c4_a59cfa68b55a.slice. 
Sep 9 23:37:13.716591 containerd[1483]: time="2025-09-09T23:37:13.716296324Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-nhggd,Uid:efe790cf-787d-4a22-b1c4-a59cfa68b55a,Namespace:calico-system,Attempt:0,}" Sep 9 23:37:13.794601 containerd[1483]: time="2025-09-09T23:37:13.794446928Z" level=error msg="Failed to destroy network for sandbox \"9360822e2dd86ef63cad06c7a16bf234acaed9fd871e2af777461877ffe20249\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 9 23:37:13.796774 containerd[1483]: time="2025-09-09T23:37:13.795214367Z" level=error msg="encountered an error cleaning up failed sandbox \"9360822e2dd86ef63cad06c7a16bf234acaed9fd871e2af777461877ffe20249\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 9 23:37:13.796774 containerd[1483]: time="2025-09-09T23:37:13.795279124Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-nhggd,Uid:efe790cf-787d-4a22-b1c4-a59cfa68b55a,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"9360822e2dd86ef63cad06c7a16bf234acaed9fd871e2af777461877ffe20249\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 9 23:37:13.796465 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-9360822e2dd86ef63cad06c7a16bf234acaed9fd871e2af777461877ffe20249-shm.mount: Deactivated successfully. 
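Annotation: the RunPodSandbox attempts for csi-node-driver-nhggd keep failing with the same message because the Calico CNI plugin stats /var/lib/calico/nodename, a file the calico/node container writes after it starts; until that container is up, both the add and the cleanup (delete) of the sandbox network fail. A minimal reproduction of the failing check (illustrative only):

package main

import (
	"fmt"
	"os"
	"strings"
)

func main() {
	// Calico's CNI plugin reads its node name from this file; while calico/node
	// has not started yet, the lookup fails exactly as in the log above.
	const nodenameFile = "/var/lib/calico/nodename"
	data, err := os.ReadFile(nodenameFile)
	if err != nil {
		fmt.Printf("%v: check that the calico/node container is running and has mounted /var/lib/calico/\n", err)
		return
	}
	fmt.Println("calico node name:", strings.TrimSpace(string(data)))
}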
Sep 9 23:37:13.797568 kubelet[1793]: E0909 23:37:13.796991 1793 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"9360822e2dd86ef63cad06c7a16bf234acaed9fd871e2af777461877ffe20249\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 9 23:37:13.797568 kubelet[1793]: E0909 23:37:13.797204 1793 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"9360822e2dd86ef63cad06c7a16bf234acaed9fd871e2af777461877ffe20249\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-nhggd" Sep 9 23:37:13.797568 kubelet[1793]: E0909 23:37:13.797228 1793 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"9360822e2dd86ef63cad06c7a16bf234acaed9fd871e2af777461877ffe20249\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-nhggd" Sep 9 23:37:13.797645 kubelet[1793]: E0909 23:37:13.797283 1793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-nhggd_calico-system(efe790cf-787d-4a22-b1c4-a59cfa68b55a)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-nhggd_calico-system(efe790cf-787d-4a22-b1c4-a59cfa68b55a)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"9360822e2dd86ef63cad06c7a16bf234acaed9fd871e2af777461877ffe20249\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-nhggd" podUID="efe790cf-787d-4a22-b1c4-a59cfa68b55a" Sep 9 23:37:14.572285 kubelet[1793]: E0909 23:37:14.572232 1793 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Sep 9 23:37:14.736130 kubelet[1793]: I0909 23:37:14.735660 1793 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="9360822e2dd86ef63cad06c7a16bf234acaed9fd871e2af777461877ffe20249" Sep 9 23:37:14.736310 containerd[1483]: time="2025-09-09T23:37:14.736270753Z" level=info msg="StopPodSandbox for \"9360822e2dd86ef63cad06c7a16bf234acaed9fd871e2af777461877ffe20249\"" Sep 9 23:37:14.736451 containerd[1483]: time="2025-09-09T23:37:14.736432104Z" level=info msg="Ensure that sandbox 9360822e2dd86ef63cad06c7a16bf234acaed9fd871e2af777461877ffe20249 in task-service has been cleanup successfully" Sep 9 23:37:14.738415 containerd[1483]: time="2025-09-09T23:37:14.736613229Z" level=info msg="TearDown network for sandbox \"9360822e2dd86ef63cad06c7a16bf234acaed9fd871e2af777461877ffe20249\" successfully" Sep 9 23:37:14.738415 containerd[1483]: time="2025-09-09T23:37:14.736675425Z" level=info msg="StopPodSandbox for \"9360822e2dd86ef63cad06c7a16bf234acaed9fd871e2af777461877ffe20249\" returns successfully" Sep 9 23:37:14.738415 containerd[1483]: time="2025-09-09T23:37:14.738219357Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:csi-node-driver-nhggd,Uid:efe790cf-787d-4a22-b1c4-a59cfa68b55a,Namespace:calico-system,Attempt:1,}" Sep 9 23:37:14.737838 systemd[1]: run-netns-cni\x2d357c6ccc\x2d9373\x2d5977\x2d714f\x2d301839578b99.mount: Deactivated successfully. Sep 9 23:37:14.811118 containerd[1483]: time="2025-09-09T23:37:14.808100490Z" level=error msg="Failed to destroy network for sandbox \"b86ecc09cb727f56f84e89f9b3eab1715c09b7d72b12f275ae61c293decd4c76\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 9 23:37:14.809512 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-b86ecc09cb727f56f84e89f9b3eab1715c09b7d72b12f275ae61c293decd4c76-shm.mount: Deactivated successfully. Sep 9 23:37:14.811475 containerd[1483]: time="2025-09-09T23:37:14.811271521Z" level=error msg="encountered an error cleaning up failed sandbox \"b86ecc09cb727f56f84e89f9b3eab1715c09b7d72b12f275ae61c293decd4c76\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 9 23:37:14.811475 containerd[1483]: time="2025-09-09T23:37:14.811336747Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-nhggd,Uid:efe790cf-787d-4a22-b1c4-a59cfa68b55a,Namespace:calico-system,Attempt:1,} failed, error" error="failed to setup network for sandbox \"b86ecc09cb727f56f84e89f9b3eab1715c09b7d72b12f275ae61c293decd4c76\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 9 23:37:14.811992 kubelet[1793]: E0909 23:37:14.811582 1793 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b86ecc09cb727f56f84e89f9b3eab1715c09b7d72b12f275ae61c293decd4c76\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 9 23:37:14.811992 kubelet[1793]: E0909 23:37:14.811643 1793 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b86ecc09cb727f56f84e89f9b3eab1715c09b7d72b12f275ae61c293decd4c76\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-nhggd" Sep 9 23:37:14.811992 kubelet[1793]: E0909 23:37:14.811676 1793 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b86ecc09cb727f56f84e89f9b3eab1715c09b7d72b12f275ae61c293decd4c76\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-nhggd" Sep 9 23:37:14.812142 kubelet[1793]: E0909 23:37:14.811719 1793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-nhggd_calico-system(efe790cf-787d-4a22-b1c4-a59cfa68b55a)\" with CreatePodSandboxError: \"Failed to create sandbox for pod 
\\\"csi-node-driver-nhggd_calico-system(efe790cf-787d-4a22-b1c4-a59cfa68b55a)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"b86ecc09cb727f56f84e89f9b3eab1715c09b7d72b12f275ae61c293decd4c76\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-nhggd" podUID="efe790cf-787d-4a22-b1c4-a59cfa68b55a" Sep 9 23:37:15.573262 kubelet[1793]: E0909 23:37:15.573207 1793 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Sep 9 23:37:15.626697 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4068545613.mount: Deactivated successfully. Sep 9 23:37:15.650864 containerd[1483]: time="2025-09-09T23:37:15.650813223Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 9 23:37:15.651323 containerd[1483]: time="2025-09-09T23:37:15.651283632Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.30.3: active requests=0, bytes read=151100457" Sep 9 23:37:15.652175 containerd[1483]: time="2025-09-09T23:37:15.652137540Z" level=info msg="ImageCreate event name:\"sha256:2b8abd2140fc4464ed664d225fe38e5b90bbfcf62996b484b0fc0e0537b6a4a9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 9 23:37:15.654066 containerd[1483]: time="2025-09-09T23:37:15.654038243Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:bcb8146fcaeced1e1c88fad3eaa697f1680746bd23c3e7e8d4535bc484c6f2a1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 9 23:37:15.654615 containerd[1483]: time="2025-09-09T23:37:15.654578413Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.30.3\" with image id \"sha256:2b8abd2140fc4464ed664d225fe38e5b90bbfcf62996b484b0fc0e0537b6a4a9\", repo tag \"ghcr.io/flatcar/calico/node:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/node@sha256:bcb8146fcaeced1e1c88fad3eaa697f1680746bd23c3e7e8d4535bc484c6f2a1\", size \"151100319\" in 2.926061187s" Sep 9 23:37:15.654615 containerd[1483]: time="2025-09-09T23:37:15.654613392Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.3\" returns image reference \"sha256:2b8abd2140fc4464ed664d225fe38e5b90bbfcf62996b484b0fc0e0537b6a4a9\"" Sep 9 23:37:15.662437 containerd[1483]: time="2025-09-09T23:37:15.662403187Z" level=info msg="CreateContainer within sandbox \"3ed3aab207d1a5d4d48fae3e7909bdb5a20d80096f6bed680d23740a8422cd7e\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Sep 9 23:37:15.673662 containerd[1483]: time="2025-09-09T23:37:15.673607898Z" level=info msg="CreateContainer within sandbox \"3ed3aab207d1a5d4d48fae3e7909bdb5a20d80096f6bed680d23740a8422cd7e\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"842a7df1391236621c863c42f56b9c207593195c8a0004fb3bc3387a7e64cf8f\"" Sep 9 23:37:15.674345 containerd[1483]: time="2025-09-09T23:37:15.674318219Z" level=info msg="StartContainer for \"842a7df1391236621c863c42f56b9c207593195c8a0004fb3bc3387a7e64cf8f\"" Sep 9 23:37:15.697232 systemd[1]: Started cri-containerd-842a7df1391236621c863c42f56b9c207593195c8a0004fb3bc3387a7e64cf8f.scope - libcontainer container 842a7df1391236621c863c42f56b9c207593195c8a0004fb3bc3387a7e64cf8f. 
Sep 9 23:37:15.724888 containerd[1483]: time="2025-09-09T23:37:15.724837377Z" level=info msg="StartContainer for \"842a7df1391236621c863c42f56b9c207593195c8a0004fb3bc3387a7e64cf8f\" returns successfully" Sep 9 23:37:15.741128 kubelet[1793]: I0909 23:37:15.740640 1793 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b86ecc09cb727f56f84e89f9b3eab1715c09b7d72b12f275ae61c293decd4c76" Sep 9 23:37:15.741461 containerd[1483]: time="2025-09-09T23:37:15.741414623Z" level=info msg="StopPodSandbox for \"b86ecc09cb727f56f84e89f9b3eab1715c09b7d72b12f275ae61c293decd4c76\"" Sep 9 23:37:15.741640 containerd[1483]: time="2025-09-09T23:37:15.741610501Z" level=info msg="Ensure that sandbox b86ecc09cb727f56f84e89f9b3eab1715c09b7d72b12f275ae61c293decd4c76 in task-service has been cleanup successfully" Sep 9 23:37:15.742229 containerd[1483]: time="2025-09-09T23:37:15.742042221Z" level=info msg="TearDown network for sandbox \"b86ecc09cb727f56f84e89f9b3eab1715c09b7d72b12f275ae61c293decd4c76\" successfully" Sep 9 23:37:15.742229 containerd[1483]: time="2025-09-09T23:37:15.742066192Z" level=info msg="StopPodSandbox for \"b86ecc09cb727f56f84e89f9b3eab1715c09b7d72b12f275ae61c293decd4c76\" returns successfully" Sep 9 23:37:15.743073 systemd[1]: run-netns-cni\x2de5e38179\x2d0587\x2ddd4b\x2dcb52\x2d538f2ade5c49.mount: Deactivated successfully. Sep 9 23:37:15.743456 containerd[1483]: time="2025-09-09T23:37:15.743418231Z" level=info msg="StopPodSandbox for \"9360822e2dd86ef63cad06c7a16bf234acaed9fd871e2af777461877ffe20249\"" Sep 9 23:37:15.745571 containerd[1483]: time="2025-09-09T23:37:15.745540099Z" level=info msg="TearDown network for sandbox \"9360822e2dd86ef63cad06c7a16bf234acaed9fd871e2af777461877ffe20249\" successfully" Sep 9 23:37:15.747044 containerd[1483]: time="2025-09-09T23:37:15.745657641Z" level=info msg="StopPodSandbox for \"9360822e2dd86ef63cad06c7a16bf234acaed9fd871e2af777461877ffe20249\" returns successfully" Sep 9 23:37:15.747044 containerd[1483]: time="2025-09-09T23:37:15.746683695Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-nhggd,Uid:efe790cf-787d-4a22-b1c4-a59cfa68b55a,Namespace:calico-system,Attempt:2,}" Sep 9 23:37:15.801405 containerd[1483]: time="2025-09-09T23:37:15.801343884Z" level=error msg="Failed to destroy network for sandbox \"12f65a2edb38a0e661d359529c24f4a07e80f52f9f1b5c25e979b4a9d618224a\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 9 23:37:15.802404 containerd[1483]: time="2025-09-09T23:37:15.801677446Z" level=error msg="encountered an error cleaning up failed sandbox \"12f65a2edb38a0e661d359529c24f4a07e80f52f9f1b5c25e979b4a9d618224a\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 9 23:37:15.802404 containerd[1483]: time="2025-09-09T23:37:15.801790482Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-nhggd,Uid:efe790cf-787d-4a22-b1c4-a59cfa68b55a,Namespace:calico-system,Attempt:2,} failed, error" error="failed to setup network for sandbox \"12f65a2edb38a0e661d359529c24f4a07e80f52f9f1b5c25e979b4a9d618224a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has 
mounted /var/lib/calico/" Sep 9 23:37:15.802487 kubelet[1793]: E0909 23:37:15.802003 1793 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"12f65a2edb38a0e661d359529c24f4a07e80f52f9f1b5c25e979b4a9d618224a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 9 23:37:15.802487 kubelet[1793]: E0909 23:37:15.802061 1793 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"12f65a2edb38a0e661d359529c24f4a07e80f52f9f1b5c25e979b4a9d618224a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-nhggd" Sep 9 23:37:15.802487 kubelet[1793]: E0909 23:37:15.802094 1793 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"12f65a2edb38a0e661d359529c24f4a07e80f52f9f1b5c25e979b4a9d618224a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-nhggd" Sep 9 23:37:15.802570 kubelet[1793]: E0909 23:37:15.802137 1793 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-nhggd_calico-system(efe790cf-787d-4a22-b1c4-a59cfa68b55a)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-nhggd_calico-system(efe790cf-787d-4a22-b1c4-a59cfa68b55a)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"12f65a2edb38a0e661d359529c24f4a07e80f52f9f1b5c25e979b4a9d618224a\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-nhggd" podUID="efe790cf-787d-4a22-b1c4-a59cfa68b55a" Sep 9 23:37:15.803146 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-12f65a2edb38a0e661d359529c24f4a07e80f52f9f1b5c25e979b4a9d618224a-shm.mount: Deactivated successfully. Sep 9 23:37:15.846559 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information. Sep 9 23:37:15.846697 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld . All Rights Reserved. 
Sep 9 23:37:16.573842 kubelet[1793]: E0909 23:37:16.573786 1793 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Sep 9 23:37:16.751678 kubelet[1793]: I0909 23:37:16.751291 1793 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="12f65a2edb38a0e661d359529c24f4a07e80f52f9f1b5c25e979b4a9d618224a" Sep 9 23:37:16.751678 kubelet[1793]: I0909 23:37:16.751359 1793 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Sep 9 23:37:16.752625 containerd[1483]: time="2025-09-09T23:37:16.752592559Z" level=info msg="StopPodSandbox for \"12f65a2edb38a0e661d359529c24f4a07e80f52f9f1b5c25e979b4a9d618224a\"" Sep 9 23:37:16.752930 containerd[1483]: time="2025-09-09T23:37:16.752766921Z" level=info msg="Ensure that sandbox 12f65a2edb38a0e661d359529c24f4a07e80f52f9f1b5c25e979b4a9d618224a in task-service has been cleanup successfully" Sep 9 23:37:16.752956 containerd[1483]: time="2025-09-09T23:37:16.752936615Z" level=info msg="TearDown network for sandbox \"12f65a2edb38a0e661d359529c24f4a07e80f52f9f1b5c25e979b4a9d618224a\" successfully" Sep 9 23:37:16.752976 containerd[1483]: time="2025-09-09T23:37:16.752951777Z" level=info msg="StopPodSandbox for \"12f65a2edb38a0e661d359529c24f4a07e80f52f9f1b5c25e979b4a9d618224a\" returns successfully" Sep 9 23:37:16.753281 containerd[1483]: time="2025-09-09T23:37:16.753240492Z" level=info msg="StopPodSandbox for \"b86ecc09cb727f56f84e89f9b3eab1715c09b7d72b12f275ae61c293decd4c76\"" Sep 9 23:37:16.753454 containerd[1483]: time="2025-09-09T23:37:16.753333738Z" level=info msg="TearDown network for sandbox \"b86ecc09cb727f56f84e89f9b3eab1715c09b7d72b12f275ae61c293decd4c76\" successfully" Sep 9 23:37:16.753454 containerd[1483]: time="2025-09-09T23:37:16.753347743Z" level=info msg="StopPodSandbox for \"b86ecc09cb727f56f84e89f9b3eab1715c09b7d72b12f275ae61c293decd4c76\" returns successfully" Sep 9 23:37:16.754128 containerd[1483]: time="2025-09-09T23:37:16.753574932Z" level=info msg="StopPodSandbox for \"9360822e2dd86ef63cad06c7a16bf234acaed9fd871e2af777461877ffe20249\"" Sep 9 23:37:16.754128 containerd[1483]: time="2025-09-09T23:37:16.753638971Z" level=info msg="TearDown network for sandbox \"9360822e2dd86ef63cad06c7a16bf234acaed9fd871e2af777461877ffe20249\" successfully" Sep 9 23:37:16.754128 containerd[1483]: time="2025-09-09T23:37:16.753648069Z" level=info msg="StopPodSandbox for \"9360822e2dd86ef63cad06c7a16bf234acaed9fd871e2af777461877ffe20249\" returns successfully" Sep 9 23:37:16.754444 systemd[1]: run-netns-cni\x2d02b445de\x2dd92d\x2d867a\x2dbf1b\x2dd3293667958c.mount: Deactivated successfully. 
Sep 9 23:37:16.755202 containerd[1483]: time="2025-09-09T23:37:16.754630442Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-nhggd,Uid:efe790cf-787d-4a22-b1c4-a59cfa68b55a,Namespace:calico-system,Attempt:3,}" Sep 9 23:37:16.894488 systemd-networkd[1394]: cali1792b50847e: Link UP Sep 9 23:37:16.894667 systemd-networkd[1394]: cali1792b50847e: Gained carrier Sep 9 23:37:16.908178 kubelet[1793]: I0909 23:37:16.908124 1793 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-8tdsn" podStartSLOduration=3.89629922 podStartE2EDuration="11.908106345s" podCreationTimestamp="2025-09-09 23:37:05 +0000 UTC" firstStartedPulling="2025-09-09 23:37:07.643603897 +0000 UTC m=+2.637253246" lastFinishedPulling="2025-09-09 23:37:15.655411062 +0000 UTC m=+10.649060371" observedRunningTime="2025-09-09 23:37:15.76537208 +0000 UTC m=+10.759021469" watchObservedRunningTime="2025-09-09 23:37:16.908106345 +0000 UTC m=+11.901755654" Sep 9 23:37:16.909625 containerd[1483]: 2025-09-09 23:37:16.785 [INFO][2429] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Sep 9 23:37:16.909625 containerd[1483]: 2025-09-09 23:37:16.803 [INFO][2429] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {10.0.0.80-k8s-csi--node--driver--nhggd-eth0 csi-node-driver- calico-system efe790cf-787d-4a22-b1c4-a59cfa68b55a 1144 0 2025-09-09 23:37:05 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:6c96d95cc7 k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s 10.0.0.80 csi-node-driver-nhggd eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] cali1792b50847e [] [] }} ContainerID="3b51da6ca275e0f95ed7dbfc95d45a1777574ec5b1830de59a9c8be224b6256c" Namespace="calico-system" Pod="csi-node-driver-nhggd" WorkloadEndpoint="10.0.0.80-k8s-csi--node--driver--nhggd-" Sep 9 23:37:16.909625 containerd[1483]: 2025-09-09 23:37:16.804 [INFO][2429] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="3b51da6ca275e0f95ed7dbfc95d45a1777574ec5b1830de59a9c8be224b6256c" Namespace="calico-system" Pod="csi-node-driver-nhggd" WorkloadEndpoint="10.0.0.80-k8s-csi--node--driver--nhggd-eth0" Sep 9 23:37:16.909625 containerd[1483]: 2025-09-09 23:37:16.843 [INFO][2444] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="3b51da6ca275e0f95ed7dbfc95d45a1777574ec5b1830de59a9c8be224b6256c" HandleID="k8s-pod-network.3b51da6ca275e0f95ed7dbfc95d45a1777574ec5b1830de59a9c8be224b6256c" Workload="10.0.0.80-k8s-csi--node--driver--nhggd-eth0" Sep 9 23:37:16.909625 containerd[1483]: 2025-09-09 23:37:16.844 [INFO][2444] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="3b51da6ca275e0f95ed7dbfc95d45a1777574ec5b1830de59a9c8be224b6256c" HandleID="k8s-pod-network.3b51da6ca275e0f95ed7dbfc95d45a1777574ec5b1830de59a9c8be224b6256c" Workload="10.0.0.80-k8s-csi--node--driver--nhggd-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x400004c470), Attrs:map[string]string{"namespace":"calico-system", "node":"10.0.0.80", "pod":"csi-node-driver-nhggd", "timestamp":"2025-09-09 23:37:16.843842368 +0000 UTC"}, Hostname:"10.0.0.80", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), 
HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 9 23:37:16.909625 containerd[1483]: 2025-09-09 23:37:16.844 [INFO][2444] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 9 23:37:16.909625 containerd[1483]: 2025-09-09 23:37:16.844 [INFO][2444] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 9 23:37:16.909625 containerd[1483]: 2025-09-09 23:37:16.844 [INFO][2444] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host '10.0.0.80' Sep 9 23:37:16.909625 containerd[1483]: 2025-09-09 23:37:16.854 [INFO][2444] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.3b51da6ca275e0f95ed7dbfc95d45a1777574ec5b1830de59a9c8be224b6256c" host="10.0.0.80" Sep 9 23:37:16.909625 containerd[1483]: 2025-09-09 23:37:16.860 [INFO][2444] ipam/ipam.go 394: Looking up existing affinities for host host="10.0.0.80" Sep 9 23:37:16.909625 containerd[1483]: 2025-09-09 23:37:16.867 [INFO][2444] ipam/ipam.go 511: Trying affinity for 192.168.6.0/26 host="10.0.0.80" Sep 9 23:37:16.909625 containerd[1483]: 2025-09-09 23:37:16.871 [INFO][2444] ipam/ipam.go 158: Attempting to load block cidr=192.168.6.0/26 host="10.0.0.80" Sep 9 23:37:16.909625 containerd[1483]: 2025-09-09 23:37:16.874 [INFO][2444] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.6.0/26 host="10.0.0.80" Sep 9 23:37:16.909625 containerd[1483]: 2025-09-09 23:37:16.874 [INFO][2444] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.6.0/26 handle="k8s-pod-network.3b51da6ca275e0f95ed7dbfc95d45a1777574ec5b1830de59a9c8be224b6256c" host="10.0.0.80" Sep 9 23:37:16.909625 containerd[1483]: 2025-09-09 23:37:16.876 [INFO][2444] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.3b51da6ca275e0f95ed7dbfc95d45a1777574ec5b1830de59a9c8be224b6256c Sep 9 23:37:16.909625 containerd[1483]: 2025-09-09 23:37:16.879 [INFO][2444] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.6.0/26 handle="k8s-pod-network.3b51da6ca275e0f95ed7dbfc95d45a1777574ec5b1830de59a9c8be224b6256c" host="10.0.0.80" Sep 9 23:37:16.909625 containerd[1483]: 2025-09-09 23:37:16.886 [INFO][2444] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.6.1/26] block=192.168.6.0/26 handle="k8s-pod-network.3b51da6ca275e0f95ed7dbfc95d45a1777574ec5b1830de59a9c8be224b6256c" host="10.0.0.80" Sep 9 23:37:16.909625 containerd[1483]: 2025-09-09 23:37:16.886 [INFO][2444] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.6.1/26] handle="k8s-pod-network.3b51da6ca275e0f95ed7dbfc95d45a1777574ec5b1830de59a9c8be224b6256c" host="10.0.0.80" Sep 9 23:37:16.909625 containerd[1483]: 2025-09-09 23:37:16.886 [INFO][2444] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
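Annotation: the pod_startup_latency_tracker entries above appear to be derived values rather than direct measurements: the end-to-end figure lines up with watchObservedRunningTime minus podCreationTimestamp, and subtracting the image-pull window (lastFinishedPulling minus firstStartedPulling) gives the SLO figure. Reproducing the calico-node-8tdsn numbers from the wall-clock values in this log (monotonic "m=+..." suffixes dropped); the result matches the logged values up to rounding:

package main

import (
	"fmt"
	"time"
)

// layout matches Go's default time.Time.String() format used in the kubelet log lines.
const layout = "2006-01-02 15:04:05.999999999 -0700 MST"

func mustParse(s string) time.Time {
	t, err := time.Parse(layout, s)
	if err != nil {
		panic(err)
	}
	return t
}

func main() {
	// Values copied from the calico-node-8tdsn startup-duration entry above.
	created := mustParse("2025-09-09 23:37:05 +0000 UTC")
	firstPull := mustParse("2025-09-09 23:37:07.643603897 +0000 UTC")
	lastPull := mustParse("2025-09-09 23:37:15.655411062 +0000 UTC")
	observed := mustParse("2025-09-09 23:37:16.908106345 +0000 UTC")

	e2e := observed.Sub(created)         // podStartE2EDuration: 11.908106345s
	slo := e2e - lastPull.Sub(firstPull) // podStartSLOduration: E2E minus the image-pull window
	fmt.Println("podStartE2EDuration:", e2e)
	fmt.Println("podStartSLOduration:", slo) // ~3.896299s, vs the logged 3.89629922s
}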
Sep 9 23:37:16.909625 containerd[1483]: 2025-09-09 23:37:16.886 [INFO][2444] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.6.1/26] IPv6=[] ContainerID="3b51da6ca275e0f95ed7dbfc95d45a1777574ec5b1830de59a9c8be224b6256c" HandleID="k8s-pod-network.3b51da6ca275e0f95ed7dbfc95d45a1777574ec5b1830de59a9c8be224b6256c" Workload="10.0.0.80-k8s-csi--node--driver--nhggd-eth0" Sep 9 23:37:16.910253 containerd[1483]: 2025-09-09 23:37:16.889 [INFO][2429] cni-plugin/k8s.go 418: Populated endpoint ContainerID="3b51da6ca275e0f95ed7dbfc95d45a1777574ec5b1830de59a9c8be224b6256c" Namespace="calico-system" Pod="csi-node-driver-nhggd" WorkloadEndpoint="10.0.0.80-k8s-csi--node--driver--nhggd-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"10.0.0.80-k8s-csi--node--driver--nhggd-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"efe790cf-787d-4a22-b1c4-a59cfa68b55a", ResourceVersion:"1144", Generation:0, CreationTimestamp:time.Date(2025, time.September, 9, 23, 37, 5, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"6c96d95cc7", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"10.0.0.80", ContainerID:"", Pod:"csi-node-driver-nhggd", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.6.1/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali1792b50847e", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 9 23:37:16.910253 containerd[1483]: 2025-09-09 23:37:16.889 [INFO][2429] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.6.1/32] ContainerID="3b51da6ca275e0f95ed7dbfc95d45a1777574ec5b1830de59a9c8be224b6256c" Namespace="calico-system" Pod="csi-node-driver-nhggd" WorkloadEndpoint="10.0.0.80-k8s-csi--node--driver--nhggd-eth0" Sep 9 23:37:16.910253 containerd[1483]: 2025-09-09 23:37:16.889 [INFO][2429] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali1792b50847e ContainerID="3b51da6ca275e0f95ed7dbfc95d45a1777574ec5b1830de59a9c8be224b6256c" Namespace="calico-system" Pod="csi-node-driver-nhggd" WorkloadEndpoint="10.0.0.80-k8s-csi--node--driver--nhggd-eth0" Sep 9 23:37:16.910253 containerd[1483]: 2025-09-09 23:37:16.895 [INFO][2429] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="3b51da6ca275e0f95ed7dbfc95d45a1777574ec5b1830de59a9c8be224b6256c" Namespace="calico-system" Pod="csi-node-driver-nhggd" WorkloadEndpoint="10.0.0.80-k8s-csi--node--driver--nhggd-eth0" Sep 9 23:37:16.910253 containerd[1483]: 2025-09-09 23:37:16.895 [INFO][2429] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="3b51da6ca275e0f95ed7dbfc95d45a1777574ec5b1830de59a9c8be224b6256c" Namespace="calico-system" Pod="csi-node-driver-nhggd" 
WorkloadEndpoint="10.0.0.80-k8s-csi--node--driver--nhggd-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"10.0.0.80-k8s-csi--node--driver--nhggd-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"efe790cf-787d-4a22-b1c4-a59cfa68b55a", ResourceVersion:"1144", Generation:0, CreationTimestamp:time.Date(2025, time.September, 9, 23, 37, 5, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"6c96d95cc7", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"10.0.0.80", ContainerID:"3b51da6ca275e0f95ed7dbfc95d45a1777574ec5b1830de59a9c8be224b6256c", Pod:"csi-node-driver-nhggd", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.6.1/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali1792b50847e", MAC:"ae:f3:06:83:16:0f", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 9 23:37:16.910253 containerd[1483]: 2025-09-09 23:37:16.907 [INFO][2429] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="3b51da6ca275e0f95ed7dbfc95d45a1777574ec5b1830de59a9c8be224b6256c" Namespace="calico-system" Pod="csi-node-driver-nhggd" WorkloadEndpoint="10.0.0.80-k8s-csi--node--driver--nhggd-eth0" Sep 9 23:37:16.923154 containerd[1483]: time="2025-09-09T23:37:16.923034975Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 9 23:37:16.923154 containerd[1483]: time="2025-09-09T23:37:16.923107154Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 9 23:37:16.923154 containerd[1483]: time="2025-09-09T23:37:16.923119523Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 9 23:37:16.923346 containerd[1483]: time="2025-09-09T23:37:16.923190186Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 9 23:37:16.944325 systemd[1]: Started cri-containerd-3b51da6ca275e0f95ed7dbfc95d45a1777574ec5b1830de59a9c8be224b6256c.scope - libcontainer container 3b51da6ca275e0f95ed7dbfc95d45a1777574ec5b1830de59a9c8be224b6256c. 
Sep 9 23:37:16.954197 systemd-resolved[1315]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Sep 9 23:37:16.963521 containerd[1483]: time="2025-09-09T23:37:16.963480008Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-nhggd,Uid:efe790cf-787d-4a22-b1c4-a59cfa68b55a,Namespace:calico-system,Attempt:3,} returns sandbox id \"3b51da6ca275e0f95ed7dbfc95d45a1777574ec5b1830de59a9c8be224b6256c\"" Sep 9 23:37:16.965234 containerd[1483]: time="2025-09-09T23:37:16.965154243Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.3\"" Sep 9 23:37:17.261142 kernel: bpftool[2634]: memfd_create() called without MFD_EXEC or MFD_NOEXEC_SEAL set Sep 9 23:37:17.411800 systemd-networkd[1394]: vxlan.calico: Link UP Sep 9 23:37:17.411808 systemd-networkd[1394]: vxlan.calico: Gained carrier Sep 9 23:37:17.518907 systemd[1]: Created slice kubepods-besteffort-pod62510a5a_e3d1_4606_b730_e8efb75e993c.slice - libcontainer container kubepods-besteffort-pod62510a5a_e3d1_4606_b730_e8efb75e993c.slice. Sep 9 23:37:17.574523 kubelet[1793]: E0909 23:37:17.574460 1793 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Sep 9 23:37:17.646937 kubelet[1793]: I0909 23:37:17.646784 1793 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vvfq9\" (UniqueName: \"kubernetes.io/projected/62510a5a-e3d1-4606-b730-e8efb75e993c-kube-api-access-vvfq9\") pod \"nginx-deployment-7fcdb87857-dd9k6\" (UID: \"62510a5a-e3d1-4606-b730-e8efb75e993c\") " pod="default/nginx-deployment-7fcdb87857-dd9k6" Sep 9 23:37:17.823196 containerd[1483]: time="2025-09-09T23:37:17.823043877Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-7fcdb87857-dd9k6,Uid:62510a5a-e3d1-4606-b730-e8efb75e993c,Namespace:default,Attempt:0,}" Sep 9 23:37:17.972530 containerd[1483]: time="2025-09-09T23:37:17.972471835Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 9 23:37:17.973233 containerd[1483]: time="2025-09-09T23:37:17.973185986Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.3: active requests=0, bytes read=8227489" Sep 9 23:37:17.974501 containerd[1483]: time="2025-09-09T23:37:17.974466174Z" level=info msg="ImageCreate event name:\"sha256:5e2b30128ce4b607acd97d3edef62ce1a90be0259903090a51c360adbe4a8f3b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 9 23:37:17.978874 containerd[1483]: time="2025-09-09T23:37:17.978819252Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi@sha256:f22c88018d8b58c4ef0052f594b216a13bd6852166ac131a538c5ab2fba23bb2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 9 23:37:17.979384 containerd[1483]: time="2025-09-09T23:37:17.979338671Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/csi:v3.30.3\" with image id \"sha256:5e2b30128ce4b607acd97d3edef62ce1a90be0259903090a51c360adbe4a8f3b\", repo tag \"ghcr.io/flatcar/calico/csi:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/csi@sha256:f22c88018d8b58c4ef0052f594b216a13bd6852166ac131a538c5ab2fba23bb2\", size \"9596730\" in 1.013954841s" Sep 9 23:37:17.979384 containerd[1483]: time="2025-09-09T23:37:17.979378384Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.3\" returns image reference \"sha256:5e2b30128ce4b607acd97d3edef62ce1a90be0259903090a51c360adbe4a8f3b\"" Sep 9 
23:37:17.984026 containerd[1483]: time="2025-09-09T23:37:17.983768500Z" level=info msg="CreateContainer within sandbox \"3b51da6ca275e0f95ed7dbfc95d45a1777574ec5b1830de59a9c8be224b6256c\" for container &ContainerMetadata{Name:calico-csi,Attempt:0,}" Sep 9 23:37:17.997847 systemd-networkd[1394]: cali4c4e55f8f51: Link UP Sep 9 23:37:17.998045 systemd-networkd[1394]: cali4c4e55f8f51: Gained carrier Sep 9 23:37:18.001002 containerd[1483]: time="2025-09-09T23:37:18.000950397Z" level=info msg="CreateContainer within sandbox \"3b51da6ca275e0f95ed7dbfc95d45a1777574ec5b1830de59a9c8be224b6256c\" for &ContainerMetadata{Name:calico-csi,Attempt:0,} returns container id \"f75b4f4b9582017967f484fe502b6cfeb0bc95e75e4c44517f96004389b06a67\"" Sep 9 23:37:18.002158 containerd[1483]: time="2025-09-09T23:37:18.001722197Z" level=info msg="StartContainer for \"f75b4f4b9582017967f484fe502b6cfeb0bc95e75e4c44517f96004389b06a67\"" Sep 9 23:37:18.009544 containerd[1483]: 2025-09-09 23:37:17.916 [INFO][2710] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {10.0.0.80-k8s-nginx--deployment--7fcdb87857--dd9k6-eth0 nginx-deployment-7fcdb87857- default 62510a5a-e3d1-4606-b730-e8efb75e993c 1284 0 2025-09-09 23:37:17 +0000 UTC map[app:nginx pod-template-hash:7fcdb87857 projectcalico.org/namespace:default projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:default] map[] [] [] []} {k8s 10.0.0.80 nginx-deployment-7fcdb87857-dd9k6 eth0 default [] [] [kns.default ksa.default.default] cali4c4e55f8f51 [] [] }} ContainerID="03be745d618cebf7e3f2847857d972f0b91cfa456358408550f723f0c77ffcf4" Namespace="default" Pod="nginx-deployment-7fcdb87857-dd9k6" WorkloadEndpoint="10.0.0.80-k8s-nginx--deployment--7fcdb87857--dd9k6-" Sep 9 23:37:18.009544 containerd[1483]: 2025-09-09 23:37:17.916 [INFO][2710] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="03be745d618cebf7e3f2847857d972f0b91cfa456358408550f723f0c77ffcf4" Namespace="default" Pod="nginx-deployment-7fcdb87857-dd9k6" WorkloadEndpoint="10.0.0.80-k8s-nginx--deployment--7fcdb87857--dd9k6-eth0" Sep 9 23:37:18.009544 containerd[1483]: 2025-09-09 23:37:17.950 [INFO][2725] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="03be745d618cebf7e3f2847857d972f0b91cfa456358408550f723f0c77ffcf4" HandleID="k8s-pod-network.03be745d618cebf7e3f2847857d972f0b91cfa456358408550f723f0c77ffcf4" Workload="10.0.0.80-k8s-nginx--deployment--7fcdb87857--dd9k6-eth0" Sep 9 23:37:18.009544 containerd[1483]: 2025-09-09 23:37:17.950 [INFO][2725] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="03be745d618cebf7e3f2847857d972f0b91cfa456358408550f723f0c77ffcf4" HandleID="k8s-pod-network.03be745d618cebf7e3f2847857d972f0b91cfa456358408550f723f0c77ffcf4" Workload="10.0.0.80-k8s-nginx--deployment--7fcdb87857--dd9k6-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x4000323490), Attrs:map[string]string{"namespace":"default", "node":"10.0.0.80", "pod":"nginx-deployment-7fcdb87857-dd9k6", "timestamp":"2025-09-09 23:37:17.950628976 +0000 UTC"}, Hostname:"10.0.0.80", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 9 23:37:18.009544 containerd[1483]: 2025-09-09 23:37:17.950 [INFO][2725] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. 
Sep 9 23:37:18.009544 containerd[1483]: 2025-09-09 23:37:17.950 [INFO][2725] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 9 23:37:18.009544 containerd[1483]: 2025-09-09 23:37:17.950 [INFO][2725] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host '10.0.0.80' Sep 9 23:37:18.009544 containerd[1483]: 2025-09-09 23:37:17.961 [INFO][2725] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.03be745d618cebf7e3f2847857d972f0b91cfa456358408550f723f0c77ffcf4" host="10.0.0.80" Sep 9 23:37:18.009544 containerd[1483]: 2025-09-09 23:37:17.968 [INFO][2725] ipam/ipam.go 394: Looking up existing affinities for host host="10.0.0.80" Sep 9 23:37:18.009544 containerd[1483]: 2025-09-09 23:37:17.973 [INFO][2725] ipam/ipam.go 511: Trying affinity for 192.168.6.0/26 host="10.0.0.80" Sep 9 23:37:18.009544 containerd[1483]: 2025-09-09 23:37:17.975 [INFO][2725] ipam/ipam.go 158: Attempting to load block cidr=192.168.6.0/26 host="10.0.0.80" Sep 9 23:37:18.009544 containerd[1483]: 2025-09-09 23:37:17.978 [INFO][2725] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.6.0/26 host="10.0.0.80" Sep 9 23:37:18.009544 containerd[1483]: 2025-09-09 23:37:17.978 [INFO][2725] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.6.0/26 handle="k8s-pod-network.03be745d618cebf7e3f2847857d972f0b91cfa456358408550f723f0c77ffcf4" host="10.0.0.80" Sep 9 23:37:18.009544 containerd[1483]: 2025-09-09 23:37:17.980 [INFO][2725] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.03be745d618cebf7e3f2847857d972f0b91cfa456358408550f723f0c77ffcf4 Sep 9 23:37:18.009544 containerd[1483]: 2025-09-09 23:37:17.985 [INFO][2725] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.6.0/26 handle="k8s-pod-network.03be745d618cebf7e3f2847857d972f0b91cfa456358408550f723f0c77ffcf4" host="10.0.0.80" Sep 9 23:37:18.009544 containerd[1483]: 2025-09-09 23:37:17.990 [INFO][2725] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.6.2/26] block=192.168.6.0/26 handle="k8s-pod-network.03be745d618cebf7e3f2847857d972f0b91cfa456358408550f723f0c77ffcf4" host="10.0.0.80" Sep 9 23:37:18.009544 containerd[1483]: 2025-09-09 23:37:17.990 [INFO][2725] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.6.2/26] handle="k8s-pod-network.03be745d618cebf7e3f2847857d972f0b91cfa456358408550f723f0c77ffcf4" host="10.0.0.80" Sep 9 23:37:18.009544 containerd[1483]: 2025-09-09 23:37:17.990 [INFO][2725] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Sep 9 23:37:18.009544 containerd[1483]: 2025-09-09 23:37:17.990 [INFO][2725] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.6.2/26] IPv6=[] ContainerID="03be745d618cebf7e3f2847857d972f0b91cfa456358408550f723f0c77ffcf4" HandleID="k8s-pod-network.03be745d618cebf7e3f2847857d972f0b91cfa456358408550f723f0c77ffcf4" Workload="10.0.0.80-k8s-nginx--deployment--7fcdb87857--dd9k6-eth0" Sep 9 23:37:18.010019 containerd[1483]: 2025-09-09 23:37:17.995 [INFO][2710] cni-plugin/k8s.go 418: Populated endpoint ContainerID="03be745d618cebf7e3f2847857d972f0b91cfa456358408550f723f0c77ffcf4" Namespace="default" Pod="nginx-deployment-7fcdb87857-dd9k6" WorkloadEndpoint="10.0.0.80-k8s-nginx--deployment--7fcdb87857--dd9k6-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"10.0.0.80-k8s-nginx--deployment--7fcdb87857--dd9k6-eth0", GenerateName:"nginx-deployment-7fcdb87857-", Namespace:"default", SelfLink:"", UID:"62510a5a-e3d1-4606-b730-e8efb75e993c", ResourceVersion:"1284", Generation:0, CreationTimestamp:time.Date(2025, time.September, 9, 23, 37, 17, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app":"nginx", "pod-template-hash":"7fcdb87857", "projectcalico.org/namespace":"default", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"default"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"10.0.0.80", ContainerID:"", Pod:"nginx-deployment-7fcdb87857-dd9k6", Endpoint:"eth0", ServiceAccountName:"default", IPNetworks:[]string{"192.168.6.2/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.default", "ksa.default.default"}, InterfaceName:"cali4c4e55f8f51", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 9 23:37:18.010019 containerd[1483]: 2025-09-09 23:37:17.995 [INFO][2710] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.6.2/32] ContainerID="03be745d618cebf7e3f2847857d972f0b91cfa456358408550f723f0c77ffcf4" Namespace="default" Pod="nginx-deployment-7fcdb87857-dd9k6" WorkloadEndpoint="10.0.0.80-k8s-nginx--deployment--7fcdb87857--dd9k6-eth0" Sep 9 23:37:18.010019 containerd[1483]: 2025-09-09 23:37:17.995 [INFO][2710] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali4c4e55f8f51 ContainerID="03be745d618cebf7e3f2847857d972f0b91cfa456358408550f723f0c77ffcf4" Namespace="default" Pod="nginx-deployment-7fcdb87857-dd9k6" WorkloadEndpoint="10.0.0.80-k8s-nginx--deployment--7fcdb87857--dd9k6-eth0" Sep 9 23:37:18.010019 containerd[1483]: 2025-09-09 23:37:17.997 [INFO][2710] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="03be745d618cebf7e3f2847857d972f0b91cfa456358408550f723f0c77ffcf4" Namespace="default" Pod="nginx-deployment-7fcdb87857-dd9k6" WorkloadEndpoint="10.0.0.80-k8s-nginx--deployment--7fcdb87857--dd9k6-eth0" Sep 9 23:37:18.010019 containerd[1483]: 2025-09-09 23:37:17.998 [INFO][2710] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="03be745d618cebf7e3f2847857d972f0b91cfa456358408550f723f0c77ffcf4" Namespace="default" Pod="nginx-deployment-7fcdb87857-dd9k6" WorkloadEndpoint="10.0.0.80-k8s-nginx--deployment--7fcdb87857--dd9k6-eth0" 
endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"10.0.0.80-k8s-nginx--deployment--7fcdb87857--dd9k6-eth0", GenerateName:"nginx-deployment-7fcdb87857-", Namespace:"default", SelfLink:"", UID:"62510a5a-e3d1-4606-b730-e8efb75e993c", ResourceVersion:"1284", Generation:0, CreationTimestamp:time.Date(2025, time.September, 9, 23, 37, 17, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app":"nginx", "pod-template-hash":"7fcdb87857", "projectcalico.org/namespace":"default", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"default"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"10.0.0.80", ContainerID:"03be745d618cebf7e3f2847857d972f0b91cfa456358408550f723f0c77ffcf4", Pod:"nginx-deployment-7fcdb87857-dd9k6", Endpoint:"eth0", ServiceAccountName:"default", IPNetworks:[]string{"192.168.6.2/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.default", "ksa.default.default"}, InterfaceName:"cali4c4e55f8f51", MAC:"92:99:26:19:38:9d", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 9 23:37:18.010019 containerd[1483]: 2025-09-09 23:37:18.007 [INFO][2710] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="03be745d618cebf7e3f2847857d972f0b91cfa456358408550f723f0c77ffcf4" Namespace="default" Pod="nginx-deployment-7fcdb87857-dd9k6" WorkloadEndpoint="10.0.0.80-k8s-nginx--deployment--7fcdb87857--dd9k6-eth0" Sep 9 23:37:18.031270 containerd[1483]: time="2025-09-09T23:37:18.031148173Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 9 23:37:18.031270 containerd[1483]: time="2025-09-09T23:37:18.031238240Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 9 23:37:18.031414 containerd[1483]: time="2025-09-09T23:37:18.031346512Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 9 23:37:18.031322 systemd[1]: Started cri-containerd-f75b4f4b9582017967f484fe502b6cfeb0bc95e75e4c44517f96004389b06a67.scope - libcontainer container f75b4f4b9582017967f484fe502b6cfeb0bc95e75e4c44517f96004389b06a67. Sep 9 23:37:18.031759 containerd[1483]: time="2025-09-09T23:37:18.031455383Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 9 23:37:18.052266 systemd[1]: Started cri-containerd-03be745d618cebf7e3f2847857d972f0b91cfa456358408550f723f0c77ffcf4.scope - libcontainer container 03be745d618cebf7e3f2847857d972f0b91cfa456358408550f723f0c77ffcf4. 
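The IPAM trace above shows Calico confirming its affinity for the 192.168.6.0/26 block on host 10.0.0.80 and then handing the nginx pod 192.168.6.2/32. A minimal sketch of that block arithmetic with Python's standard ipaddress module follows; it only illustrates the address math (sequential assignment within a /26), not Calico's actual allocator, and it assumes 192.168.6.1 was consumed by an earlier endpoint not shown in this excerpt.

```python
import ipaddress

# The affine block reported by ipam/ipam.go for host 10.0.0.80.
block = ipaddress.ip_network("192.168.6.0/26")
print(block.num_addresses)            # 64 addresses in a /26 block

# Addresses presumed in use at this point: .1 (earlier endpoint, assumption) and .2 (nginx pod, from the log).
assigned = {ipaddress.ip_address("192.168.6.1"), ipaddress.ip_address("192.168.6.2")}

def next_free(net, used):
    # Hypothetical helper: first host address in the block not yet handed out.
    return next(ip for ip in net.hosts() if ip not in used)

print(next_free(block, assigned))     # 192.168.6.3 -- the address the nfs-server-provisioner pod receives later in this log
```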
Sep 9 23:37:18.064273 systemd-resolved[1315]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Sep 9 23:37:18.072749 containerd[1483]: time="2025-09-09T23:37:18.072644077Z" level=info msg="StartContainer for \"f75b4f4b9582017967f484fe502b6cfeb0bc95e75e4c44517f96004389b06a67\" returns successfully" Sep 9 23:37:18.074660 containerd[1483]: time="2025-09-09T23:37:18.074528496Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.3\"" Sep 9 23:37:18.084831 containerd[1483]: time="2025-09-09T23:37:18.084797204Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-7fcdb87857-dd9k6,Uid:62510a5a-e3d1-4606-b730-e8efb75e993c,Namespace:default,Attempt:0,} returns sandbox id \"03be745d618cebf7e3f2847857d972f0b91cfa456358408550f723f0c77ffcf4\"" Sep 9 23:37:18.500254 systemd-networkd[1394]: cali1792b50847e: Gained IPv6LL Sep 9 23:37:18.564282 systemd-networkd[1394]: vxlan.calico: Gained IPv6LL Sep 9 23:37:18.575052 kubelet[1793]: E0909 23:37:18.575024 1793 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Sep 9 23:37:19.003057 containerd[1483]: time="2025-09-09T23:37:19.003003770Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 9 23:37:19.004061 containerd[1483]: time="2025-09-09T23:37:19.004002251Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.3: active requests=0, bytes read=13761208" Sep 9 23:37:19.006288 containerd[1483]: time="2025-09-09T23:37:19.006246479Z" level=info msg="ImageCreate event name:\"sha256:a319b5bdc1001e98875b68e2943279adb74bcb19d09f1db857bc27959a078a65\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 9 23:37:19.008464 containerd[1483]: time="2025-09-09T23:37:19.008413875Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar@sha256:731ab232ca708102ab332340b1274d5cd656aa896ecc5368ee95850b811df86f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 9 23:37:19.009327 containerd[1483]: time="2025-09-09T23:37:19.009291720Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.3\" with image id \"sha256:a319b5bdc1001e98875b68e2943279adb74bcb19d09f1db857bc27959a078a65\", repo tag \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/node-driver-registrar@sha256:731ab232ca708102ab332340b1274d5cd656aa896ecc5368ee95850b811df86f\", size \"15130401\" in 934.730126ms" Sep 9 23:37:19.009327 containerd[1483]: time="2025-09-09T23:37:19.009326541Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.3\" returns image reference \"sha256:a319b5bdc1001e98875b68e2943279adb74bcb19d09f1db857bc27959a078a65\"" Sep 9 23:37:19.010643 containerd[1483]: time="2025-09-09T23:37:19.010339678Z" level=info msg="PullImage \"ghcr.io/flatcar/nginx:latest\"" Sep 9 23:37:19.012891 containerd[1483]: time="2025-09-09T23:37:19.012860441Z" level=info msg="CreateContainer within sandbox \"3b51da6ca275e0f95ed7dbfc95d45a1777574ec5b1830de59a9c8be224b6256c\" for container &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,}" Sep 9 23:37:19.024373 containerd[1483]: time="2025-09-09T23:37:19.024326807Z" level=info msg="CreateContainer within sandbox \"3b51da6ca275e0f95ed7dbfc95d45a1777574ec5b1830de59a9c8be224b6256c\" for 
&ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,} returns container id \"fc19dd0e604d557a4bae8dceae2df6e7c2a0aa22c3d550527526a58099229f79\"" Sep 9 23:37:19.025170 containerd[1483]: time="2025-09-09T23:37:19.025123907Z" level=info msg="StartContainer for \"fc19dd0e604d557a4bae8dceae2df6e7c2a0aa22c3d550527526a58099229f79\"" Sep 9 23:37:19.056275 systemd[1]: Started cri-containerd-fc19dd0e604d557a4bae8dceae2df6e7c2a0aa22c3d550527526a58099229f79.scope - libcontainer container fc19dd0e604d557a4bae8dceae2df6e7c2a0aa22c3d550527526a58099229f79. Sep 9 23:37:19.080745 containerd[1483]: time="2025-09-09T23:37:19.080686988Z" level=info msg="StartContainer for \"fc19dd0e604d557a4bae8dceae2df6e7c2a0aa22c3d550527526a58099229f79\" returns successfully" Sep 9 23:37:19.575962 kubelet[1793]: E0909 23:37:19.575914 1793 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Sep 9 23:37:19.652681 systemd-networkd[1394]: cali4c4e55f8f51: Gained IPv6LL Sep 9 23:37:19.710297 kubelet[1793]: I0909 23:37:19.710251 1793 csi_plugin.go:106] kubernetes.io/csi: Trying to validate a new CSI Driver with name: csi.tigera.io endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock versions: 1.0.0 Sep 9 23:37:19.710297 kubelet[1793]: I0909 23:37:19.710284 1793 csi_plugin.go:119] kubernetes.io/csi: Register new plugin with name: csi.tigera.io at endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock Sep 9 23:37:19.787835 kubelet[1793]: I0909 23:37:19.787540 1793 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/csi-node-driver-nhggd" podStartSLOduration=12.742009287 podStartE2EDuration="14.787524749s" podCreationTimestamp="2025-09-09 23:37:05 +0000 UTC" firstStartedPulling="2025-09-09 23:37:16.964698907 +0000 UTC m=+11.958348256" lastFinishedPulling="2025-09-09 23:37:19.010214369 +0000 UTC m=+14.003863718" observedRunningTime="2025-09-09 23:37:19.787132209 +0000 UTC m=+14.780781598" watchObservedRunningTime="2025-09-09 23:37:19.787524749 +0000 UTC m=+14.781174098" Sep 9 23:37:20.576751 kubelet[1793]: E0909 23:37:20.576687 1793 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Sep 9 23:37:21.577553 kubelet[1793]: E0909 23:37:21.577495 1793 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Sep 9 23:37:21.835225 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount800319261.mount: Deactivated successfully. 
Sep 9 23:37:22.578078 kubelet[1793]: E0909 23:37:22.578032 1793 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Sep 9 23:37:22.702431 containerd[1483]: time="2025-09-09T23:37:22.702378835Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/nginx:latest\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 9 23:37:22.703003 containerd[1483]: time="2025-09-09T23:37:22.702952829Z" level=info msg="stop pulling image ghcr.io/flatcar/nginx:latest: active requests=0, bytes read=69986522" Sep 9 23:37:22.703925 containerd[1483]: time="2025-09-09T23:37:22.703886099Z" level=info msg="ImageCreate event name:\"sha256:9fddf21fd9c2634e7bf6e633e36b0fb227f6cd5fbe1b3334a16de3ab50f31e5e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 9 23:37:22.706563 containerd[1483]: time="2025-09-09T23:37:22.706536596Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/nginx@sha256:883ca821a91fc20bcde818eeee4e1ed55ef63a020d6198ecd5a03af5a4eac530\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 9 23:37:22.707855 containerd[1483]: time="2025-09-09T23:37:22.707830060Z" level=info msg="Pulled image \"ghcr.io/flatcar/nginx:latest\" with image id \"sha256:9fddf21fd9c2634e7bf6e633e36b0fb227f6cd5fbe1b3334a16de3ab50f31e5e\", repo tag \"ghcr.io/flatcar/nginx:latest\", repo digest \"ghcr.io/flatcar/nginx@sha256:883ca821a91fc20bcde818eeee4e1ed55ef63a020d6198ecd5a03af5a4eac530\", size \"69986400\" in 3.697456593s" Sep 9 23:37:22.707907 containerd[1483]: time="2025-09-09T23:37:22.707860106Z" level=info msg="PullImage \"ghcr.io/flatcar/nginx:latest\" returns image reference \"sha256:9fddf21fd9c2634e7bf6e633e36b0fb227f6cd5fbe1b3334a16de3ab50f31e5e\"" Sep 9 23:37:22.712527 containerd[1483]: time="2025-09-09T23:37:22.712491813Z" level=info msg="CreateContainer within sandbox \"03be745d618cebf7e3f2847857d972f0b91cfa456358408550f723f0c77ffcf4\" for container &ContainerMetadata{Name:nginx,Attempt:0,}" Sep 9 23:37:22.722580 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1422052732.mount: Deactivated successfully. Sep 9 23:37:22.724791 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1035336819.mount: Deactivated successfully. Sep 9 23:37:22.724940 containerd[1483]: time="2025-09-09T23:37:22.724769635Z" level=info msg="CreateContainer within sandbox \"03be745d618cebf7e3f2847857d972f0b91cfa456358408550f723f0c77ffcf4\" for &ContainerMetadata{Name:nginx,Attempt:0,} returns container id \"d17f635542694f3bfe7f7530056ed142d5acd146f56134afc935ff0b68db037c\"" Sep 9 23:37:22.725447 containerd[1483]: time="2025-09-09T23:37:22.725421581Z" level=info msg="StartContainer for \"d17f635542694f3bfe7f7530056ed142d5acd146f56134afc935ff0b68db037c\"" Sep 9 23:37:22.801270 systemd[1]: Started cri-containerd-d17f635542694f3bfe7f7530056ed142d5acd146f56134afc935ff0b68db037c.scope - libcontainer container d17f635542694f3bfe7f7530056ed142d5acd146f56134afc935ff0b68db037c. 
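The pull entries above report both the bytes containerd read ("stop pulling image ... bytes read=") and the elapsed time ("Pulled image ... in ..."), so a rough effective download rate can be derived straight from the log. This is only an approximation: "bytes read" is the compressed transfer, not the unpacked image size.

```python
# (image, bytes read, duration in seconds) copied from the pull entries in this log.
pulls = [
    ("calico/csi:v3.30.3",                   8_227_489,  1.013954841),
    ("calico/node-driver-registrar:v3.30.3", 13_761_208, 0.934730126),
    ("flatcar/nginx:latest",                 69_986_522, 3.697456593),
]

for image, nbytes, seconds in pulls:
    rate = nbytes / seconds / (1024 * 1024)   # MiB/s of compressed data
    print(f"{image}: {rate:.1f} MiB/s")
```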
Sep 9 23:37:22.826481 containerd[1483]: time="2025-09-09T23:37:22.826428300Z" level=info msg="StartContainer for \"d17f635542694f3bfe7f7530056ed142d5acd146f56134afc935ff0b68db037c\" returns successfully" Sep 9 23:37:23.579097 kubelet[1793]: E0909 23:37:23.579042 1793 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Sep 9 23:37:23.807328 kubelet[1793]: I0909 23:37:23.807228 1793 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="default/nginx-deployment-7fcdb87857-dd9k6" podStartSLOduration=2.184015928 podStartE2EDuration="6.807211104s" podCreationTimestamp="2025-09-09 23:37:17 +0000 UTC" firstStartedPulling="2025-09-09 23:37:18.085629804 +0000 UTC m=+13.079279153" lastFinishedPulling="2025-09-09 23:37:22.70882498 +0000 UTC m=+17.702474329" observedRunningTime="2025-09-09 23:37:23.807147247 +0000 UTC m=+18.800796676" watchObservedRunningTime="2025-09-09 23:37:23.807211104 +0000 UTC m=+18.800860453" Sep 9 23:37:24.579541 kubelet[1793]: E0909 23:37:24.579474 1793 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Sep 9 23:37:25.565812 kubelet[1793]: E0909 23:37:25.565738 1793 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Sep 9 23:37:25.580564 kubelet[1793]: E0909 23:37:25.580481 1793 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Sep 9 23:37:26.581210 kubelet[1793]: E0909 23:37:26.581160 1793 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Sep 9 23:37:27.581560 kubelet[1793]: E0909 23:37:27.581503 1793 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Sep 9 23:37:28.582214 kubelet[1793]: E0909 23:37:28.582162 1793 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Sep 9 23:37:29.582773 kubelet[1793]: E0909 23:37:29.582727 1793 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Sep 9 23:37:29.689885 systemd[1]: Created slice kubepods-besteffort-podd54b7e89_82ae_49d8_a9d3_0a6254c6c7be.slice - libcontainer container kubepods-besteffort-podd54b7e89_82ae_49d8_a9d3_0a6254c6c7be.slice. 
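Each "Observed pod startup duration" entry carries enough data to check the kubelet's arithmetic: podStartSLOduration is the end-to-end duration with the image-pull window subtracted, and the monotonic m=+ offsets quoted for firstStartedPulling and lastFinishedPulling reproduce it exactly. A small check using the nginx deployment pod's numbers from the entry above:

```python
# Values copied from the pod_startup_latency_tracker entry for nginx-deployment-7fcdb87857-dd9k6.
pod_start_e2e = 6.807211104            # podStartE2EDuration, seconds
first_started_pulling = 13.079279153   # monotonic offset (m=+...) of firstStartedPulling
last_finished_pulling = 17.702474329   # monotonic offset (m=+...) of lastFinishedPulling

pull_window = last_finished_pulling - first_started_pulling
slo_duration = pod_start_e2e - pull_window

print(f"{pull_window=:.9f}")           # 4.623195176
print(f"{slo_duration=:.9f}")          # 2.184015928 -- matches podStartSLOduration in the log
```

The same relation holds for the csi-node-driver-nhggd entry earlier (14.787524749 s end to end, a 2.045515462 s pull window, 12.742009287 s SLO duration).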
Sep 9 23:37:29.713031 kubelet[1793]: I0909 23:37:29.712912 1793 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"data\" (UniqueName: \"kubernetes.io/empty-dir/d54b7e89-82ae-49d8-a9d3-0a6254c6c7be-data\") pod \"nfs-server-provisioner-0\" (UID: \"d54b7e89-82ae-49d8-a9d3-0a6254c6c7be\") " pod="default/nfs-server-provisioner-0" Sep 9 23:37:29.713031 kubelet[1793]: I0909 23:37:29.712993 1793 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nxnmn\" (UniqueName: \"kubernetes.io/projected/d54b7e89-82ae-49d8-a9d3-0a6254c6c7be-kube-api-access-nxnmn\") pod \"nfs-server-provisioner-0\" (UID: \"d54b7e89-82ae-49d8-a9d3-0a6254c6c7be\") " pod="default/nfs-server-provisioner-0" Sep 9 23:37:29.992860 containerd[1483]: time="2025-09-09T23:37:29.992799871Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nfs-server-provisioner-0,Uid:d54b7e89-82ae-49d8-a9d3-0a6254c6c7be,Namespace:default,Attempt:0,}" Sep 9 23:37:30.160419 systemd-networkd[1394]: cali60e51b789ff: Link UP Sep 9 23:37:30.160715 systemd-networkd[1394]: cali60e51b789ff: Gained carrier Sep 9 23:37:30.174181 containerd[1483]: 2025-09-09 23:37:30.082 [INFO][2986] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {10.0.0.80-k8s-nfs--server--provisioner--0-eth0 nfs-server-provisioner- default d54b7e89-82ae-49d8-a9d3-0a6254c6c7be 1368 0 2025-09-09 23:37:29 +0000 UTC map[app:nfs-server-provisioner apps.kubernetes.io/pod-index:0 chart:nfs-server-provisioner-1.8.0 controller-revision-hash:nfs-server-provisioner-d5cbb7f57 heritage:Helm projectcalico.org/namespace:default projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:nfs-server-provisioner release:nfs-server-provisioner statefulset.kubernetes.io/pod-name:nfs-server-provisioner-0] map[] [] [] []} {k8s 10.0.0.80 nfs-server-provisioner-0 eth0 nfs-server-provisioner [] [] [kns.default ksa.default.nfs-server-provisioner] cali60e51b789ff [{nfs TCP 2049 0 } {nfs-udp UDP 2049 0 } {nlockmgr TCP 32803 0 } {nlockmgr-udp UDP 32803 0 } {mountd TCP 20048 0 } {mountd-udp UDP 20048 0 } {rquotad TCP 875 0 } {rquotad-udp UDP 875 0 } {rpcbind TCP 111 0 } {rpcbind-udp UDP 111 0 } {statd TCP 662 0 } {statd-udp UDP 662 0 }] [] }} ContainerID="35d1dd99990ecf0d513e3f427b917503f791c426bb35116fdcfc4d40dd7c87ef" Namespace="default" Pod="nfs-server-provisioner-0" WorkloadEndpoint="10.0.0.80-k8s-nfs--server--provisioner--0-" Sep 9 23:37:30.174181 containerd[1483]: 2025-09-09 23:37:30.082 [INFO][2986] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="35d1dd99990ecf0d513e3f427b917503f791c426bb35116fdcfc4d40dd7c87ef" Namespace="default" Pod="nfs-server-provisioner-0" WorkloadEndpoint="10.0.0.80-k8s-nfs--server--provisioner--0-eth0" Sep 9 23:37:30.174181 containerd[1483]: 2025-09-09 23:37:30.111 [INFO][2994] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="35d1dd99990ecf0d513e3f427b917503f791c426bb35116fdcfc4d40dd7c87ef" HandleID="k8s-pod-network.35d1dd99990ecf0d513e3f427b917503f791c426bb35116fdcfc4d40dd7c87ef" Workload="10.0.0.80-k8s-nfs--server--provisioner--0-eth0" Sep 9 23:37:30.174181 containerd[1483]: 2025-09-09 23:37:30.111 [INFO][2994] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="35d1dd99990ecf0d513e3f427b917503f791c426bb35116fdcfc4d40dd7c87ef" HandleID="k8s-pod-network.35d1dd99990ecf0d513e3f427b917503f791c426bb35116fdcfc4d40dd7c87ef" 
Workload="10.0.0.80-k8s-nfs--server--provisioner--0-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x400004c570), Attrs:map[string]string{"namespace":"default", "node":"10.0.0.80", "pod":"nfs-server-provisioner-0", "timestamp":"2025-09-09 23:37:30.111493323 +0000 UTC"}, Hostname:"10.0.0.80", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 9 23:37:30.174181 containerd[1483]: 2025-09-09 23:37:30.111 [INFO][2994] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 9 23:37:30.174181 containerd[1483]: 2025-09-09 23:37:30.111 [INFO][2994] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 9 23:37:30.174181 containerd[1483]: 2025-09-09 23:37:30.111 [INFO][2994] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host '10.0.0.80' Sep 9 23:37:30.174181 containerd[1483]: 2025-09-09 23:37:30.123 [INFO][2994] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.35d1dd99990ecf0d513e3f427b917503f791c426bb35116fdcfc4d40dd7c87ef" host="10.0.0.80" Sep 9 23:37:30.174181 containerd[1483]: 2025-09-09 23:37:30.129 [INFO][2994] ipam/ipam.go 394: Looking up existing affinities for host host="10.0.0.80" Sep 9 23:37:30.174181 containerd[1483]: 2025-09-09 23:37:30.135 [INFO][2994] ipam/ipam.go 511: Trying affinity for 192.168.6.0/26 host="10.0.0.80" Sep 9 23:37:30.174181 containerd[1483]: 2025-09-09 23:37:30.140 [INFO][2994] ipam/ipam.go 158: Attempting to load block cidr=192.168.6.0/26 host="10.0.0.80" Sep 9 23:37:30.174181 containerd[1483]: 2025-09-09 23:37:30.143 [INFO][2994] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.6.0/26 host="10.0.0.80" Sep 9 23:37:30.174181 containerd[1483]: 2025-09-09 23:37:30.143 [INFO][2994] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.6.0/26 handle="k8s-pod-network.35d1dd99990ecf0d513e3f427b917503f791c426bb35116fdcfc4d40dd7c87ef" host="10.0.0.80" Sep 9 23:37:30.174181 containerd[1483]: 2025-09-09 23:37:30.145 [INFO][2994] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.35d1dd99990ecf0d513e3f427b917503f791c426bb35116fdcfc4d40dd7c87ef Sep 9 23:37:30.174181 containerd[1483]: 2025-09-09 23:37:30.150 [INFO][2994] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.6.0/26 handle="k8s-pod-network.35d1dd99990ecf0d513e3f427b917503f791c426bb35116fdcfc4d40dd7c87ef" host="10.0.0.80" Sep 9 23:37:30.174181 containerd[1483]: 2025-09-09 23:37:30.155 [INFO][2994] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.6.3/26] block=192.168.6.0/26 handle="k8s-pod-network.35d1dd99990ecf0d513e3f427b917503f791c426bb35116fdcfc4d40dd7c87ef" host="10.0.0.80" Sep 9 23:37:30.174181 containerd[1483]: 2025-09-09 23:37:30.155 [INFO][2994] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.6.3/26] handle="k8s-pod-network.35d1dd99990ecf0d513e3f427b917503f791c426bb35116fdcfc4d40dd7c87ef" host="10.0.0.80" Sep 9 23:37:30.174181 containerd[1483]: 2025-09-09 23:37:30.155 [INFO][2994] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Sep 9 23:37:30.174181 containerd[1483]: 2025-09-09 23:37:30.155 [INFO][2994] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.6.3/26] IPv6=[] ContainerID="35d1dd99990ecf0d513e3f427b917503f791c426bb35116fdcfc4d40dd7c87ef" HandleID="k8s-pod-network.35d1dd99990ecf0d513e3f427b917503f791c426bb35116fdcfc4d40dd7c87ef" Workload="10.0.0.80-k8s-nfs--server--provisioner--0-eth0" Sep 9 23:37:30.174745 containerd[1483]: 2025-09-09 23:37:30.157 [INFO][2986] cni-plugin/k8s.go 418: Populated endpoint ContainerID="35d1dd99990ecf0d513e3f427b917503f791c426bb35116fdcfc4d40dd7c87ef" Namespace="default" Pod="nfs-server-provisioner-0" WorkloadEndpoint="10.0.0.80-k8s-nfs--server--provisioner--0-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"10.0.0.80-k8s-nfs--server--provisioner--0-eth0", GenerateName:"nfs-server-provisioner-", Namespace:"default", SelfLink:"", UID:"d54b7e89-82ae-49d8-a9d3-0a6254c6c7be", ResourceVersion:"1368", Generation:0, CreationTimestamp:time.Date(2025, time.September, 9, 23, 37, 29, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app":"nfs-server-provisioner", "apps.kubernetes.io/pod-index":"0", "chart":"nfs-server-provisioner-1.8.0", "controller-revision-hash":"nfs-server-provisioner-d5cbb7f57", "heritage":"Helm", "projectcalico.org/namespace":"default", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"nfs-server-provisioner", "release":"nfs-server-provisioner", "statefulset.kubernetes.io/pod-name":"nfs-server-provisioner-0"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"10.0.0.80", ContainerID:"", Pod:"nfs-server-provisioner-0", Endpoint:"eth0", ServiceAccountName:"nfs-server-provisioner", IPNetworks:[]string{"192.168.6.3/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.default", "ksa.default.nfs-server-provisioner"}, InterfaceName:"cali60e51b789ff", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"nfs", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x801, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"nfs-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x801, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"nlockmgr", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x8023, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"nlockmgr-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x8023, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"mountd", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x4e50, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"mountd-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x4e50, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"rquotad", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x36b, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"rquotad-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x36b, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"rpcbind", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x6f, HostPort:0x0, HostIP:""}, 
v3.WorkloadEndpointPort{Name:"rpcbind-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x6f, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"statd", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x296, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"statd-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x296, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 9 23:37:30.174745 containerd[1483]: 2025-09-09 23:37:30.157 [INFO][2986] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.6.3/32] ContainerID="35d1dd99990ecf0d513e3f427b917503f791c426bb35116fdcfc4d40dd7c87ef" Namespace="default" Pod="nfs-server-provisioner-0" WorkloadEndpoint="10.0.0.80-k8s-nfs--server--provisioner--0-eth0" Sep 9 23:37:30.174745 containerd[1483]: 2025-09-09 23:37:30.157 [INFO][2986] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali60e51b789ff ContainerID="35d1dd99990ecf0d513e3f427b917503f791c426bb35116fdcfc4d40dd7c87ef" Namespace="default" Pod="nfs-server-provisioner-0" WorkloadEndpoint="10.0.0.80-k8s-nfs--server--provisioner--0-eth0" Sep 9 23:37:30.174745 containerd[1483]: 2025-09-09 23:37:30.160 [INFO][2986] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="35d1dd99990ecf0d513e3f427b917503f791c426bb35116fdcfc4d40dd7c87ef" Namespace="default" Pod="nfs-server-provisioner-0" WorkloadEndpoint="10.0.0.80-k8s-nfs--server--provisioner--0-eth0" Sep 9 23:37:30.175023 containerd[1483]: 2025-09-09 23:37:30.160 [INFO][2986] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="35d1dd99990ecf0d513e3f427b917503f791c426bb35116fdcfc4d40dd7c87ef" Namespace="default" Pod="nfs-server-provisioner-0" WorkloadEndpoint="10.0.0.80-k8s-nfs--server--provisioner--0-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"10.0.0.80-k8s-nfs--server--provisioner--0-eth0", GenerateName:"nfs-server-provisioner-", Namespace:"default", SelfLink:"", UID:"d54b7e89-82ae-49d8-a9d3-0a6254c6c7be", ResourceVersion:"1368", Generation:0, CreationTimestamp:time.Date(2025, time.September, 9, 23, 37, 29, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app":"nfs-server-provisioner", "apps.kubernetes.io/pod-index":"0", "chart":"nfs-server-provisioner-1.8.0", "controller-revision-hash":"nfs-server-provisioner-d5cbb7f57", "heritage":"Helm", "projectcalico.org/namespace":"default", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"nfs-server-provisioner", "release":"nfs-server-provisioner", "statefulset.kubernetes.io/pod-name":"nfs-server-provisioner-0"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"10.0.0.80", ContainerID:"35d1dd99990ecf0d513e3f427b917503f791c426bb35116fdcfc4d40dd7c87ef", Pod:"nfs-server-provisioner-0", Endpoint:"eth0", ServiceAccountName:"nfs-server-provisioner", IPNetworks:[]string{"192.168.6.3/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.default", "ksa.default.nfs-server-provisioner"}, InterfaceName:"cali60e51b789ff", MAC:"8e:8d:96:15:53:3c", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"nfs", 
Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x801, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"nfs-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x801, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"nlockmgr", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x8023, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"nlockmgr-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x8023, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"mountd", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x4e50, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"mountd-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x4e50, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"rquotad", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x36b, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"rquotad-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x36b, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"rpcbind", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x6f, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"rpcbind-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x6f, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"statd", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x296, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"statd-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x296, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 9 23:37:30.175023 containerd[1483]: 2025-09-09 23:37:30.171 [INFO][2986] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="35d1dd99990ecf0d513e3f427b917503f791c426bb35116fdcfc4d40dd7c87ef" Namespace="default" Pod="nfs-server-provisioner-0" WorkloadEndpoint="10.0.0.80-k8s-nfs--server--provisioner--0-eth0" Sep 9 23:37:30.190850 containerd[1483]: time="2025-09-09T23:37:30.190624708Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 9 23:37:30.190850 containerd[1483]: time="2025-09-09T23:37:30.190686555Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 9 23:37:30.190850 containerd[1483]: time="2025-09-09T23:37:30.190719199Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 9 23:37:30.190850 containerd[1483]: time="2025-09-09T23:37:30.190803008Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 9 23:37:30.211312 systemd[1]: Started cri-containerd-35d1dd99990ecf0d513e3f427b917503f791c426bb35116fdcfc4d40dd7c87ef.scope - libcontainer container 35d1dd99990ecf0d513e3f427b917503f791c426bb35116fdcfc4d40dd7c87ef. 
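The v3.WorkloadEndpoint dump above prints the nfs-server-provisioner's container ports in hex (Port:0x801, 0x8023, ...), while the cni-plugin endpoint line earlier lists the same ports in decimal. Decoding the hex values recovers the familiar NFS service ports; a quick check:

```python
# Hex Port values from the WorkloadEndpointPort dump, keyed by the names used
# in the cni-plugin endpoint line (nfs 2049, nlockmgr 32803, mountd 20048, ...).
ports = {
    "nfs":      0x801,
    "nlockmgr": 0x8023,
    "mountd":   0x4e50,
    "rquotad":  0x36b,
    "rpcbind":  0x6f,
    "statd":    0x296,
}

for name, value in ports.items():
    print(f"{name:9s} {value:#06x} -> {value}")
# nfs 2049, nlockmgr 32803, mountd 20048, rquotad 875, rpcbind 111, statd 662
```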
Sep 9 23:37:30.227473 systemd-resolved[1315]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Sep 9 23:37:30.302108 containerd[1483]: time="2025-09-09T23:37:30.301983704Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nfs-server-provisioner-0,Uid:d54b7e89-82ae-49d8-a9d3-0a6254c6c7be,Namespace:default,Attempt:0,} returns sandbox id \"35d1dd99990ecf0d513e3f427b917503f791c426bb35116fdcfc4d40dd7c87ef\"" Sep 9 23:37:30.304915 containerd[1483]: time="2025-09-09T23:37:30.304690334Z" level=info msg="PullImage \"registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8\"" Sep 9 23:37:30.583845 kubelet[1793]: E0909 23:37:30.583712 1793 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Sep 9 23:37:31.300410 systemd-networkd[1394]: cali60e51b789ff: Gained IPv6LL Sep 9 23:37:31.585627 kubelet[1793]: E0909 23:37:31.585242 1793 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Sep 9 23:37:32.066505 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3179819389.mount: Deactivated successfully. Sep 9 23:37:32.585526 kubelet[1793]: E0909 23:37:32.585489 1793 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Sep 9 23:37:32.712743 kubelet[1793]: I0909 23:37:32.712693 1793 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Sep 9 23:37:32.749365 systemd[1]: run-containerd-runc-k8s.io-842a7df1391236621c863c42f56b9c207593195c8a0004fb3bc3387a7e64cf8f-runc.hRygy6.mount: Deactivated successfully. Sep 9 23:37:33.586561 kubelet[1793]: E0909 23:37:33.586524 1793 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Sep 9 23:37:33.616143 containerd[1483]: time="2025-09-09T23:37:33.615868940Z" level=info msg="ImageCreate event name:\"registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 9 23:37:33.616558 containerd[1483]: time="2025-09-09T23:37:33.616523083Z" level=info msg="stop pulling image registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8: active requests=0, bytes read=87373625" Sep 9 23:37:33.617580 containerd[1483]: time="2025-09-09T23:37:33.617541462Z" level=info msg="ImageCreate event name:\"sha256:5a42a519e0a8cf95c3c5f18f767c58c8c8b072aaea0a26e5e47a6f206c7df685\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 9 23:37:33.626290 containerd[1483]: time="2025-09-09T23:37:33.626179378Z" level=info msg="ImageCreate event name:\"registry.k8s.io/sig-storage/nfs-provisioner@sha256:c825f3d5e28bde099bd7a3daace28772d412c9157ad47fa752a9ad0baafc118d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 9 23:37:33.627427 containerd[1483]: time="2025-09-09T23:37:33.627289205Z" level=info msg="Pulled image \"registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8\" with image id \"sha256:5a42a519e0a8cf95c3c5f18f767c58c8c8b072aaea0a26e5e47a6f206c7df685\", repo tag \"registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8\", repo digest \"registry.k8s.io/sig-storage/nfs-provisioner@sha256:c825f3d5e28bde099bd7a3daace28772d412c9157ad47fa752a9ad0baafc118d\", size \"87371201\" in 3.322546225s" Sep 9 23:37:33.627427 containerd[1483]: time="2025-09-09T23:37:33.627320648Z" level=info msg="PullImage \"registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8\" returns image reference 
\"sha256:5a42a519e0a8cf95c3c5f18f767c58c8c8b072aaea0a26e5e47a6f206c7df685\"" Sep 9 23:37:33.648542 containerd[1483]: time="2025-09-09T23:37:33.648499778Z" level=info msg="CreateContainer within sandbox \"35d1dd99990ecf0d513e3f427b917503f791c426bb35116fdcfc4d40dd7c87ef\" for container &ContainerMetadata{Name:nfs-server-provisioner,Attempt:0,}" Sep 9 23:37:33.720171 containerd[1483]: time="2025-09-09T23:37:33.720080424Z" level=info msg="CreateContainer within sandbox \"35d1dd99990ecf0d513e3f427b917503f791c426bb35116fdcfc4d40dd7c87ef\" for &ContainerMetadata{Name:nfs-server-provisioner,Attempt:0,} returns container id \"c0ed80f333e1035dd91d7d33d777e234c2a5119a9c780117bcb051d8a95197a1\"" Sep 9 23:37:33.721010 containerd[1483]: time="2025-09-09T23:37:33.720844778Z" level=info msg="StartContainer for \"c0ed80f333e1035dd91d7d33d777e234c2a5119a9c780117bcb051d8a95197a1\"" Sep 9 23:37:33.768264 systemd[1]: Started cri-containerd-c0ed80f333e1035dd91d7d33d777e234c2a5119a9c780117bcb051d8a95197a1.scope - libcontainer container c0ed80f333e1035dd91d7d33d777e234c2a5119a9c780117bcb051d8a95197a1. Sep 9 23:37:33.792495 containerd[1483]: time="2025-09-09T23:37:33.792451986Z" level=info msg="StartContainer for \"c0ed80f333e1035dd91d7d33d777e234c2a5119a9c780117bcb051d8a95197a1\" returns successfully" Sep 9 23:37:33.835759 kubelet[1793]: I0909 23:37:33.835588 1793 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="default/nfs-server-provisioner-0" podStartSLOduration=1.5104926779999999 podStartE2EDuration="4.835572599s" podCreationTimestamp="2025-09-09 23:37:29 +0000 UTC" firstStartedPulling="2025-09-09 23:37:30.304187236 +0000 UTC m=+25.297836545" lastFinishedPulling="2025-09-09 23:37:33.629267117 +0000 UTC m=+28.622916466" observedRunningTime="2025-09-09 23:37:33.835192202 +0000 UTC m=+28.828841551" watchObservedRunningTime="2025-09-09 23:37:33.835572599 +0000 UTC m=+28.829221948" Sep 9 23:37:34.587149 kubelet[1793]: E0909 23:37:34.587095 1793 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Sep 9 23:37:35.588126 kubelet[1793]: E0909 23:37:35.588071 1793 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Sep 9 23:37:36.589104 kubelet[1793]: E0909 23:37:36.589043 1793 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Sep 9 23:37:37.589388 kubelet[1793]: E0909 23:37:37.589329 1793 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Sep 9 23:37:38.590170 kubelet[1793]: E0909 23:37:38.590122 1793 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Sep 9 23:37:39.055655 systemd[1]: Created slice kubepods-besteffort-pod51e7a6a5_90e5_4add_a726_a056b5205d59.slice - libcontainer container kubepods-besteffort-pod51e7a6a5_90e5_4add_a726_a056b5205d59.slice. 
Sep 9 23:37:39.070904 kubelet[1793]: I0909 23:37:39.070867 1793 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-facec9ef-7bb1-4313-915a-cf78a4bce850\" (UniqueName: \"kubernetes.io/nfs/51e7a6a5-90e5-4add-a726-a056b5205d59-pvc-facec9ef-7bb1-4313-915a-cf78a4bce850\") pod \"test-pod-1\" (UID: \"51e7a6a5-90e5-4add-a726-a056b5205d59\") " pod="default/test-pod-1" Sep 9 23:37:39.070904 kubelet[1793]: I0909 23:37:39.070914 1793 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-scphw\" (UniqueName: \"kubernetes.io/projected/51e7a6a5-90e5-4add-a726-a056b5205d59-kube-api-access-scphw\") pod \"test-pod-1\" (UID: \"51e7a6a5-90e5-4add-a726-a056b5205d59\") " pod="default/test-pod-1" Sep 9 23:37:39.196110 kernel: FS-Cache: Loaded Sep 9 23:37:39.220323 kernel: RPC: Registered named UNIX socket transport module. Sep 9 23:37:39.220431 kernel: RPC: Registered udp transport module. Sep 9 23:37:39.220450 kernel: RPC: Registered tcp transport module. Sep 9 23:37:39.220468 kernel: RPC: Registered tcp-with-tls transport module. Sep 9 23:37:39.222167 kernel: RPC: Registered tcp NFSv4.1 backchannel transport module. Sep 9 23:37:39.379114 kernel: NFS: Registering the id_resolver key type Sep 9 23:37:39.379227 kernel: Key type id_resolver registered Sep 9 23:37:39.379249 kernel: Key type id_legacy registered Sep 9 23:37:39.397237 nfsidmap[3223]: nss_getpwnam: name 'root@nfs-server-provisioner.default.svc.cluster.local' does not map into domain 'localdomain' Sep 9 23:37:39.399035 nfsidmap[3224]: nss_name_to_gid: name 'root@nfs-server-provisioner.default.svc.cluster.local' does not map into domain 'localdomain' Sep 9 23:37:39.590757 kubelet[1793]: E0909 23:37:39.590720 1793 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Sep 9 23:37:39.659964 containerd[1483]: time="2025-09-09T23:37:39.659534559Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:test-pod-1,Uid:51e7a6a5-90e5-4add-a726-a056b5205d59,Namespace:default,Attempt:0,}" Sep 9 23:37:39.792055 systemd-networkd[1394]: cali5ec59c6bf6e: Link UP Sep 9 23:37:39.792265 systemd-networkd[1394]: cali5ec59c6bf6e: Gained carrier Sep 9 23:37:39.802567 containerd[1483]: 2025-09-09 23:37:39.710 [INFO][3226] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {10.0.0.80-k8s-test--pod--1-eth0 default 51e7a6a5-90e5-4add-a726-a056b5205d59 1432 0 2025-09-09 23:37:30 +0000 UTC map[projectcalico.org/namespace:default projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:default] map[] [] [] []} {k8s 10.0.0.80 test-pod-1 eth0 default [] [] [kns.default ksa.default.default] cali5ec59c6bf6e [] [] }} ContainerID="de0d8246f83bef25c510bea2f32cac4897ac1de7424a572db4def033fd80c2a9" Namespace="default" Pod="test-pod-1" WorkloadEndpoint="10.0.0.80-k8s-test--pod--1-" Sep 9 23:37:39.802567 containerd[1483]: 2025-09-09 23:37:39.710 [INFO][3226] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="de0d8246f83bef25c510bea2f32cac4897ac1de7424a572db4def033fd80c2a9" Namespace="default" Pod="test-pod-1" WorkloadEndpoint="10.0.0.80-k8s-test--pod--1-eth0" Sep 9 23:37:39.802567 containerd[1483]: 2025-09-09 23:37:39.736 [INFO][3239] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="de0d8246f83bef25c510bea2f32cac4897ac1de7424a572db4def033fd80c2a9" 
HandleID="k8s-pod-network.de0d8246f83bef25c510bea2f32cac4897ac1de7424a572db4def033fd80c2a9" Workload="10.0.0.80-k8s-test--pod--1-eth0" Sep 9 23:37:39.802567 containerd[1483]: 2025-09-09 23:37:39.736 [INFO][3239] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="de0d8246f83bef25c510bea2f32cac4897ac1de7424a572db4def033fd80c2a9" HandleID="k8s-pod-network.de0d8246f83bef25c510bea2f32cac4897ac1de7424a572db4def033fd80c2a9" Workload="10.0.0.80-k8s-test--pod--1-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x4000137670), Attrs:map[string]string{"namespace":"default", "node":"10.0.0.80", "pod":"test-pod-1", "timestamp":"2025-09-09 23:37:39.736399821 +0000 UTC"}, Hostname:"10.0.0.80", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 9 23:37:39.802567 containerd[1483]: 2025-09-09 23:37:39.736 [INFO][3239] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 9 23:37:39.802567 containerd[1483]: 2025-09-09 23:37:39.736 [INFO][3239] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 9 23:37:39.802567 containerd[1483]: 2025-09-09 23:37:39.736 [INFO][3239] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host '10.0.0.80' Sep 9 23:37:39.802567 containerd[1483]: 2025-09-09 23:37:39.751 [INFO][3239] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.de0d8246f83bef25c510bea2f32cac4897ac1de7424a572db4def033fd80c2a9" host="10.0.0.80" Sep 9 23:37:39.802567 containerd[1483]: 2025-09-09 23:37:39.757 [INFO][3239] ipam/ipam.go 394: Looking up existing affinities for host host="10.0.0.80" Sep 9 23:37:39.802567 containerd[1483]: 2025-09-09 23:37:39.762 [INFO][3239] ipam/ipam.go 511: Trying affinity for 192.168.6.0/26 host="10.0.0.80" Sep 9 23:37:39.802567 containerd[1483]: 2025-09-09 23:37:39.764 [INFO][3239] ipam/ipam.go 158: Attempting to load block cidr=192.168.6.0/26 host="10.0.0.80" Sep 9 23:37:39.802567 containerd[1483]: 2025-09-09 23:37:39.768 [INFO][3239] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.6.0/26 host="10.0.0.80" Sep 9 23:37:39.802567 containerd[1483]: 2025-09-09 23:37:39.768 [INFO][3239] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.6.0/26 handle="k8s-pod-network.de0d8246f83bef25c510bea2f32cac4897ac1de7424a572db4def033fd80c2a9" host="10.0.0.80" Sep 9 23:37:39.802567 containerd[1483]: 2025-09-09 23:37:39.770 [INFO][3239] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.de0d8246f83bef25c510bea2f32cac4897ac1de7424a572db4def033fd80c2a9 Sep 9 23:37:39.802567 containerd[1483]: 2025-09-09 23:37:39.778 [INFO][3239] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.6.0/26 handle="k8s-pod-network.de0d8246f83bef25c510bea2f32cac4897ac1de7424a572db4def033fd80c2a9" host="10.0.0.80" Sep 9 23:37:39.802567 containerd[1483]: 2025-09-09 23:37:39.787 [INFO][3239] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.6.4/26] block=192.168.6.0/26 handle="k8s-pod-network.de0d8246f83bef25c510bea2f32cac4897ac1de7424a572db4def033fd80c2a9" host="10.0.0.80" Sep 9 23:37:39.802567 containerd[1483]: 2025-09-09 23:37:39.787 [INFO][3239] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.6.4/26] handle="k8s-pod-network.de0d8246f83bef25c510bea2f32cac4897ac1de7424a572db4def033fd80c2a9" host="10.0.0.80" Sep 9 23:37:39.802567 containerd[1483]: 2025-09-09 23:37:39.788 [INFO][3239] ipam/ipam_plugin.go 
374: Released host-wide IPAM lock. Sep 9 23:37:39.802567 containerd[1483]: 2025-09-09 23:37:39.788 [INFO][3239] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.6.4/26] IPv6=[] ContainerID="de0d8246f83bef25c510bea2f32cac4897ac1de7424a572db4def033fd80c2a9" HandleID="k8s-pod-network.de0d8246f83bef25c510bea2f32cac4897ac1de7424a572db4def033fd80c2a9" Workload="10.0.0.80-k8s-test--pod--1-eth0" Sep 9 23:37:39.802567 containerd[1483]: 2025-09-09 23:37:39.789 [INFO][3226] cni-plugin/k8s.go 418: Populated endpoint ContainerID="de0d8246f83bef25c510bea2f32cac4897ac1de7424a572db4def033fd80c2a9" Namespace="default" Pod="test-pod-1" WorkloadEndpoint="10.0.0.80-k8s-test--pod--1-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"10.0.0.80-k8s-test--pod--1-eth0", GenerateName:"", Namespace:"default", SelfLink:"", UID:"51e7a6a5-90e5-4add-a726-a056b5205d59", ResourceVersion:"1432", Generation:0, CreationTimestamp:time.Date(2025, time.September, 9, 23, 37, 30, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"projectcalico.org/namespace":"default", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"default"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"10.0.0.80", ContainerID:"", Pod:"test-pod-1", Endpoint:"eth0", ServiceAccountName:"default", IPNetworks:[]string{"192.168.6.4/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.default", "ksa.default.default"}, InterfaceName:"cali5ec59c6bf6e", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 9 23:37:39.803153 containerd[1483]: 2025-09-09 23:37:39.789 [INFO][3226] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.6.4/32] ContainerID="de0d8246f83bef25c510bea2f32cac4897ac1de7424a572db4def033fd80c2a9" Namespace="default" Pod="test-pod-1" WorkloadEndpoint="10.0.0.80-k8s-test--pod--1-eth0" Sep 9 23:37:39.803153 containerd[1483]: 2025-09-09 23:37:39.790 [INFO][3226] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali5ec59c6bf6e ContainerID="de0d8246f83bef25c510bea2f32cac4897ac1de7424a572db4def033fd80c2a9" Namespace="default" Pod="test-pod-1" WorkloadEndpoint="10.0.0.80-k8s-test--pod--1-eth0" Sep 9 23:37:39.803153 containerd[1483]: 2025-09-09 23:37:39.791 [INFO][3226] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="de0d8246f83bef25c510bea2f32cac4897ac1de7424a572db4def033fd80c2a9" Namespace="default" Pod="test-pod-1" WorkloadEndpoint="10.0.0.80-k8s-test--pod--1-eth0" Sep 9 23:37:39.803153 containerd[1483]: 2025-09-09 23:37:39.793 [INFO][3226] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="de0d8246f83bef25c510bea2f32cac4897ac1de7424a572db4def033fd80c2a9" Namespace="default" Pod="test-pod-1" WorkloadEndpoint="10.0.0.80-k8s-test--pod--1-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"10.0.0.80-k8s-test--pod--1-eth0", GenerateName:"", Namespace:"default", SelfLink:"", UID:"51e7a6a5-90e5-4add-a726-a056b5205d59", ResourceVersion:"1432", Generation:0, CreationTimestamp:time.Date(2025, 
time.September, 9, 23, 37, 30, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"projectcalico.org/namespace":"default", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"default"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"10.0.0.80", ContainerID:"de0d8246f83bef25c510bea2f32cac4897ac1de7424a572db4def033fd80c2a9", Pod:"test-pod-1", Endpoint:"eth0", ServiceAccountName:"default", IPNetworks:[]string{"192.168.6.4/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.default", "ksa.default.default"}, InterfaceName:"cali5ec59c6bf6e", MAC:"e6:72:40:2c:cf:fa", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 9 23:37:39.803153 containerd[1483]: 2025-09-09 23:37:39.800 [INFO][3226] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="de0d8246f83bef25c510bea2f32cac4897ac1de7424a572db4def033fd80c2a9" Namespace="default" Pod="test-pod-1" WorkloadEndpoint="10.0.0.80-k8s-test--pod--1-eth0" Sep 9 23:37:39.823889 containerd[1483]: time="2025-09-09T23:37:39.823482919Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 9 23:37:39.823889 containerd[1483]: time="2025-09-09T23:37:39.823852465Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 9 23:37:39.823889 containerd[1483]: time="2025-09-09T23:37:39.823865866Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 9 23:37:39.824068 containerd[1483]: time="2025-09-09T23:37:39.823953192Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 9 23:37:39.843288 systemd[1]: Started cri-containerd-de0d8246f83bef25c510bea2f32cac4897ac1de7424a572db4def033fd80c2a9.scope - libcontainer container de0d8246f83bef25c510bea2f32cac4897ac1de7424a572db4def033fd80c2a9. 
Sep 9 23:37:39.855822 systemd-resolved[1315]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address
Sep 9 23:37:39.873494 containerd[1483]: time="2025-09-09T23:37:39.873445257Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:test-pod-1,Uid:51e7a6a5-90e5-4add-a726-a056b5205d59,Namespace:default,Attempt:0,} returns sandbox id \"de0d8246f83bef25c510bea2f32cac4897ac1de7424a572db4def033fd80c2a9\""
Sep 9 23:37:39.875684 containerd[1483]: time="2025-09-09T23:37:39.875652212Z" level=info msg="PullImage \"ghcr.io/flatcar/nginx:latest\""
Sep 9 23:37:40.123011 containerd[1483]: time="2025-09-09T23:37:40.122962701Z" level=info msg="ImageUpdate event name:\"ghcr.io/flatcar/nginx:latest\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 9 23:37:40.123871 containerd[1483]: time="2025-09-09T23:37:40.123687630Z" level=info msg="stop pulling image ghcr.io/flatcar/nginx:latest: active requests=0, bytes read=61"
Sep 9 23:37:40.127149 containerd[1483]: time="2025-09-09T23:37:40.126992609Z" level=info msg="Pulled image \"ghcr.io/flatcar/nginx:latest\" with image id \"sha256:9fddf21fd9c2634e7bf6e633e36b0fb227f6cd5fbe1b3334a16de3ab50f31e5e\", repo tag \"ghcr.io/flatcar/nginx:latest\", repo digest \"ghcr.io/flatcar/nginx@sha256:883ca821a91fc20bcde818eeee4e1ed55ef63a020d6198ecd5a03af5a4eac530\", size \"69986400\" in 251.301795ms"
Sep 9 23:37:40.127149 containerd[1483]: time="2025-09-09T23:37:40.127036492Z" level=info msg="PullImage \"ghcr.io/flatcar/nginx:latest\" returns image reference \"sha256:9fddf21fd9c2634e7bf6e633e36b0fb227f6cd5fbe1b3334a16de3ab50f31e5e\""
Sep 9 23:37:40.131280 containerd[1483]: time="2025-09-09T23:37:40.131242692Z" level=info msg="CreateContainer within sandbox \"de0d8246f83bef25c510bea2f32cac4897ac1de7424a572db4def033fd80c2a9\" for container &ContainerMetadata{Name:test,Attempt:0,}"
Sep 9 23:37:40.144056 containerd[1483]: time="2025-09-09T23:37:40.143990499Z" level=info msg="CreateContainer within sandbox \"de0d8246f83bef25c510bea2f32cac4897ac1de7424a572db4def033fd80c2a9\" for &ContainerMetadata{Name:test,Attempt:0,} returns container id \"887e10f60cd90b51f48bbd236edf0682d6c91d4714c878f8f0f5e5682562f120\""
Sep 9 23:37:40.144899 containerd[1483]: time="2025-09-09T23:37:40.144858517Z" level=info msg="StartContainer for \"887e10f60cd90b51f48bbd236edf0682d6c91d4714c878f8f0f5e5682562f120\""
Sep 9 23:37:40.174322 systemd[1]: Started cri-containerd-887e10f60cd90b51f48bbd236edf0682d6c91d4714c878f8f0f5e5682562f120.scope - libcontainer container 887e10f60cd90b51f48bbd236edf0682d6c91d4714c878f8f0f5e5682562f120.
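
The containerd lines above trace one pass through the CRI container lifecycle: RunPodSandbox returns sandbox de0d8246..., PullImage fetches ghcr.io/flatcar/nginx:latest, CreateContainer in that sandbox returns 887e10f6..., and StartContainer launches it. The Go sketch below drives the same four calls against the CRI v1 API; it is not kubelet's or containerd's code, and the socket path, error handling, and minimal configs are assumptions:

// crisketch.go - an illustrative CRI v1 client walking the same
// sandbox -> pull -> create -> start sequence recorded in the log above.
// Sketch only; socket path and minimal configs are assumptions.
package main

import (
	"context"
	"log"
	"time"

	"google.golang.org/grpc"
	"google.golang.org/grpc/credentials/insecure"
	runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1"
)

func main() {
	ctx, cancel := context.WithTimeout(context.Background(), 2*time.Minute)
	defer cancel()

	// Assumed containerd CRI socket location on the node.
	conn, err := grpc.Dial("unix:///run/containerd/containerd.sock",
		grpc.WithTransportCredentials(insecure.NewCredentials()))
	if err != nil {
		log.Fatal(err)
	}
	defer conn.Close()

	rt := runtimeapi.NewRuntimeServiceClient(conn)
	img := runtimeapi.NewImageServiceClient(conn)

	// Sandbox metadata mirroring the logged PodSandboxMetadata{Name:test-pod-1,...}.
	sandboxCfg := &runtimeapi.PodSandboxConfig{
		Metadata: &runtimeapi.PodSandboxMetadata{
			Name:      "test-pod-1",
			Uid:       "51e7a6a5-90e5-4add-a726-a056b5205d59",
			Namespace: "default",
			Attempt:   0,
		},
	}

	// RunPodSandbox: the step during which the Calico CNI plugin runs and a
	// sandbox id is returned.
	sb, err := rt.RunPodSandbox(ctx, &runtimeapi.RunPodSandboxRequest{Config: sandboxCfg})
	if err != nil {
		log.Fatal(err)
	}

	// PullImage: corresponds to the PullImage / Pulled image lines.
	image := &runtimeapi.ImageSpec{Image: "ghcr.io/flatcar/nginx:latest"}
	if _, err := img.PullImage(ctx, &runtimeapi.PullImageRequest{Image: image}); err != nil {
		log.Fatal(err)
	}

	// CreateContainer within the sandbox, mirroring ContainerMetadata{Name:test,...}.
	ctr, err := rt.CreateContainer(ctx, &runtimeapi.CreateContainerRequest{
		PodSandboxId: sb.PodSandboxId,
		Config: &runtimeapi.ContainerConfig{
			Metadata: &runtimeapi.ContainerMetadata{Name: "test", Attempt: 0},
			Image:    image,
		},
		SandboxConfig: sandboxCfg,
	})
	if err != nil {
		log.Fatal(err)
	}

	// StartContainer: matches the StartContainer lines in the log.
	if _, err := rt.StartContainer(ctx, &runtimeapi.StartContainerRequest{ContainerId: ctr.ContainerId}); err != nil {
		log.Fatal(err)
	}

	log.Printf("sandbox %s, container %s started", sb.PodSandboxId, ctr.ContainerId)
}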
Sep 9 23:37:40.203813 containerd[1483]: time="2025-09-09T23:37:40.203748593Z" level=info msg="StartContainer for \"887e10f60cd90b51f48bbd236edf0682d6c91d4714c878f8f0f5e5682562f120\" returns successfully"
Sep 9 23:37:40.591630 kubelet[1793]: E0909 23:37:40.591467 1793 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Sep 9 23:37:40.880069 kubelet[1793]: I0909 23:37:40.880007 1793 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="default/test-pod-1" podStartSLOduration=10.627168976 podStartE2EDuration="10.879990275s" podCreationTimestamp="2025-09-09 23:37:30 +0000 UTC" firstStartedPulling="2025-09-09 23:37:39.874884398 +0000 UTC m=+34.868533707" lastFinishedPulling="2025-09-09 23:37:40.127705657 +0000 UTC m=+35.121355006" observedRunningTime="2025-09-09 23:37:40.879591289 +0000 UTC m=+35.873240638" watchObservedRunningTime="2025-09-09 23:37:40.879990275 +0000 UTC m=+35.873639624"
Sep 9 23:37:40.900237 systemd-networkd[1394]: cali5ec59c6bf6e: Gained IPv6LL
Sep 9 23:37:41.592307 kubelet[1793]: E0909 23:37:41.592260 1793 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Sep 9 23:37:42.119943 update_engine[1469]: I20250909 23:37:42.119361 1469 update_attempter.cc:509] Updating boot flags...
Sep 9 23:37:42.159101 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 36 scanned by (udev-worker) (3220)
Sep 9 23:37:42.215114 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 36 scanned by (udev-worker) (3364)
Sep 9 23:37:42.593213 kubelet[1793]: E0909 23:37:42.593068 1793 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
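
The kubelet pod_startup_latency_tracker line above is internally consistent: podStartE2EDuration (10.879990275s) equals watchObservedRunningTime minus podCreationTimestamp, and podStartSLOduration (10.627168976) equals that figure minus the image-pull window given by the monotonic m=+ offsets of firstStartedPulling and lastFinishedPulling (about 252.8 ms; containerd itself reports 251.301795ms for the registry pull). A short Go sketch reproducing the arithmetic from the logged values, illustrative only:

// latency.go - reproduces the pod startup figures from the kubelet
// pod_startup_latency_tracker line above. The timestamps and monotonic
// offsets are copied from the log; everything else is illustrative.
package main

import (
	"fmt"
	"time"
)

func main() {
	const layout = "2006-01-02 15:04:05.999999999 -0700 MST"

	// podCreationTimestamp and watchObservedRunningTime from the log line;
	// parse errors are ignored because the inputs are fixed literals.
	created, _ := time.Parse(layout, "2025-09-09 23:37:30 +0000 UTC")
	running, _ := time.Parse(layout, "2025-09-09 23:37:40.879990275 +0000 UTC")

	// E2E duration: watchObservedRunningTime - podCreationTimestamp.
	e2e := running.Sub(created)
	fmt.Println("podStartE2EDuration:", e2e) // 10.879990275s

	// Image-pull window from the monotonic offsets in the same line:
	// lastFinishedPulling m=+35.121355006 minus firstStartedPulling
	// m=+34.868533707, written here in nanoseconds.
	pull := (35121355006 - 34868533707) * time.Nanosecond
	fmt.Println("image pull window:", pull) // 252.821299ms

	// Excluding the pull window reproduces the logged podStartSLOduration.
	fmt.Println("podStartSLOduration:", (e2e - pull).Seconds()) // ~10.627168976
}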