Sep 8 23:46:05.841757 kernel: Booting Linux on physical CPU 0x0000000000 [0x413fd0c1]
Sep 8 23:46:05.841779 kernel: Linux version 6.6.104-flatcar (build@pony-truck.infra.kinvolk.io) (aarch64-cros-linux-gnu-gcc (Gentoo Hardened 14.2.1_p20241221 p7) 14.2.1 20241221, GNU ld (Gentoo 2.43 p3) 2.43.1) #1 SMP PREEMPT Mon Sep 8 22:15:05 -00 2025
Sep 8 23:46:05.841789 kernel: KASLR enabled
Sep 8 23:46:05.841794 kernel: efi: EFI v2.7 by EDK II
Sep 8 23:46:05.841800 kernel: efi: SMBIOS 3.0=0xdced0000 MEMATTR=0xdbbae018 ACPI 2.0=0xd9b43018 RNG=0xd9b43a18 MEMRESERVE=0xd9b40218
Sep 8 23:46:05.841805 kernel: random: crng init done
Sep 8 23:46:05.841812 kernel: secureboot: Secure boot disabled
Sep 8 23:46:05.841818 kernel: ACPI: Early table checksum verification disabled
Sep 8 23:46:05.841824 kernel: ACPI: RSDP 0x00000000D9B43018 000024 (v02 BOCHS )
Sep 8 23:46:05.841831 kernel: ACPI: XSDT 0x00000000D9B43F18 000064 (v01 BOCHS BXPC 00000001 01000013)
Sep 8 23:46:05.841837 kernel: ACPI: FACP 0x00000000D9B43B18 000114 (v06 BOCHS BXPC 00000001 BXPC 00000001)
Sep 8 23:46:05.841843 kernel: ACPI: DSDT 0x00000000D9B41018 0014A2 (v02 BOCHS BXPC 00000001 BXPC 00000001)
Sep 8 23:46:05.841849 kernel: ACPI: APIC 0x00000000D9B43C98 0001A8 (v04 BOCHS BXPC 00000001 BXPC 00000001)
Sep 8 23:46:05.841854 kernel: ACPI: PPTT 0x00000000D9B43098 00009C (v02 BOCHS BXPC 00000001 BXPC 00000001)
Sep 8 23:46:05.841862 kernel: ACPI: GTDT 0x00000000D9B43818 000060 (v02 BOCHS BXPC 00000001 BXPC 00000001)
Sep 8 23:46:05.841870 kernel: ACPI: MCFG 0x00000000D9B43A98 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001)
Sep 8 23:46:05.841876 kernel: ACPI: SPCR 0x00000000D9B43918 000050 (v02 BOCHS BXPC 00000001 BXPC 00000001)
Sep 8 23:46:05.841882 kernel: ACPI: DBG2 0x00000000D9B43998 000057 (v00 BOCHS BXPC 00000001 BXPC 00000001)
Sep 8 23:46:05.841888 kernel: ACPI: IORT 0x00000000D9B43198 000080 (v03 BOCHS BXPC 00000001 BXPC 00000001)
Sep 8 23:46:05.841894 kernel: ACPI: SPCR: console: pl011,mmio,0x9000000,9600
Sep 8 23:46:05.841900 kernel: NUMA: Failed to initialise from firmware
Sep 8 23:46:05.841906 kernel: NUMA: Faking a node at [mem 0x0000000040000000-0x00000000dcffffff]
Sep 8 23:46:05.841912 kernel: NUMA: NODE_DATA [mem 0xdc95a800-0xdc95ffff]
Sep 8 23:46:05.841918 kernel: Zone ranges:
Sep 8 23:46:05.841924 kernel: DMA [mem 0x0000000040000000-0x00000000dcffffff]
Sep 8 23:46:05.841932 kernel: DMA32 empty
Sep 8 23:46:05.841938 kernel: Normal empty
Sep 8 23:46:05.841944 kernel: Movable zone start for each node
Sep 8 23:46:05.841950 kernel: Early memory node ranges
Sep 8 23:46:05.841956 kernel: node 0: [mem 0x0000000040000000-0x00000000d967ffff]
Sep 8 23:46:05.841962 kernel: node 0: [mem 0x00000000d9680000-0x00000000d968ffff]
Sep 8 23:46:05.841968 kernel: node 0: [mem 0x00000000d9690000-0x00000000d976ffff]
Sep 8 23:46:05.841973 kernel: node 0: [mem 0x00000000d9770000-0x00000000d9b3ffff]
Sep 8 23:46:05.841980 kernel: node 0: [mem 0x00000000d9b40000-0x00000000dce1ffff]
Sep 8 23:46:05.841985 kernel: node 0: [mem 0x00000000dce20000-0x00000000dceaffff]
Sep 8 23:46:05.841991 kernel: node 0: [mem 0x00000000dceb0000-0x00000000dcebffff]
Sep 8 23:46:05.841997 kernel: node 0: [mem 0x00000000dcec0000-0x00000000dcfdffff]
Sep 8 23:46:05.842005 kernel: node 0: [mem 0x00000000dcfe0000-0x00000000dcffffff]
Sep 8 23:46:05.842011 kernel: Initmem setup node 0 [mem 0x0000000040000000-0x00000000dcffffff]
Sep 8 23:46:05.842017 kernel: On node 0, zone DMA: 12288 pages in unavailable ranges
Sep 8 23:46:05.842026 kernel: psci: probing for conduit method from ACPI.
Sep 8 23:46:05.842032 kernel: psci: PSCIv1.1 detected in firmware.
Sep 8 23:46:05.842039 kernel: psci: Using standard PSCI v0.2 function IDs
Sep 8 23:46:05.842046 kernel: psci: Trusted OS migration not required
Sep 8 23:46:05.842053 kernel: psci: SMC Calling Convention v1.1
Sep 8 23:46:05.842059 kernel: smccc: KVM: hypervisor services detected (0x00000000 0x00000000 0x00000000 0x00000003)
Sep 8 23:46:05.842066 kernel: percpu: Embedded 31 pages/cpu s86632 r8192 d32152 u126976
Sep 8 23:46:05.842072 kernel: pcpu-alloc: s86632 r8192 d32152 u126976 alloc=31*4096
Sep 8 23:46:05.842079 kernel: pcpu-alloc: [0] 0 [0] 1 [0] 2 [0] 3
Sep 8 23:46:05.842086 kernel: Detected PIPT I-cache on CPU0
Sep 8 23:46:05.842092 kernel: CPU features: detected: GIC system register CPU interface
Sep 8 23:46:05.842106 kernel: CPU features: detected: Hardware dirty bit management
Sep 8 23:46:05.842112 kernel: CPU features: detected: Spectre-v4
Sep 8 23:46:05.842127 kernel: CPU features: detected: Spectre-BHB
Sep 8 23:46:05.842136 kernel: CPU features: kernel page table isolation forced ON by KASLR
Sep 8 23:46:05.842151 kernel: CPU features: detected: Kernel page table isolation (KPTI)
Sep 8 23:46:05.842157 kernel: CPU features: detected: ARM erratum 1418040
Sep 8 23:46:05.842172 kernel: CPU features: detected: SSBS not fully self-synchronizing
Sep 8 23:46:05.842178 kernel: alternatives: applying boot alternatives
Sep 8 23:46:05.842186 kernel: Kernel command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected acpi=force verity.usrhash=f8ee138b57942e58b3c347ed7ca25a0f850922d10215402a17b15b614c872007
Sep 8 23:46:05.842193 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Sep 8 23:46:05.842199 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Sep 8 23:46:05.842206 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Sep 8 23:46:05.842212 kernel: Fallback order for Node 0: 0
Sep 8 23:46:05.842220 kernel: Built 1 zonelists, mobility grouping on. Total pages: 633024
Sep 8 23:46:05.842226 kernel: Policy zone: DMA
Sep 8 23:46:05.842233 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Sep 8 23:46:05.842239 kernel: software IO TLB: area num 4.
Sep 8 23:46:05.842245 kernel: software IO TLB: mapped [mem 0x00000000d2e00000-0x00000000d6e00000] (64MB)
Sep 8 23:46:05.842252 kernel: Memory: 2387420K/2572288K available (10368K kernel code, 2186K rwdata, 8104K rodata, 38400K init, 897K bss, 184868K reserved, 0K cma-reserved)
Sep 8 23:46:05.842259 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1
Sep 8 23:46:05.842265 kernel: rcu: Preemptible hierarchical RCU implementation.
Sep 8 23:46:05.842272 kernel: rcu: RCU event tracing is enabled.
Sep 8 23:46:05.842279 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=4.
Sep 8 23:46:05.842285 kernel: Trampoline variant of Tasks RCU enabled.
Sep 8 23:46:05.842292 kernel: Tracing variant of Tasks RCU enabled.
Sep 8 23:46:05.842300 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Sep 8 23:46:05.842306 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4
Sep 8 23:46:05.842318 kernel: NR_IRQS: 64, nr_irqs: 64, preallocated irqs: 0
Sep 8 23:46:05.842326 kernel: GICv3: 256 SPIs implemented
Sep 8 23:46:05.842337 kernel: GICv3: 0 Extended SPIs implemented
Sep 8 23:46:05.842346 kernel: Root IRQ handler: gic_handle_irq
Sep 8 23:46:05.842361 kernel: GICv3: GICv3 features: 16 PPIs, DirectLPI
Sep 8 23:46:05.842368 kernel: GICv3: CPU0: found redistributor 0 region 0:0x00000000080a0000
Sep 8 23:46:05.842375 kernel: ITS [mem 0x08080000-0x0809ffff]
Sep 8 23:46:05.842381 kernel: ITS@0x0000000008080000: allocated 8192 Devices @400c0000 (indirect, esz 8, psz 64K, shr 1)
Sep 8 23:46:05.842388 kernel: ITS@0x0000000008080000: allocated 8192 Interrupt Collections @400d0000 (flat, esz 8, psz 64K, shr 1)
Sep 8 23:46:05.842397 kernel: GICv3: using LPI property table @0x00000000400f0000
Sep 8 23:46:05.842404 kernel: GICv3: CPU0: using allocated LPI pending table @0x0000000040100000
Sep 8 23:46:05.842410 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Sep 8 23:46:05.842417 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Sep 8 23:46:05.842423 kernel: arch_timer: cp15 timer(s) running at 25.00MHz (virt).
Sep 8 23:46:05.842430 kernel: clocksource: arch_sys_counter: mask: 0xffffffffffffff max_cycles: 0x5c40939b5, max_idle_ns: 440795202646 ns
Sep 8 23:46:05.842437 kernel: sched_clock: 56 bits at 25MHz, resolution 40ns, wraps every 4398046511100ns
Sep 8 23:46:05.842443 kernel: arm-pv: using stolen time PV
Sep 8 23:46:05.842450 kernel: Console: colour dummy device 80x25
Sep 8 23:46:05.842456 kernel: ACPI: Core revision 20230628
Sep 8 23:46:05.842463 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 50.00 BogoMIPS (lpj=25000)
Sep 8 23:46:05.842472 kernel: pid_max: default: 32768 minimum: 301
Sep 8 23:46:05.842479 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity
Sep 8 23:46:05.842485 kernel: landlock: Up and running.
Sep 8 23:46:05.842492 kernel: SELinux: Initializing.
Sep 8 23:46:05.842499 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Sep 8 23:46:05.842505 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Sep 8 23:46:05.842512 kernel: RCU Tasks: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Sep 8 23:46:05.842519 kernel: RCU Tasks Trace: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Sep 8 23:46:05.842525 kernel: rcu: Hierarchical SRCU implementation.
Sep 8 23:46:05.842534 kernel: rcu: Max phase no-delay instances is 400.
Sep 8 23:46:05.842540 kernel: Platform MSI: ITS@0x8080000 domain created
Sep 8 23:46:05.842547 kernel: PCI/MSI: ITS@0x8080000 domain created
Sep 8 23:46:05.842553 kernel: Remapping and enabling EFI services.
Sep 8 23:46:05.842560 kernel: smp: Bringing up secondary CPUs ...
Sep 8 23:46:05.842566 kernel: Detected PIPT I-cache on CPU1
Sep 8 23:46:05.842573 kernel: GICv3: CPU1: found redistributor 1 region 0:0x00000000080c0000
Sep 8 23:46:05.842580 kernel: GICv3: CPU1: using allocated LPI pending table @0x0000000040110000
Sep 8 23:46:05.842587 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Sep 8 23:46:05.842595 kernel: CPU1: Booted secondary processor 0x0000000001 [0x413fd0c1]
Sep 8 23:46:05.842601 kernel: Detected PIPT I-cache on CPU2
Sep 8 23:46:05.842613 kernel: GICv3: CPU2: found redistributor 2 region 0:0x00000000080e0000
Sep 8 23:46:05.842621 kernel: GICv3: CPU2: using allocated LPI pending table @0x0000000040120000
Sep 8 23:46:05.842628 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Sep 8 23:46:05.842635 kernel: CPU2: Booted secondary processor 0x0000000002 [0x413fd0c1]
Sep 8 23:46:05.842642 kernel: Detected PIPT I-cache on CPU3
Sep 8 23:46:05.842648 kernel: GICv3: CPU3: found redistributor 3 region 0:0x0000000008100000
Sep 8 23:46:05.842656 kernel: GICv3: CPU3: using allocated LPI pending table @0x0000000040130000
Sep 8 23:46:05.842664 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Sep 8 23:46:05.842671 kernel: CPU3: Booted secondary processor 0x0000000003 [0x413fd0c1]
Sep 8 23:46:05.842678 kernel: smp: Brought up 1 node, 4 CPUs
Sep 8 23:46:05.842685 kernel: SMP: Total of 4 processors activated.
Sep 8 23:46:05.842691 kernel: CPU features: detected: 32-bit EL0 Support
Sep 8 23:46:05.842705 kernel: CPU features: detected: Data cache clean to the PoU not required for I/D coherence
Sep 8 23:46:05.842712 kernel: CPU features: detected: Common not Private translations
Sep 8 23:46:05.842719 kernel: CPU features: detected: CRC32 instructions
Sep 8 23:46:05.842728 kernel: CPU features: detected: Enhanced Virtualization Traps
Sep 8 23:46:05.842735 kernel: CPU features: detected: RCpc load-acquire (LDAPR)
Sep 8 23:46:05.842742 kernel: CPU features: detected: LSE atomic instructions
Sep 8 23:46:05.842749 kernel: CPU features: detected: Privileged Access Never
Sep 8 23:46:05.842756 kernel: CPU features: detected: RAS Extension Support
Sep 8 23:46:05.842762 kernel: CPU features: detected: Speculative Store Bypassing Safe (SSBS)
Sep 8 23:46:05.842769 kernel: CPU: All CPU(s) started at EL1
Sep 8 23:46:05.842776 kernel: alternatives: applying system-wide alternatives
Sep 8 23:46:05.842783 kernel: devtmpfs: initialized
Sep 8 23:46:05.842792 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Sep 8 23:46:05.842799 kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear)
Sep 8 23:46:05.842805 kernel: pinctrl core: initialized pinctrl subsystem
Sep 8 23:46:05.842812 kernel: SMBIOS 3.0.0 present.
Sep 8 23:46:05.842819 kernel: DMI: QEMU KVM Virtual Machine, BIOS unknown 02/02/2022
Sep 8 23:46:05.842826 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Sep 8 23:46:05.842833 kernel: DMA: preallocated 512 KiB GFP_KERNEL pool for atomic allocations
Sep 8 23:46:05.842840 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations
Sep 8 23:46:05.842847 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations
Sep 8 23:46:05.842856 kernel: audit: initializing netlink subsys (disabled)
Sep 8 23:46:05.842863 kernel: audit: type=2000 audit(0.018:1): state=initialized audit_enabled=0 res=1
Sep 8 23:46:05.842870 kernel: thermal_sys: Registered thermal governor 'step_wise'
Sep 8 23:46:05.842876 kernel: cpuidle: using governor menu
Sep 8 23:46:05.842883 kernel: hw-breakpoint: found 6 breakpoint and 4 watchpoint registers.
Sep 8 23:46:05.842890 kernel: ASID allocator initialised with 32768 entries
Sep 8 23:46:05.842897 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Sep 8 23:46:05.842904 kernel: Serial: AMBA PL011 UART driver
Sep 8 23:46:05.842911 kernel: Modules: 2G module region forced by RANDOMIZE_MODULE_REGION_FULL
Sep 8 23:46:05.842919 kernel: Modules: 0 pages in range for non-PLT usage
Sep 8 23:46:05.842926 kernel: Modules: 509248 pages in range for PLT usage
Sep 8 23:46:05.842933 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Sep 8 23:46:05.842940 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 1.00 GiB page
Sep 8 23:46:05.842947 kernel: HugeTLB: registered 32.0 MiB page size, pre-allocated 0 pages
Sep 8 23:46:05.842954 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 32.0 MiB page
Sep 8 23:46:05.842961 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Sep 8 23:46:05.842968 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 2.00 MiB page
Sep 8 23:46:05.842975 kernel: HugeTLB: registered 64.0 KiB page size, pre-allocated 0 pages
Sep 8 23:46:05.842983 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 64.0 KiB page
Sep 8 23:46:05.842990 kernel: ACPI: Added _OSI(Module Device)
Sep 8 23:46:05.842997 kernel: ACPI: Added _OSI(Processor Device)
Sep 8 23:46:05.843004 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Sep 8 23:46:05.843011 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Sep 8 23:46:05.843018 kernel: ACPI: Interpreter enabled
Sep 8 23:46:05.843024 kernel: ACPI: Using GIC for interrupt routing
Sep 8 23:46:05.843031 kernel: ACPI: MCFG table detected, 1 entries
Sep 8 23:46:05.843038 kernel: ARMH0011:00: ttyAMA0 at MMIO 0x9000000 (irq = 12, base_baud = 0) is a SBSA
Sep 8 23:46:05.843045 kernel: printk: console [ttyAMA0] enabled
Sep 8 23:46:05.843054 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Sep 8 23:46:05.843197 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
Sep 8 23:46:05.843269 kernel: acpi PNP0A08:00: _OSC: platform does not support [LTR]
Sep 8 23:46:05.843335 kernel: acpi PNP0A08:00: _OSC: OS now controls [PCIeHotplug PME AER PCIeCapability]
Sep 8 23:46:05.843410 kernel: acpi PNP0A08:00: ECAM area [mem 0x4010000000-0x401fffffff] reserved by PNP0C02:00
Sep 8 23:46:05.843473 kernel: acpi PNP0A08:00: ECAM at [mem 0x4010000000-0x401fffffff] for [bus 00-ff]
Sep 8 23:46:05.843482 kernel: ACPI: Remapped I/O 0x000000003eff0000 to [io 0x0000-0xffff window]
Sep 8 23:46:05.843492 kernel: PCI host bridge to bus 0000:00
Sep 8 23:46:05.843564 kernel: pci_bus 0000:00: root bus resource [mem 0x10000000-0x3efeffff window]
Sep 8 23:46:05.843623 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0xffff window]
Sep 8 23:46:05.843679 kernel: pci_bus 0000:00: root bus resource [mem 0x8000000000-0xffffffffff window]
Sep 8 23:46:05.843746 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Sep 8 23:46:05.843825 kernel: pci 0000:00:00.0: [1b36:0008] type 00 class 0x060000
Sep 8 23:46:05.843902 kernel: pci 0000:00:01.0: [1af4:1005] type 00 class 0x00ff00
Sep 8 23:46:05.843968 kernel: pci 0000:00:01.0: reg 0x10: [io 0x0000-0x001f]
Sep 8 23:46:05.844049 kernel: pci 0000:00:01.0: reg 0x14: [mem 0x10000000-0x10000fff]
Sep 8 23:46:05.844114 kernel: pci 0000:00:01.0: reg 0x20: [mem 0x8000000000-0x8000003fff 64bit pref]
Sep 8 23:46:05.844178 kernel: pci 0000:00:01.0: BAR 4: assigned [mem 0x8000000000-0x8000003fff 64bit pref]
Sep 8 23:46:05.844243 kernel: pci 0000:00:01.0: BAR 1: assigned [mem 0x10000000-0x10000fff]
Sep 8 23:46:05.844306 kernel: pci 0000:00:01.0: BAR 0: assigned [io 0x1000-0x101f]
Sep 8 23:46:05.844377 kernel: pci_bus 0000:00: resource 4 [mem 0x10000000-0x3efeffff window]
Sep 8 23:46:05.844434 kernel: pci_bus 0000:00: resource 5 [io 0x0000-0xffff window]
Sep 8 23:46:05.844490 kernel: pci_bus 0000:00: resource 6 [mem 0x8000000000-0xffffffffff window]
Sep 8 23:46:05.844499 kernel: ACPI: PCI: Interrupt link GSI0 configured for IRQ 35
Sep 8 23:46:05.844506 kernel: ACPI: PCI: Interrupt link GSI1 configured for IRQ 36
Sep 8 23:46:05.844513 kernel: ACPI: PCI: Interrupt link GSI2 configured for IRQ 37
Sep 8 23:46:05.844520 kernel: ACPI: PCI: Interrupt link GSI3 configured for IRQ 38
Sep 8 23:46:05.844527 kernel: iommu: Default domain type: Translated
Sep 8 23:46:05.844536 kernel: iommu: DMA domain TLB invalidation policy: strict mode
Sep 8 23:46:05.844544 kernel: efivars: Registered efivars operations
Sep 8 23:46:05.844550 kernel: vgaarb: loaded
Sep 8 23:46:05.844557 kernel: clocksource: Switched to clocksource arch_sys_counter
Sep 8 23:46:05.844564 kernel: VFS: Disk quotas dquot_6.6.0
Sep 8 23:46:05.844572 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Sep 8 23:46:05.844578 kernel: pnp: PnP ACPI init
Sep 8 23:46:05.844653 kernel: system 00:00: [mem 0x4010000000-0x401fffffff window] could not be reserved
Sep 8 23:46:05.844665 kernel: pnp: PnP ACPI: found 1 devices
Sep 8 23:46:05.844672 kernel: NET: Registered PF_INET protocol family
Sep 8 23:46:05.844679 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
Sep 8 23:46:05.844686 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
Sep 8 23:46:05.844700 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Sep 8 23:46:05.844708 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
Sep 8 23:46:05.844715 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear)
Sep 8 23:46:05.844722 kernel: TCP: Hash tables configured (established 32768 bind 32768)
Sep 8 23:46:05.844729 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
Sep 8 23:46:05.844738 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
Sep 8 23:46:05.844745 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Sep 8 23:46:05.844752 kernel: PCI: CLS 0 bytes, default 64
Sep 8 23:46:05.844759 kernel: kvm [1]: HYP mode not available
Sep 8 23:46:05.844766 kernel: Initialise system trusted keyrings
Sep 8 23:46:05.844773 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
Sep 8 23:46:05.844780 kernel: Key type asymmetric registered
Sep 8 23:46:05.844787 kernel: Asymmetric key parser 'x509' registered
Sep 8 23:46:05.844794 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 250)
Sep 8 23:46:05.844802 kernel: io scheduler mq-deadline registered
Sep 8 23:46:05.844809 kernel: io scheduler kyber registered
Sep 8 23:46:05.844822 kernel: io scheduler bfq registered
Sep 8 23:46:05.844829 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXSYBUS:00/PNP0C0C:00/input/input0
Sep 8 23:46:05.844836 kernel: ACPI: button: Power Button [PWRB]
Sep 8 23:46:05.844843 kernel: ACPI: \_SB_.PCI0.GSI1: Enabled at IRQ 36
Sep 8 23:46:05.844910 kernel: virtio-pci 0000:00:01.0: enabling device (0005 -> 0007)
Sep 8 23:46:05.844920 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Sep 8 23:46:05.844927 kernel: thunder_xcv, ver 1.0
Sep 8 23:46:05.844934 kernel: thunder_bgx, ver 1.0
Sep 8 23:46:05.844943 kernel: nicpf, ver 1.0
Sep 8 23:46:05.844950 kernel: nicvf, ver 1.0
Sep 8 23:46:05.845023 kernel: rtc-efi rtc-efi.0: registered as rtc0
Sep 8 23:46:05.845084 kernel: rtc-efi rtc-efi.0: setting system clock to 2025-09-08T23:46:05 UTC (1757375165)
Sep 8 23:46:05.845094 kernel: hid: raw HID events driver (C) Jiri Kosina
Sep 8 23:46:05.845101 kernel: hw perfevents: enabled with armv8_pmuv3_0 PMU driver, 7 counters available
Sep 8 23:46:05.845108 kernel: watchdog: Delayed init of the lockup detector failed: -19
Sep 8 23:46:05.845117 kernel: watchdog: Hard watchdog permanently disabled
Sep 8 23:46:05.845124 kernel: NET: Registered PF_INET6 protocol family
Sep 8 23:46:05.845130 kernel: Segment Routing with IPv6
Sep 8 23:46:05.845137 kernel: In-situ OAM (IOAM) with IPv6
Sep 8 23:46:05.845144 kernel: NET: Registered PF_PACKET protocol family
Sep 8 23:46:05.845151 kernel: Key type dns_resolver registered
Sep 8 23:46:05.845158 kernel: registered taskstats version 1
Sep 8 23:46:05.845165 kernel: Loading compiled-in X.509 certificates
Sep 8 23:46:05.845172 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.104-flatcar: 98feb45e0c7a714eab78dfe8a165eb91758e42e9'
Sep 8 23:46:05.845179 kernel: Key type .fscrypt registered
Sep 8 23:46:05.845187 kernel: Key type fscrypt-provisioning registered
Sep 8 23:46:05.845194 kernel: ima: No TPM chip found, activating TPM-bypass!
Sep 8 23:46:05.845201 kernel: ima: Allocated hash algorithm: sha1
Sep 8 23:46:05.845208 kernel: ima: No architecture policies found
Sep 8 23:46:05.845215 kernel: alg: No test for fips(ansi_cprng) (fips_ansi_cprng)
Sep 8 23:46:05.845222 kernel: clk: Disabling unused clocks
Sep 8 23:46:05.845229 kernel: Freeing unused kernel memory: 38400K
Sep 8 23:46:05.845236 kernel: Run /init as init process
Sep 8 23:46:05.845245 kernel: with arguments:
Sep 8 23:46:05.845252 kernel: /init
Sep 8 23:46:05.845258 kernel: with environment:
Sep 8 23:46:05.845265 kernel: HOME=/
Sep 8 23:46:05.845272 kernel: TERM=linux
Sep 8 23:46:05.845279 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a
Sep 8 23:46:05.845287 systemd[1]: Successfully made /usr/ read-only.
Sep 8 23:46:05.845307 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE)
Sep 8 23:46:05.845319 systemd[1]: Detected virtualization kvm.
Sep 8 23:46:05.845327 systemd[1]: Detected architecture arm64.
Sep 8 23:46:05.845335 systemd[1]: Running in initrd.
Sep 8 23:46:05.845342 systemd[1]: No hostname configured, using default hostname.
Sep 8 23:46:05.845391 systemd[1]: Hostname set to .
Sep 8 23:46:05.845402 systemd[1]: Initializing machine ID from VM UUID.
Sep 8 23:46:05.845409 systemd[1]: Queued start job for default target initrd.target.
Sep 8 23:46:05.845417 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Sep 8 23:46:05.845427 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Sep 8 23:46:05.845435 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Sep 8 23:46:05.845443 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Sep 8 23:46:05.845452 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Sep 8 23:46:05.845460 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Sep 8 23:46:05.845469 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Sep 8 23:46:05.845477 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Sep 8 23:46:05.845487 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Sep 8 23:46:05.845494 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Sep 8 23:46:05.845502 systemd[1]: Reached target paths.target - Path Units.
Sep 8 23:46:05.845510 systemd[1]: Reached target slices.target - Slice Units.
Sep 8 23:46:05.845517 systemd[1]: Reached target swap.target - Swaps.
Sep 8 23:46:05.845524 systemd[1]: Reached target timers.target - Timer Units.
Sep 8 23:46:05.845532 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Sep 8 23:46:05.845539 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Sep 8 23:46:05.845547 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Sep 8 23:46:05.845556 systemd[1]: Listening on systemd-journald.socket - Journal Sockets.
Sep 8 23:46:05.845564 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Sep 8 23:46:05.845572 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Sep 8 23:46:05.845579 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Sep 8 23:46:05.845587 systemd[1]: Reached target sockets.target - Socket Units.
Sep 8 23:46:05.845595 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Sep 8 23:46:05.845603 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Sep 8 23:46:05.845610 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Sep 8 23:46:05.845619 systemd[1]: Starting systemd-fsck-usr.service...
Sep 8 23:46:05.845627 systemd[1]: Starting systemd-journald.service - Journal Service...
Sep 8 23:46:05.845634 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Sep 8 23:46:05.845642 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Sep 8 23:46:05.845650 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Sep 8 23:46:05.845658 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Sep 8 23:46:05.845667 systemd[1]: Finished systemd-fsck-usr.service.
Sep 8 23:46:05.845699 systemd-journald[239]: Collecting audit messages is disabled.
Sep 8 23:46:05.845718 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Sep 8 23:46:05.845728 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Sep 8 23:46:05.845737 systemd-journald[239]: Journal started
Sep 8 23:46:05.845755 systemd-journald[239]: Runtime Journal (/run/log/journal/b2bdf37446d7412bae58cd08f3ba1df1) is 5.9M, max 47.3M, 41.4M free.
Sep 8 23:46:05.838262 systemd-modules-load[240]: Inserted module 'overlay'
Sep 8 23:46:05.847399 systemd[1]: Started systemd-journald.service - Journal Service.
Sep 8 23:46:05.848391 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Sep 8 23:46:05.851586 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Sep 8 23:46:05.854427 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Sep 8 23:46:05.854522 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Sep 8 23:46:05.856817 kernel: Bridge firewalling registered
Sep 8 23:46:05.855001 systemd-modules-load[240]: Inserted module 'br_netfilter'
Sep 8 23:46:05.857523 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Sep 8 23:46:05.858898 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Sep 8 23:46:05.861557 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Sep 8 23:46:05.864637 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Sep 8 23:46:05.867268 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Sep 8 23:46:05.874504 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Sep 8 23:46:05.875616 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Sep 8 23:46:05.884496 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Sep 8 23:46:05.886563 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Sep 8 23:46:05.896041 dracut-cmdline[276]: dracut-dracut-053
Sep 8 23:46:05.898462 dracut-cmdline[276]: Using kernel command line parameters: rd.driver.pre=btrfs BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected acpi=force verity.usrhash=f8ee138b57942e58b3c347ed7ca25a0f850922d10215402a17b15b614c872007
Sep 8 23:46:05.916798 systemd-resolved[279]: Positive Trust Anchors:
Sep 8 23:46:05.916944 systemd-resolved[279]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Sep 8 23:46:05.916978 systemd-resolved[279]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Sep 8 23:46:05.925433 systemd-resolved[279]: Defaulting to hostname 'linux'.
Sep 8 23:46:05.926642 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Sep 8 23:46:05.927552 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Sep 8 23:46:05.973388 kernel: SCSI subsystem initialized
Sep 8 23:46:05.978370 kernel: Loading iSCSI transport class v2.0-870.
Sep 8 23:46:05.985381 kernel: iscsi: registered transport (tcp)
Sep 8 23:46:05.998386 kernel: iscsi: registered transport (qla4xxx)
Sep 8 23:46:05.998413 kernel: QLogic iSCSI HBA Driver
Sep 8 23:46:06.039786 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
Sep 8 23:46:06.048503 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
Sep 8 23:46:06.065147 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Sep 8 23:46:06.065211 kernel: device-mapper: uevent: version 1.0.3
Sep 8 23:46:06.065222 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com
Sep 8 23:46:06.111376 kernel: raid6: neonx8 gen() 15791 MB/s
Sep 8 23:46:06.128370 kernel: raid6: neonx4 gen() 15823 MB/s
Sep 8 23:46:06.145365 kernel: raid6: neonx2 gen() 13221 MB/s
Sep 8 23:46:06.162382 kernel: raid6: neonx1 gen() 10542 MB/s
Sep 8 23:46:06.179375 kernel: raid6: int64x8 gen() 6776 MB/s
Sep 8 23:46:06.196381 kernel: raid6: int64x4 gen() 7346 MB/s
Sep 8 23:46:06.213369 kernel: raid6: int64x2 gen() 6106 MB/s
Sep 8 23:46:06.230395 kernel: raid6: int64x1 gen() 5058 MB/s
Sep 8 23:46:06.230454 kernel: raid6: using algorithm neonx4 gen() 15823 MB/s
Sep 8 23:46:06.247381 kernel: raid6: .... xor() 12441 MB/s, rmw enabled
Sep 8 23:46:06.247446 kernel: raid6: using neon recovery algorithm
Sep 8 23:46:06.252558 kernel: xor: measuring software checksum speed
Sep 8 23:46:06.252604 kernel: 8regs : 21579 MB/sec
Sep 8 23:46:06.253657 kernel: 32regs : 21227 MB/sec
Sep 8 23:46:06.253678 kernel: arm64_neon : 27955 MB/sec
Sep 8 23:46:06.253688 kernel: xor: using function: arm64_neon (27955 MB/sec)
Sep 8 23:46:06.301408 kernel: Btrfs loaded, zoned=no, fsverity=no
Sep 8 23:46:06.313420 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
Sep 8 23:46:06.328555 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Sep 8 23:46:06.341501 systemd-udevd[462]: Using default interface naming scheme 'v255'.
Sep 8 23:46:06.345113 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Sep 8 23:46:06.348021 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
Sep 8 23:46:06.362777 dracut-pre-trigger[471]: rd.md=0: removing MD RAID activation
Sep 8 23:46:06.389361 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
Sep 8 23:46:06.404597 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Sep 8 23:46:06.445812 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Sep 8 23:46:06.454531 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
Sep 8 23:46:06.465759 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
Sep 8 23:46:06.467632 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
Sep 8 23:46:06.468974 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Sep 8 23:46:06.470824 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Sep 8 23:46:06.476484 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
Sep 8 23:46:06.486403 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
Sep 8 23:46:06.502156 kernel: virtio_blk virtio1: 1/0/0 default/read/poll queues
Sep 8 23:46:06.502346 kernel: virtio_blk virtio1: [vda] 19775488 512-byte logical blocks (10.1 GB/9.43 GiB)
Sep 8 23:46:06.509703 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
Sep 8 23:46:06.509799 kernel: GPT:9289727 != 19775487
Sep 8 23:46:06.509814 kernel: GPT:Alternate GPT header not at the end of the disk.
Sep 8 23:46:06.509823 kernel: GPT:9289727 != 19775487
Sep 8 23:46:06.510537 kernel: GPT: Use GNU Parted to correct GPT errors.
Sep 8 23:46:06.510612 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Sep 8 23:46:06.512527 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Sep 8 23:46:06.512643 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Sep 8 23:46:06.518709 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Sep 8 23:46:06.521536 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Sep 8 23:46:06.521707 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Sep 8 23:46:06.524509 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Sep 8 23:46:06.533631 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Sep 8 23:46:06.537380 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 scanned by (udev-worker) (508)
Sep 8 23:46:06.540377 kernel: BTRFS: device fsid 75950a77-34ea-4c25-8b07-0ac9de89ed80 devid 1 transid 37 /dev/vda3 scanned by (udev-worker) (521)
Sep 8 23:46:06.545425 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM.
Sep 8 23:46:06.547491 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Sep 8 23:46:06.569119 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT.
Sep 8 23:46:06.575269 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A.
Sep 8 23:46:06.576456 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132.
Sep 8 23:46:06.584521 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
Sep 8 23:46:06.596541 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
Sep 8 23:46:06.600558 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Sep 8 23:46:06.602889 disk-uuid[550]: Primary Header is updated.
Sep 8 23:46:06.602889 disk-uuid[550]: Secondary Entries is updated.
Sep 8 23:46:06.602889 disk-uuid[550]: Secondary Header is updated.
Sep 8 23:46:06.608991 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Sep 8 23:46:06.611369 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Sep 8 23:46:06.621341 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Sep 8 23:46:07.613402 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Sep 8 23:46:07.613661 disk-uuid[551]: The operation has completed successfully.
Sep 8 23:46:07.667764 systemd[1]: disk-uuid.service: Deactivated successfully.
Sep 8 23:46:07.667859 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
Sep 8 23:46:07.692558 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr...
Sep 8 23:46:07.700448 sh[575]: Success
Sep 8 23:46:07.714379 kernel: device-mapper: verity: sha256 using implementation "sha256-ce"
Sep 8 23:46:07.766223 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr.
Sep 8 23:46:07.768479 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr...
Sep 8 23:46:07.769237 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr.
Sep 8 23:46:07.785769 kernel: BTRFS info (device dm-0): first mount of filesystem 75950a77-34ea-4c25-8b07-0ac9de89ed80
Sep 8 23:46:07.785814 kernel: BTRFS info (device dm-0): using crc32c (crc32c-generic) checksum algorithm
Sep 8 23:46:07.785824 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead
Sep 8 23:46:07.787420 kernel: BTRFS info (device dm-0): disabling log replay at mount time
Sep 8 23:46:07.787458 kernel: BTRFS info (device dm-0): using free space tree
Sep 8 23:46:07.791765 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr.
Sep 8 23:46:07.795287 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
Sep 8 23:46:07.807762 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
Sep 8 23:46:07.811385 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
Sep 8 23:46:07.831849 kernel: BTRFS info (device vda6): first mount of filesystem d1572d90-6486-4786-a65f-57e67d2def1a
Sep 8 23:46:07.831901 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm
Sep 8 23:46:07.831911 kernel: BTRFS info (device vda6): using free space tree
Sep 8 23:46:07.836388 kernel: BTRFS info (device vda6): auto enabling async discard
Sep 8 23:46:07.841370 kernel: BTRFS info (device vda6): last unmount of filesystem d1572d90-6486-4786-a65f-57e67d2def1a
Sep 8 23:46:07.846521 systemd[1]: Finished ignition-setup.service - Ignition (setup).
Sep 8 23:46:07.855579 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
Sep 8 23:46:07.931399 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Sep 8 23:46:07.936931 ignition[667]: Ignition 2.20.0
Sep 8 23:46:07.936939 ignition[667]: Stage: fetch-offline
Sep 8 23:46:07.939965 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Sep 8 23:46:07.936973 ignition[667]: no configs at "/usr/lib/ignition/base.d"
Sep 8 23:46:07.936982 ignition[667]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Sep 8 23:46:07.937131 ignition[667]: parsed url from cmdline: ""
Sep 8 23:46:07.937134 ignition[667]: no config URL provided
Sep 8 23:46:07.937139 ignition[667]: reading system config file "/usr/lib/ignition/user.ign"
Sep 8 23:46:07.937145 ignition[667]: no config at "/usr/lib/ignition/user.ign"
Sep 8 23:46:07.937168 ignition[667]: op(1): [started] loading QEMU firmware config module
Sep 8 23:46:07.937172 ignition[667]: op(1): executing: "modprobe" "qemu_fw_cfg"
Sep 8 23:46:07.943854 ignition[667]: op(1): [finished] loading QEMU firmware config module
Sep 8 23:46:07.954245 ignition[667]: parsing config with SHA512: c22513497c91d3b9d180bab11203f3571f9fa70bc4eecd721a2837a7391ca33bc47bdaea9ff5904b4ea18e2de7a0b42b27daf15edec026b6a179a3b527d58d50
Sep 8 23:46:07.957711 unknown[667]: fetched base config from "system"
Sep 8 23:46:07.957721 unknown[667]: fetched user config from "qemu"
Sep 8 23:46:07.957973 ignition[667]: fetch-offline: fetch-offline passed
Sep 8 23:46:07.959903 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
Sep 8 23:46:07.958052 ignition[667]: Ignition finished successfully
Sep 8 23:46:07.966090 systemd-networkd[764]: lo: Link UP
Sep 8 23:46:07.966101 systemd-networkd[764]: lo: Gained carrier
Sep 8 23:46:07.967751 systemd-networkd[764]: Enumeration completed
Sep 8 23:46:07.967879 systemd[1]: Started systemd-networkd.service - Network Configuration.
Sep 8 23:46:07.968276 systemd-networkd[764]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Sep 8 23:46:07.968280 systemd-networkd[764]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Sep 8 23:46:07.969417 systemd-networkd[764]: eth0: Link UP
Sep 8 23:46:07.969421 systemd-networkd[764]: eth0: Gained carrier
Sep 8 23:46:07.969427 systemd-networkd[764]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Sep 8 23:46:07.969631 systemd[1]: Reached target network.target - Network.
Sep 8 23:46:07.970997 systemd[1]: ignition-fetch.service - Ignition (fetch) was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json).
Sep 8 23:46:07.981429 systemd-networkd[764]: eth0: DHCPv4 address 10.0.0.53/16, gateway 10.0.0.1 acquired from 10.0.0.1
Sep 8 23:46:07.981516 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
Sep 8 23:46:07.996115 ignition[769]: Ignition 2.20.0
Sep 8 23:46:07.996126 ignition[769]: Stage: kargs
Sep 8 23:46:07.996286 ignition[769]: no configs at "/usr/lib/ignition/base.d"
Sep 8 23:46:07.996295 ignition[769]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Sep 8 23:46:07.996964 ignition[769]: kargs: kargs passed
Sep 8 23:46:07.997007 ignition[769]: Ignition finished successfully
Sep 8 23:46:08.002021 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
Sep 8 23:46:08.009601 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
Sep 8 23:46:08.021545 ignition[779]: Ignition 2.20.0
Sep 8 23:46:08.021555 ignition[779]: Stage: disks
Sep 8 23:46:08.021734 ignition[779]: no configs at "/usr/lib/ignition/base.d"
Sep 8 23:46:08.024069 systemd[1]: Finished ignition-disks.service - Ignition (disks).
Sep 8 23:46:08.021744 ignition[779]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Sep 8 23:46:08.025155 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
Sep 8 23:46:08.022450 ignition[779]: disks: disks passed
Sep 8 23:46:08.026410 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Sep 8 23:46:08.022496 ignition[779]: Ignition finished successfully
Sep 8 23:46:08.028063 systemd[1]: Reached target local-fs.target - Local File Systems.
Sep 8 23:46:08.029555 systemd[1]: Reached target sysinit.target - System Initialization.
Sep 8 23:46:08.030692 systemd[1]: Reached target basic.target - Basic System.
Sep 8 23:46:08.042602 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
Sep 8 23:46:08.056388 systemd-fsck[788]: ROOT: clean, 14/553520 files, 52654/553472 blocks
Sep 8 23:46:08.060305 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
Sep 8 23:46:08.066469 systemd[1]: Mounting sysroot.mount - /sysroot...
Sep 8 23:46:08.117376 kernel: EXT4-fs (vda9): mounted filesystem 3b93848a-00fd-42cd-b996-7bf357d8ae77 r/w with ordered data mode. Quota mode: none.
Sep 8 23:46:08.118047 systemd[1]: Mounted sysroot.mount - /sysroot.
Sep 8 23:46:08.119255 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
Sep 8 23:46:08.130448 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Sep 8 23:46:08.132728 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
Sep 8 23:46:08.133645 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met.
Sep 8 23:46:08.133727 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Sep 8 23:46:08.133755 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
Sep 8 23:46:08.139889 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
Sep 8 23:46:08.142556 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/vda6 scanned by mount (796)
Sep 8 23:46:08.142944 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
Sep 8 23:46:08.146504 kernel: BTRFS info (device vda6): first mount of filesystem d1572d90-6486-4786-a65f-57e67d2def1a
Sep 8 23:46:08.146537 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm
Sep 8 23:46:08.146548 kernel: BTRFS info (device vda6): using free space tree
Sep 8 23:46:08.154379 kernel: BTRFS info (device vda6): auto enabling async discard
Sep 8 23:46:08.155728 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Sep 8 23:46:08.188152 initrd-setup-root[820]: cut: /sysroot/etc/passwd: No such file or directory
Sep 8 23:46:08.192211 initrd-setup-root[827]: cut: /sysroot/etc/group: No such file or directory
Sep 8 23:46:08.198120 initrd-setup-root[834]: cut: /sysroot/etc/shadow: No such file or directory
Sep 8 23:46:08.202031 initrd-setup-root[841]: cut: /sysroot/etc/gshadow: No such file or directory
Sep 8 23:46:08.275398 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
Sep 8 23:46:08.286474 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
Sep 8 23:46:08.288125 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
Sep 8 23:46:08.297388 kernel: BTRFS info (device vda6): last unmount of filesystem d1572d90-6486-4786-a65f-57e67d2def1a
Sep 8 23:46:08.309866 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
Sep 8 23:46:08.321108 ignition[909]: INFO : Ignition 2.20.0
Sep 8 23:46:08.321108 ignition[909]: INFO : Stage: mount
Sep 8 23:46:08.322533 ignition[909]: INFO : no configs at "/usr/lib/ignition/base.d"
Sep 8 23:46:08.322533 ignition[909]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Sep 8 23:46:08.322533 ignition[909]: INFO : mount: mount passed
Sep 8 23:46:08.322533 ignition[909]: INFO : Ignition finished successfully
Sep 8 23:46:08.324162 systemd[1]: Finished ignition-mount.service - Ignition (mount).
Sep 8 23:46:08.332572 systemd[1]: Starting ignition-files.service - Ignition (files)...
Sep 8 23:46:08.943231 systemd[1]: sysroot-oem.mount: Deactivated successfully.
Sep 8 23:46:08.964560 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Sep 8 23:46:08.972048 kernel: BTRFS: device label OEM devid 1 transid 13 /dev/vda6 scanned by mount (925)
Sep 8 23:46:08.972080 kernel: BTRFS info (device vda6): first mount of filesystem d1572d90-6486-4786-a65f-57e67d2def1a
Sep 8 23:46:08.973265 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm
Sep 8 23:46:08.973279 kernel: BTRFS info (device vda6): using free space tree
Sep 8 23:46:08.976370 kernel: BTRFS info (device vda6): auto enabling async discard
Sep 8 23:46:08.977283 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Sep 8 23:46:09.000198 ignition[942]: INFO : Ignition 2.20.0
Sep 8 23:46:09.000198 ignition[942]: INFO : Stage: files
Sep 8 23:46:09.001737 ignition[942]: INFO : no configs at "/usr/lib/ignition/base.d"
Sep 8 23:46:09.001737 ignition[942]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Sep 8 23:46:09.001737 ignition[942]: DEBUG : files: compiled without relabeling support, skipping
Sep 8 23:46:09.004838 ignition[942]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Sep 8 23:46:09.004838 ignition[942]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Sep 8 23:46:09.004838 ignition[942]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Sep 8 23:46:09.008480 ignition[942]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Sep 8 23:46:09.008480 ignition[942]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Sep 8 23:46:09.008480 ignition[942]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/home/core/install.sh"
Sep 8 23:46:09.008480 ignition[942]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/home/core/install.sh"
Sep 8 23:46:09.008480 ignition[942]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/etc/flatcar/update.conf"
Sep 8 23:46:09.008480 ignition[942]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Sep 8 23:46:09.008480 ignition[942]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.4-arm64.raw"
Sep 8 23:46:09.008480 ignition[942]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.4-arm64.raw"
Sep 8 23:46:09.008480 ignition[942]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.4-arm64.raw"
Sep 8 23:46:09.008480 ignition[942]: INFO : files: createFilesystemsFiles: createFiles: op(6): GET https://extensions.flatcar.org/extensions/kubernetes-v1.32.4-arm64.raw: attempt #1
Sep 8 23:46:09.005386 unknown[942]: wrote ssh authorized keys file for user: core
Sep 8 23:46:09.122515 systemd-networkd[764]: eth0: Gained IPv6LL
Sep 8 23:46:09.312805 ignition[942]: INFO : files: createFilesystemsFiles: createFiles: op(6): GET result: OK
Sep 8 23:46:09.771027 ignition[942]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.4-arm64.raw"
Sep 8 23:46:09.771027 ignition[942]: INFO : files: op(7): [started] processing unit "coreos-metadata.service"
Sep 8 23:46:09.775657 ignition[942]: INFO : files: op(7): op(8): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
Sep 8 23:46:09.775657 ignition[942]: INFO : files: op(7): op(8): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
Sep 8 23:46:09.775657 ignition[942]: INFO : files: op(7): [finished] processing unit "coreos-metadata.service"
Sep 8 23:46:09.775657 ignition[942]: INFO : files: op(9): [started] setting preset to disabled for "coreos-metadata.service"
Sep 8 23:46:09.800564 ignition[942]: INFO : files: op(9): op(a): [started] removing enablement symlink(s) for "coreos-metadata.service"
Sep 8 23:46:09.804110 ignition[942]: INFO : files: op(9): op(a): [finished] removing enablement symlink(s) for "coreos-metadata.service"
Sep 8 23:46:09.805558 ignition[942]: INFO : files: op(9): [finished] setting preset to disabled for "coreos-metadata.service"
Sep 8 23:46:09.805558 ignition[942]: INFO : files: createResultFile: createFiles: op(b): [started] writing file "/sysroot/etc/.ignition-result.json"
Sep 8 23:46:09.805558 ignition[942]: INFO : files: createResultFile: createFiles: op(b): [finished] writing file "/sysroot/etc/.ignition-result.json"
Sep 8 23:46:09.805558 ignition[942]: INFO : files: files passed
Sep 8 23:46:09.805558 ignition[942]: INFO : Ignition finished successfully
Sep 8 23:46:09.807467 systemd[1]: Finished ignition-files.service - Ignition (files).
Sep 8 23:46:09.825707 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
Sep 8 23:46:09.829203 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
Sep 8 23:46:09.831982 systemd[1]: ignition-quench.service: Deactivated successfully.
Sep 8 23:46:09.832104 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
Sep 8 23:46:09.837188 initrd-setup-root-after-ignition[971]: grep: /sysroot/oem/oem-release: No such file or directory
Sep 8 23:46:09.840574 initrd-setup-root-after-ignition[973]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Sep 8 23:46:09.840574 initrd-setup-root-after-ignition[973]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
Sep 8 23:46:09.843576 initrd-setup-root-after-ignition[977]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Sep 8 23:46:09.844170 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
Sep 8 23:46:09.845871 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
Sep 8 23:46:09.854577 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
Sep 8 23:46:09.872606 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Sep 8 23:46:09.872748 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
Sep 8 23:46:09.874640 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
Sep 8 23:46:09.876121 systemd[1]: Reached target initrd.target - Initrd Default Target.
Sep 8 23:46:09.877601 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
Sep 8 23:46:09.878423 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
Sep 8 23:46:09.892713 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Sep 8 23:46:09.910561 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
Sep 8 23:46:09.919484 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
Sep 8 23:46:09.920535 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
Sep 8 23:46:09.922300 systemd[1]: Stopped target timers.target - Timer Units.
Sep 8 23:46:09.923849 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Sep 8 23:46:09.923975 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Sep 8 23:46:09.928021 systemd[1]: Stopped target initrd.target - Initrd Default Target.
Sep 8 23:46:09.929701 systemd[1]: Stopped target basic.target - Basic System.
Sep 8 23:46:09.931006 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
Sep 8 23:46:09.932281 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup.
Sep 8 23:46:09.933855 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
Sep 8 23:46:09.935290 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
Sep 8 23:46:09.936752 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems.
Sep 8 23:46:09.938373 systemd[1]: Stopped target sysinit.target - System Initialization.
Sep 8 23:46:09.940172 systemd[1]: Stopped target local-fs.target - Local File Systems.
Sep 8 23:46:09.941633 systemd[1]: Stopped target swap.target - Swaps.
Sep 8 23:46:09.942905 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Sep 8 23:46:09.943041 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook.
Sep 8 23:46:09.945009 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes.
Sep 8 23:46:09.946738 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Sep 8 23:46:09.948315 systemd[1]: clevis-luks-askpass.path: Deactivated successfully.
Sep 8 23:46:09.951429 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Sep 8 23:46:09.952409 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Sep 8 23:46:09.952536 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook.
Sep 8 23:46:09.955006 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
Sep 8 23:46:09.955122 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline).
Sep 8 23:46:09.956761 systemd[1]: Stopped target paths.target - Path Units.
Sep 8 23:46:09.958044 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Sep 8 23:46:09.963406 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Sep 8 23:46:09.965457 systemd[1]: Stopped target slices.target - Slice Units. Sep 8 23:46:09.966195 systemd[1]: Stopped target sockets.target - Socket Units. Sep 8 23:46:09.967427 systemd[1]: iscsid.socket: Deactivated successfully. Sep 8 23:46:09.967518 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Sep 8 23:46:09.968902 systemd[1]: iscsiuio.socket: Deactivated successfully. Sep 8 23:46:09.968978 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Sep 8 23:46:09.970257 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Sep 8 23:46:09.970381 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Sep 8 23:46:09.971868 systemd[1]: ignition-files.service: Deactivated successfully. Sep 8 23:46:09.971969 systemd[1]: Stopped ignition-files.service - Ignition (files). Sep 8 23:46:09.990558 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... Sep 8 23:46:09.991267 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Sep 8 23:46:09.991418 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Sep 8 23:46:09.996602 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Sep 8 23:46:09.997268 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Sep 8 23:46:09.997411 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Sep 8 23:46:09.998967 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Sep 8 23:46:09.999067 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Sep 8 23:46:10.002697 ignition[999]: INFO : Ignition 2.20.0 Sep 8 23:46:10.002697 ignition[999]: INFO : Stage: umount Sep 8 23:46:10.002697 ignition[999]: INFO : no configs at "/usr/lib/ignition/base.d" Sep 8 23:46:10.002697 ignition[999]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Sep 8 23:46:10.002697 ignition[999]: INFO : umount: umount passed Sep 8 23:46:10.002697 ignition[999]: INFO : Ignition finished successfully Sep 8 23:46:10.003906 systemd[1]: ignition-mount.service: Deactivated successfully. Sep 8 23:46:10.004005 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Sep 8 23:46:10.007942 systemd[1]: initrd-cleanup.service: Deactivated successfully. Sep 8 23:46:10.008043 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. Sep 8 23:46:10.010024 systemd[1]: Stopped target network.target - Network. Sep 8 23:46:10.011445 systemd[1]: ignition-disks.service: Deactivated successfully. Sep 8 23:46:10.011520 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Sep 8 23:46:10.012900 systemd[1]: ignition-kargs.service: Deactivated successfully. Sep 8 23:46:10.012943 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Sep 8 23:46:10.014257 systemd[1]: ignition-setup.service: Deactivated successfully. Sep 8 23:46:10.014297 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Sep 8 23:46:10.016427 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Sep 8 23:46:10.016472 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Sep 8 23:46:10.018503 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Sep 8 23:46:10.019709 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... 
Sep 8 23:46:10.022865 systemd[1]: sysroot-boot.mount: Deactivated successfully. Sep 8 23:46:10.026892 systemd[1]: systemd-resolved.service: Deactivated successfully. Sep 8 23:46:10.027169 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Sep 8 23:46:10.034012 systemd[1]: run-credentials-systemd\x2dresolved.service.mount: Deactivated successfully. Sep 8 23:46:10.034614 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Sep 8 23:46:10.034714 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. Sep 8 23:46:10.037930 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup.service.mount: Deactivated successfully. Sep 8 23:46:10.038197 systemd[1]: systemd-networkd.service: Deactivated successfully. Sep 8 23:46:10.038400 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Sep 8 23:46:10.041797 systemd[1]: run-credentials-systemd\x2dnetworkd.service.mount: Deactivated successfully. Sep 8 23:46:10.042365 systemd[1]: systemd-networkd.socket: Deactivated successfully. Sep 8 23:46:10.042427 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Sep 8 23:46:10.054505 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Sep 8 23:46:10.055249 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Sep 8 23:46:10.055316 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Sep 8 23:46:10.057072 systemd[1]: systemd-sysctl.service: Deactivated successfully. Sep 8 23:46:10.057118 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Sep 8 23:46:10.059405 systemd[1]: systemd-modules-load.service: Deactivated successfully. Sep 8 23:46:10.059451 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Sep 8 23:46:10.060966 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Sep 8 23:46:10.063800 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully. Sep 8 23:46:10.069755 systemd[1]: network-cleanup.service: Deactivated successfully. Sep 8 23:46:10.069899 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Sep 8 23:46:10.077074 systemd[1]: systemd-udevd.service: Deactivated successfully. Sep 8 23:46:10.077224 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Sep 8 23:46:10.079575 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Sep 8 23:46:10.079612 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Sep 8 23:46:10.081409 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Sep 8 23:46:10.081438 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Sep 8 23:46:10.083292 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Sep 8 23:46:10.083338 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Sep 8 23:46:10.086111 systemd[1]: dracut-cmdline.service: Deactivated successfully. Sep 8 23:46:10.086153 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Sep 8 23:46:10.088742 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Sep 8 23:46:10.088780 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Sep 8 23:46:10.096593 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Sep 8 23:46:10.097394 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. 
Sep 8 23:46:10.097452 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Sep 8 23:46:10.100469 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Sep 8 23:46:10.100513 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Sep 8 23:46:10.104124 systemd[1]: sysroot-boot.service: Deactivated successfully. Sep 8 23:46:10.104206 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Sep 8 23:46:10.105880 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Sep 8 23:46:10.105949 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Sep 8 23:46:10.108155 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Sep 8 23:46:10.109057 systemd[1]: initrd-setup-root.service: Deactivated successfully. Sep 8 23:46:10.109121 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Sep 8 23:46:10.111658 systemd[1]: Starting initrd-switch-root.service - Switch Root... Sep 8 23:46:10.121547 systemd[1]: Switching root. Sep 8 23:46:10.154398 systemd-journald[239]: Journal stopped Sep 8 23:46:10.946309 systemd-journald[239]: Received SIGTERM from PID 1 (systemd). Sep 8 23:46:10.946383 kernel: SELinux: policy capability network_peer_controls=1 Sep 8 23:46:10.946400 kernel: SELinux: policy capability open_perms=1 Sep 8 23:46:10.946412 kernel: SELinux: policy capability extended_socket_class=1 Sep 8 23:46:10.946422 kernel: SELinux: policy capability always_check_network=0 Sep 8 23:46:10.946431 kernel: SELinux: policy capability cgroup_seclabel=1 Sep 8 23:46:10.946441 kernel: SELinux: policy capability nnp_nosuid_transition=1 Sep 8 23:46:10.946457 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Sep 8 23:46:10.946468 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Sep 8 23:46:10.946477 kernel: audit: type=1403 audit(1757375170.311:2): auid=4294967295 ses=4294967295 lsm=selinux res=1 Sep 8 23:46:10.946488 systemd[1]: Successfully loaded SELinux policy in 51.826ms. Sep 8 23:46:10.946506 systemd[1]: Relabeled /dev/, /dev/shm/, /run/ in 9.857ms. Sep 8 23:46:10.946517 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE) Sep 8 23:46:10.946528 systemd[1]: Detected virtualization kvm. Sep 8 23:46:10.946559 systemd[1]: Detected architecture arm64. Sep 8 23:46:10.946572 systemd[1]: Detected first boot. Sep 8 23:46:10.946584 systemd[1]: Initializing machine ID from VM UUID. Sep 8 23:46:10.946596 zram_generator::config[1045]: No configuration found. Sep 8 23:46:10.946607 kernel: NET: Registered PF_VSOCK protocol family Sep 8 23:46:10.946616 systemd[1]: Populated /etc with preset unit settings. Sep 8 23:46:10.946628 systemd[1]: run-credentials-systemd\x2djournald.service.mount: Deactivated successfully. Sep 8 23:46:10.946640 systemd[1]: initrd-switch-root.service: Deactivated successfully. Sep 8 23:46:10.946651 systemd[1]: Stopped initrd-switch-root.service - Switch Root. Sep 8 23:46:10.946661 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. Sep 8 23:46:10.946672 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. 
Sep 8 23:46:10.946690 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. Sep 8 23:46:10.946702 systemd[1]: Created slice system-getty.slice - Slice /system/getty. Sep 8 23:46:10.946714 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. Sep 8 23:46:10.946726 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. Sep 8 23:46:10.946739 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. Sep 8 23:46:10.946750 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. Sep 8 23:46:10.946760 systemd[1]: Created slice user.slice - User and Session Slice. Sep 8 23:46:10.946771 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Sep 8 23:46:10.946782 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Sep 8 23:46:10.946792 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. Sep 8 23:46:10.946803 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. Sep 8 23:46:10.946815 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. Sep 8 23:46:10.946826 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Sep 8 23:46:10.946838 systemd[1]: Expecting device dev-ttyAMA0.device - /dev/ttyAMA0... Sep 8 23:46:10.946848 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Sep 8 23:46:10.946859 systemd[1]: Stopped target initrd-switch-root.target - Switch Root. Sep 8 23:46:10.946869 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems. Sep 8 23:46:10.946879 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System. Sep 8 23:46:10.946889 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. Sep 8 23:46:10.946899 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Sep 8 23:46:10.946909 systemd[1]: Reached target remote-fs.target - Remote File Systems. Sep 8 23:46:10.946922 systemd[1]: Reached target slices.target - Slice Units. Sep 8 23:46:10.946932 systemd[1]: Reached target swap.target - Swaps. Sep 8 23:46:10.946942 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. Sep 8 23:46:10.946952 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. Sep 8 23:46:10.946962 systemd[1]: Listening on systemd-creds.socket - Credential Encryption/Decryption. Sep 8 23:46:10.946971 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Sep 8 23:46:10.946982 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Sep 8 23:46:10.946992 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Sep 8 23:46:10.947006 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. Sep 8 23:46:10.947018 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... Sep 8 23:46:10.947028 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... Sep 8 23:46:10.947039 systemd[1]: Mounting media.mount - External Media Directory... Sep 8 23:46:10.947049 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... Sep 8 23:46:10.947060 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... 
Sep 8 23:46:10.947070 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... Sep 8 23:46:10.947080 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Sep 8 23:46:10.947091 systemd[1]: Reached target machines.target - Containers. Sep 8 23:46:10.947101 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... Sep 8 23:46:10.947114 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Sep 8 23:46:10.947124 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Sep 8 23:46:10.947133 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... Sep 8 23:46:10.947144 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Sep 8 23:46:10.947153 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Sep 8 23:46:10.947164 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Sep 8 23:46:10.947173 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... Sep 8 23:46:10.947183 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Sep 8 23:46:10.947196 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Sep 8 23:46:10.947206 systemd[1]: systemd-fsck-root.service: Deactivated successfully. Sep 8 23:46:10.947216 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device. Sep 8 23:46:10.947226 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. Sep 8 23:46:10.947236 kernel: fuse: init (API version 7.39) Sep 8 23:46:10.947246 systemd[1]: Stopped systemd-fsck-usr.service. Sep 8 23:46:10.947258 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Sep 8 23:46:10.947268 kernel: loop: module loaded Sep 8 23:46:10.947278 systemd[1]: Starting systemd-journald.service - Journal Service... Sep 8 23:46:10.947290 kernel: ACPI: bus type drm_connector registered Sep 8 23:46:10.947300 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Sep 8 23:46:10.947310 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Sep 8 23:46:10.947321 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... Sep 8 23:46:10.947331 systemd[1]: Starting systemd-udev-load-credentials.service - Load udev Rules from Credentials... Sep 8 23:46:10.947436 systemd-journald[1117]: Collecting audit messages is disabled. Sep 8 23:46:10.947463 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Sep 8 23:46:10.947478 systemd-journald[1117]: Journal started Sep 8 23:46:10.947499 systemd-journald[1117]: Runtime Journal (/run/log/journal/b2bdf37446d7412bae58cd08f3ba1df1) is 5.9M, max 47.3M, 41.4M free. Sep 8 23:46:10.757565 systemd[1]: Queued start job for default target multi-user.target. Sep 8 23:46:10.766336 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6. Sep 8 23:46:10.766754 systemd[1]: systemd-journald.service: Deactivated successfully. 
Sep 8 23:46:10.949665 systemd[1]: verity-setup.service: Deactivated successfully. Sep 8 23:46:10.949711 systemd[1]: Stopped verity-setup.service. Sep 8 23:46:10.954448 systemd[1]: Started systemd-journald.service - Journal Service. Sep 8 23:46:10.954999 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. Sep 8 23:46:10.955990 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. Sep 8 23:46:10.957006 systemd[1]: Mounted media.mount - External Media Directory. Sep 8 23:46:10.957887 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. Sep 8 23:46:10.958821 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. Sep 8 23:46:10.959807 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. Sep 8 23:46:10.962381 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. Sep 8 23:46:10.963616 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Sep 8 23:46:10.964842 systemd[1]: modprobe@configfs.service: Deactivated successfully. Sep 8 23:46:10.965001 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. Sep 8 23:46:10.966254 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Sep 8 23:46:10.966433 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Sep 8 23:46:10.967736 systemd[1]: modprobe@drm.service: Deactivated successfully. Sep 8 23:46:10.967899 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Sep 8 23:46:10.969039 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Sep 8 23:46:10.969192 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Sep 8 23:46:10.971638 systemd[1]: modprobe@fuse.service: Deactivated successfully. Sep 8 23:46:10.971819 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. Sep 8 23:46:10.972972 systemd[1]: modprobe@loop.service: Deactivated successfully. Sep 8 23:46:10.973129 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Sep 8 23:46:10.974469 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Sep 8 23:46:10.975629 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Sep 8 23:46:10.978793 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. Sep 8 23:46:10.980163 systemd[1]: Finished systemd-udev-load-credentials.service - Load udev Rules from Credentials. Sep 8 23:46:10.992280 systemd[1]: Reached target network-pre.target - Preparation for Network. Sep 8 23:46:11.002468 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... Sep 8 23:46:11.004385 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... Sep 8 23:46:11.005214 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Sep 8 23:46:11.005251 systemd[1]: Reached target local-fs.target - Local File Systems. Sep 8 23:46:11.007134 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management. Sep 8 23:46:11.009236 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... Sep 8 23:46:11.011213 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... Sep 8 23:46:11.012269 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. 
Sep 8 23:46:11.013565 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... Sep 8 23:46:11.015300 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... Sep 8 23:46:11.016446 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Sep 8 23:46:11.019586 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... Sep 8 23:46:11.022490 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Sep 8 23:46:11.023655 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Sep 8 23:46:11.023982 systemd-journald[1117]: Time spent on flushing to /var/log/journal/b2bdf37446d7412bae58cd08f3ba1df1 is 20.443ms for 848 entries. Sep 8 23:46:11.023982 systemd-journald[1117]: System Journal (/var/log/journal/b2bdf37446d7412bae58cd08f3ba1df1) is 8M, max 195.6M, 187.6M free. Sep 8 23:46:11.091006 systemd-journald[1117]: Received client request to flush runtime journal. Sep 8 23:46:11.091064 kernel: loop0: detected capacity change from 0 to 123192 Sep 8 23:46:11.091082 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Sep 8 23:46:11.027025 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... Sep 8 23:46:11.031571 systemd[1]: Starting systemd-sysusers.service - Create System Users... Sep 8 23:46:11.035427 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Sep 8 23:46:11.037543 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. Sep 8 23:46:11.038727 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. Sep 8 23:46:11.040170 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Sep 8 23:46:11.053604 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization... Sep 8 23:46:11.054889 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Sep 8 23:46:11.060823 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. Sep 8 23:46:11.062455 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. Sep 8 23:46:11.064736 systemd[1]: Starting systemd-machine-id-commit.service - Save Transient machine-id to Disk... Sep 8 23:46:11.070449 udevadm[1171]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation-early.service, lvm2-activation.service not to pull it in. Sep 8 23:46:11.093441 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. Sep 8 23:46:11.096386 systemd[1]: Finished systemd-sysusers.service - Create System Users. Sep 8 23:46:11.103377 kernel: loop1: detected capacity change from 0 to 207008 Sep 8 23:46:11.106630 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Sep 8 23:46:11.109410 systemd[1]: Finished systemd-machine-id-commit.service - Save Transient machine-id to Disk. Sep 8 23:46:11.125672 systemd-tmpfiles[1180]: ACLs are not supported, ignoring. Sep 8 23:46:11.125699 systemd-tmpfiles[1180]: ACLs are not supported, ignoring. Sep 8 23:46:11.130156 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. 
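At this point the runtime journal created in the initrd is flushed to persistent storage (systemd-journal-flush.service), which is why the log reports both the 5.9M runtime journal and the 8M system journal on /var. A small usage sketch for inspecting the same thing interactively (--flush needs root):

journalctl --flush                        # ask journald to move /run/log/journal to /var/log/journal
journalctl --disk-usage                   # show how much space the persistent journal now uses
journalctl -b -u ignition-files.service   # review the Ignition messages captured above for this boot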
Sep 8 23:46:11.143407 kernel: loop2: detected capacity change from 0 to 113512 Sep 8 23:46:11.193385 kernel: loop3: detected capacity change from 0 to 123192 Sep 8 23:46:11.207383 kernel: loop4: detected capacity change from 0 to 207008 Sep 8 23:46:11.215378 kernel: loop5: detected capacity change from 0 to 113512 Sep 8 23:46:11.219780 (sd-merge)[1187]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes'. Sep 8 23:46:11.220168 (sd-merge)[1187]: Merged extensions into '/usr'. Sep 8 23:46:11.224516 systemd[1]: Reload requested from client PID 1162 ('systemd-sysext') (unit systemd-sysext.service)... Sep 8 23:46:11.224532 systemd[1]: Reloading... Sep 8 23:46:11.280389 zram_generator::config[1215]: No configuration found. Sep 8 23:46:11.381013 ldconfig[1157]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Sep 8 23:46:11.400237 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Sep 8 23:46:11.463495 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Sep 8 23:46:11.464110 systemd[1]: Reloading finished in 239 ms. Sep 8 23:46:11.482421 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. Sep 8 23:46:11.483752 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. Sep 8 23:46:11.497763 systemd[1]: Starting ensure-sysext.service... Sep 8 23:46:11.499518 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Sep 8 23:46:11.507407 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. Sep 8 23:46:11.512054 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Sep 8 23:46:11.513433 systemd[1]: Reload requested from client PID 1249 ('systemctl') (unit ensure-sysext.service)... Sep 8 23:46:11.513452 systemd[1]: Reloading... Sep 8 23:46:11.516272 systemd-tmpfiles[1250]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Sep 8 23:46:11.516508 systemd-tmpfiles[1250]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. Sep 8 23:46:11.517130 systemd-tmpfiles[1250]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Sep 8 23:46:11.517483 systemd-tmpfiles[1250]: ACLs are not supported, ignoring. Sep 8 23:46:11.517599 systemd-tmpfiles[1250]: ACLs are not supported, ignoring. Sep 8 23:46:11.520589 systemd-tmpfiles[1250]: Detected autofs mount point /boot during canonicalization of boot. Sep 8 23:46:11.520597 systemd-tmpfiles[1250]: Skipping /boot Sep 8 23:46:11.529145 systemd-tmpfiles[1250]: Detected autofs mount point /boot during canonicalization of boot. Sep 8 23:46:11.529165 systemd-tmpfiles[1250]: Skipping /boot Sep 8 23:46:11.536704 systemd-udevd[1253]: Using default interface naming scheme 'v255'. Sep 8 23:46:11.562389 zram_generator::config[1284]: No configuration found. Sep 8 23:46:11.620330 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 37 scanned by (udev-worker) (1299) Sep 8 23:46:11.687237 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. 
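The (sd-merge) lines show systemd-sysext overlaying the containerd-flatcar, docker-flatcar, and kubernetes extension images onto /usr, followed by a reload of the affected units. Outside of boot, the same merge can be driven by hand along these lines; the /etc/extensions symlink location is an assumption, while the image path is the one Ignition wrote earlier.

# Make the downloaded sysext visible to systemd-sysext, then re-merge.
ln -sf /opt/extensions/kubernetes/kubernetes-v1.32.4-arm64.raw /etc/extensions/kubernetes.raw
systemd-sysext refresh    # unmerge and re-merge all enabled extension images into /usr
systemd-sysext list       # confirm which extensions are currently merged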
Sep 8 23:46:11.764033 systemd[1]: Condition check resulted in dev-ttyAMA0.device - /dev/ttyAMA0 being skipped. Sep 8 23:46:11.764385 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. Sep 8 23:46:11.765577 systemd[1]: Reloading finished in 251 ms. Sep 8 23:46:11.779237 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Sep 8 23:46:11.795784 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Sep 8 23:46:11.813418 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization. Sep 8 23:46:11.814782 systemd[1]: Finished ensure-sysext.service. Sep 8 23:46:11.843546 systemd[1]: Starting audit-rules.service - Load Audit Rules... Sep 8 23:46:11.845766 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... Sep 8 23:46:11.846832 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Sep 8 23:46:11.847842 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes... Sep 8 23:46:11.850559 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Sep 8 23:46:11.854564 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Sep 8 23:46:11.858715 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Sep 8 23:46:11.861711 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Sep 8 23:46:11.862651 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Sep 8 23:46:11.864533 lvm[1347]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Sep 8 23:46:11.865552 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... Sep 8 23:46:11.867696 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Sep 8 23:46:11.869502 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... Sep 8 23:46:11.873651 systemd[1]: Starting systemd-networkd.service - Network Configuration... Sep 8 23:46:11.876643 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Sep 8 23:46:11.881560 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization... Sep 8 23:46:11.884807 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... Sep 8 23:46:11.887618 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Sep 8 23:46:11.889552 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Sep 8 23:46:11.889764 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Sep 8 23:46:11.891513 systemd[1]: modprobe@drm.service: Deactivated successfully. Sep 8 23:46:11.891699 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Sep 8 23:46:11.893801 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Sep 8 23:46:11.893985 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Sep 8 23:46:11.895642 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes. 
Sep 8 23:46:11.898310 systemd[1]: modprobe@loop.service: Deactivated successfully. Sep 8 23:46:11.898507 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Sep 8 23:46:11.901775 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Sep 8 23:46:11.905016 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Sep 8 23:46:11.914272 augenrules[1384]: No rules Sep 8 23:46:11.916426 systemd[1]: audit-rules.service: Deactivated successfully. Sep 8 23:46:11.916682 systemd[1]: Finished audit-rules.service - Load Audit Rules. Sep 8 23:46:11.922026 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Sep 8 23:46:11.928563 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes... Sep 8 23:46:11.929584 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Sep 8 23:46:11.929659 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Sep 8 23:46:11.931642 systemd[1]: Starting systemd-update-done.service - Update is Completed... Sep 8 23:46:11.933700 lvm[1393]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Sep 8 23:46:11.938565 systemd[1]: Starting systemd-userdbd.service - User Database Manager... Sep 8 23:46:11.940061 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Sep 8 23:46:11.943894 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. Sep 8 23:46:11.945374 systemd[1]: Finished systemd-update-done.service - Update is Completed. Sep 8 23:46:11.946712 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Sep 8 23:46:11.951650 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Sep 8 23:46:11.965403 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes. Sep 8 23:46:11.978094 systemd[1]: Started systemd-userdbd.service - User Database Manager. Sep 8 23:46:12.033453 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization. Sep 8 23:46:12.034770 systemd[1]: Reached target time-set.target - System Time Set. Sep 8 23:46:12.036156 systemd-networkd[1362]: lo: Link UP Sep 8 23:46:12.036164 systemd-networkd[1362]: lo: Gained carrier Sep 8 23:46:12.037118 systemd-networkd[1362]: Enumeration completed Sep 8 23:46:12.037218 systemd[1]: Started systemd-networkd.service - Network Configuration. Sep 8 23:46:12.037554 systemd-networkd[1362]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Sep 8 23:46:12.037563 systemd-networkd[1362]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Sep 8 23:46:12.038134 systemd-networkd[1362]: eth0: Link UP Sep 8 23:46:12.038140 systemd-networkd[1362]: eth0: Gained carrier Sep 8 23:46:12.038153 systemd-networkd[1362]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Sep 8 23:46:12.039476 systemd-resolved[1365]: Positive Trust Anchors: Sep 8 23:46:12.039741 systemd-resolved[1365]: . 
IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Sep 8 23:46:12.039818 systemd-resolved[1365]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Sep 8 23:46:12.046576 systemd-resolved[1365]: Defaulting to hostname 'linux'. Sep 8 23:46:12.052549 systemd[1]: Starting systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd... Sep 8 23:46:12.054755 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Sep 8 23:46:12.055822 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Sep 8 23:46:12.056986 systemd[1]: Reached target network.target - Network. Sep 8 23:46:12.057802 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Sep 8 23:46:12.058782 systemd[1]: Reached target sysinit.target - System Initialization. Sep 8 23:46:12.059722 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Sep 8 23:46:12.060766 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Sep 8 23:46:12.062010 systemd[1]: Started logrotate.timer - Daily rotation of log files. Sep 8 23:46:12.063110 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Sep 8 23:46:12.064248 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Sep 8 23:46:12.064435 systemd-networkd[1362]: eth0: DHCPv4 address 10.0.0.53/16, gateway 10.0.0.1 acquired from 10.0.0.1 Sep 8 23:46:12.065458 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Sep 8 23:46:12.065489 systemd[1]: Reached target paths.target - Path Units. Sep 8 23:46:12.065688 systemd-timesyncd[1371]: Network configuration changed, trying to establish connection. Sep 8 23:46:12.066517 systemd[1]: Reached target timers.target - Timer Units. Sep 8 23:46:12.068030 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Sep 8 23:46:12.070417 systemd[1]: Starting docker.socket - Docker Socket for the API... Sep 8 23:46:12.527194 systemd-timesyncd[1371]: Contacted time server 10.0.0.1:123 (10.0.0.1). Sep 8 23:46:12.527266 systemd-resolved[1365]: Clock change detected. Flushing caches. Sep 8 23:46:12.527300 systemd[1]: Listening on sshd-unix-local.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_UNIX Local). Sep 8 23:46:12.527325 systemd-timesyncd[1371]: Initial clock synchronization to Mon 2025-09-08 23:46:12.527119 UTC. Sep 8 23:46:12.528683 systemd[1]: Listening on sshd-vsock.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_VSOCK). Sep 8 23:46:12.529853 systemd[1]: Reached target ssh-access.target - SSH Access Available. Sep 8 23:46:12.540979 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Sep 8 23:46:12.542272 systemd[1]: Listening on systemd-hostnamed.socket - Hostname Service Socket. 
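eth0 is matched by the stock zz-default.network unit and configured via DHCP; the lease provides 10.0.0.53/16 with gateway 10.0.0.1. An equivalent per-interface unit would look like the sketch below (the file name is an assumption, and zz-default.network matches any interface rather than eth0 specifically).

# Minimal systemd-networkd unit equivalent to the DHCP setup logged above.
cat > /etc/systemd/network/10-eth0.network <<'EOF'
[Match]
Name=eth0

[Network]
DHCP=yes
EOF
networkctl reload    # have systemd-networkd pick up the new unit without a restart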
Sep 8 23:46:12.544999 systemd[1]: Finished systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd. Sep 8 23:46:12.546131 systemd[1]: Listening on docker.socket - Docker Socket for the API. Sep 8 23:46:12.547759 systemd[1]: Reached target sockets.target - Socket Units. Sep 8 23:46:12.548623 systemd[1]: Reached target basic.target - Basic System. Sep 8 23:46:12.549539 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Sep 8 23:46:12.549576 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Sep 8 23:46:12.550934 systemd[1]: Starting containerd.service - containerd container runtime... Sep 8 23:46:12.552902 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Sep 8 23:46:12.554796 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Sep 8 23:46:12.556718 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Sep 8 23:46:12.557744 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Sep 8 23:46:12.561350 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Sep 8 23:46:12.563221 jq[1419]: false Sep 8 23:46:12.564590 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Sep 8 23:46:12.566974 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Sep 8 23:46:12.572689 systemd[1]: Starting systemd-logind.service - User Login Management... Sep 8 23:46:12.573539 extend-filesystems[1420]: Found loop3 Sep 8 23:46:12.574976 extend-filesystems[1420]: Found loop4 Sep 8 23:46:12.574976 extend-filesystems[1420]: Found loop5 Sep 8 23:46:12.574976 extend-filesystems[1420]: Found vda Sep 8 23:46:12.574976 extend-filesystems[1420]: Found vda1 Sep 8 23:46:12.574976 extend-filesystems[1420]: Found vda2 Sep 8 23:46:12.574976 extend-filesystems[1420]: Found vda3 Sep 8 23:46:12.574976 extend-filesystems[1420]: Found usr Sep 8 23:46:12.574976 extend-filesystems[1420]: Found vda4 Sep 8 23:46:12.574976 extend-filesystems[1420]: Found vda6 Sep 8 23:46:12.574976 extend-filesystems[1420]: Found vda7 Sep 8 23:46:12.574976 extend-filesystems[1420]: Found vda9 Sep 8 23:46:12.574976 extend-filesystems[1420]: Checking size of /dev/vda9 Sep 8 23:46:12.574734 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Sep 8 23:46:12.575289 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Sep 8 23:46:12.576149 systemd[1]: Starting update-engine.service - Update Engine... Sep 8 23:46:12.582101 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Sep 8 23:46:12.590559 jq[1430]: true Sep 8 23:46:12.591098 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Sep 8 23:46:12.591351 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Sep 8 23:46:12.591665 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Sep 8 23:46:12.591872 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. 
Sep 8 23:46:12.592405 dbus-daemon[1418]: [system] SELinux support is enabled Sep 8 23:46:12.593211 systemd[1]: Started dbus.service - D-Bus System Message Bus. Sep 8 23:46:12.599711 systemd[1]: motdgen.service: Deactivated successfully. Sep 8 23:46:12.600229 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Sep 8 23:46:12.605670 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Sep 8 23:46:12.606213 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Sep 8 23:46:12.607358 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Sep 8 23:46:12.607449 update_engine[1429]: I20250908 23:46:12.607301 1429 main.cc:92] Flatcar Update Engine starting Sep 8 23:46:12.607653 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Sep 8 23:46:12.609758 extend-filesystems[1420]: Resized partition /dev/vda9 Sep 8 23:46:12.611511 jq[1437]: true Sep 8 23:46:12.611867 (ntainerd)[1439]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Sep 8 23:46:12.617001 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 37 scanned by (udev-worker) (1297) Sep 8 23:46:12.617051 update_engine[1429]: I20250908 23:46:12.616400 1429 update_check_scheduler.cc:74] Next update check in 5m26s Sep 8 23:46:12.616134 systemd[1]: Started update-engine.service - Update Engine. Sep 8 23:46:12.618835 extend-filesystems[1449]: resize2fs 1.47.1 (20-May-2024) Sep 8 23:46:12.628947 kernel: EXT4-fs (vda9): resizing filesystem from 553472 to 1864699 blocks Sep 8 23:46:12.631192 systemd[1]: Started locksmithd.service - Cluster reboot manager. Sep 8 23:46:12.647098 systemd-logind[1425]: Watching system buttons on /dev/input/event0 (Power Button) Sep 8 23:46:12.647369 systemd-logind[1425]: New seat seat0. Sep 8 23:46:12.648148 systemd[1]: Started systemd-logind.service - User Login Management. Sep 8 23:46:12.670537 kernel: EXT4-fs (vda9): resized filesystem to 1864699 Sep 8 23:46:12.695932 locksmithd[1452]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Sep 8 23:46:12.761520 extend-filesystems[1449]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required Sep 8 23:46:12.761520 extend-filesystems[1449]: old_desc_blocks = 1, new_desc_blocks = 1 Sep 8 23:46:12.761520 extend-filesystems[1449]: The filesystem on /dev/vda9 is now 1864699 (4k) blocks long. Sep 8 23:46:12.765370 extend-filesystems[1420]: Resized filesystem in /dev/vda9 Sep 8 23:46:12.764326 systemd[1]: extend-filesystems.service: Deactivated successfully. Sep 8 23:46:12.766016 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Sep 8 23:46:12.770950 bash[1467]: Updated "/home/core/.ssh/authorized_keys" Sep 8 23:46:12.771599 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Sep 8 23:46:12.773841 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met. 
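extend-filesystems.service grows the root filesystem to fill its partition: resize2fs takes /dev/vda9 from 553472 to 1864699 4k blocks while it is mounted at /. The equivalent manual step is a one-liner, sketched here on the assumption that the partition itself has already been enlarged, as it is in this boot.

resize2fs /dev/vda9    # online-grow the mounted ext4 filesystem to fill the partition
df -h /                # verify the new capacity is visible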
Sep 8 23:46:12.806657 containerd[1439]: time="2025-09-08T23:46:12.806560653Z" level=info msg="starting containerd" revision=9b2ad7760328148397346d10c7b2004271249db4 version=v1.7.23 Sep 8 23:46:12.832723 containerd[1439]: time="2025-09-08T23:46:12.832661453Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 Sep 8 23:46:12.834102 containerd[1439]: time="2025-09-08T23:46:12.834042133Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.104-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Sep 8 23:46:12.834102 containerd[1439]: time="2025-09-08T23:46:12.834076493Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 Sep 8 23:46:12.834102 containerd[1439]: time="2025-09-08T23:46:12.834094053Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 Sep 8 23:46:12.834285 containerd[1439]: time="2025-09-08T23:46:12.834264773Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1 Sep 8 23:46:12.834332 containerd[1439]: time="2025-09-08T23:46:12.834287813Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1 Sep 8 23:46:12.834360 containerd[1439]: time="2025-09-08T23:46:12.834344733Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1 Sep 8 23:46:12.834379 containerd[1439]: time="2025-09-08T23:46:12.834359413Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 Sep 8 23:46:12.834580 containerd[1439]: time="2025-09-08T23:46:12.834552053Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Sep 8 23:46:12.834580 containerd[1439]: time="2025-09-08T23:46:12.834573133Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 Sep 8 23:46:12.834623 containerd[1439]: time="2025-09-08T23:46:12.834586373Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1 Sep 8 23:46:12.834623 containerd[1439]: time="2025-09-08T23:46:12.834595693Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 Sep 8 23:46:12.834691 containerd[1439]: time="2025-09-08T23:46:12.834676733Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 Sep 8 23:46:12.834900 containerd[1439]: time="2025-09-08T23:46:12.834876893Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 Sep 8 23:46:12.835028 containerd[1439]: time="2025-09-08T23:46:12.835011493Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." 
error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Sep 8 23:46:12.835048 containerd[1439]: time="2025-09-08T23:46:12.835028653Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 Sep 8 23:46:12.835116 containerd[1439]: time="2025-09-08T23:46:12.835103533Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1 Sep 8 23:46:12.835158 containerd[1439]: time="2025-09-08T23:46:12.835147533Z" level=info msg="metadata content store policy set" policy=shared Sep 8 23:46:12.838928 containerd[1439]: time="2025-09-08T23:46:12.838885453Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 Sep 8 23:46:12.838992 containerd[1439]: time="2025-09-08T23:46:12.838952293Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 Sep 8 23:46:12.838992 containerd[1439]: time="2025-09-08T23:46:12.838978213Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1 Sep 8 23:46:12.839028 containerd[1439]: time="2025-09-08T23:46:12.838995773Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1 Sep 8 23:46:12.839028 containerd[1439]: time="2025-09-08T23:46:12.839012333Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 Sep 8 23:46:12.839215 containerd[1439]: time="2025-09-08T23:46:12.839191293Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 Sep 8 23:46:12.839481 containerd[1439]: time="2025-09-08T23:46:12.839462613Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 Sep 8 23:46:12.839589 containerd[1439]: time="2025-09-08T23:46:12.839574373Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2 Sep 8 23:46:12.839615 containerd[1439]: time="2025-09-08T23:46:12.839595133Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1 Sep 8 23:46:12.839615 containerd[1439]: time="2025-09-08T23:46:12.839609493Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1 Sep 8 23:46:12.839649 containerd[1439]: time="2025-09-08T23:46:12.839623373Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 Sep 8 23:46:12.839649 containerd[1439]: time="2025-09-08T23:46:12.839636773Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 Sep 8 23:46:12.839695 containerd[1439]: time="2025-09-08T23:46:12.839649413Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 Sep 8 23:46:12.839695 containerd[1439]: time="2025-09-08T23:46:12.839662733Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 Sep 8 23:46:12.839695 containerd[1439]: time="2025-09-08T23:46:12.839676013Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." 
type=io.containerd.service.v1 Sep 8 23:46:12.839695 containerd[1439]: time="2025-09-08T23:46:12.839690213Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 Sep 8 23:46:12.839764 containerd[1439]: time="2025-09-08T23:46:12.839702693Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 Sep 8 23:46:12.839764 containerd[1439]: time="2025-09-08T23:46:12.839715133Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 Sep 8 23:46:12.839764 containerd[1439]: time="2025-09-08T23:46:12.839734653Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 Sep 8 23:46:12.839764 containerd[1439]: time="2025-09-08T23:46:12.839749773Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 Sep 8 23:46:12.839828 containerd[1439]: time="2025-09-08T23:46:12.839768373Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 Sep 8 23:46:12.839828 containerd[1439]: time="2025-09-08T23:46:12.839780933Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 Sep 8 23:46:12.839828 containerd[1439]: time="2025-09-08T23:46:12.839792733Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 Sep 8 23:46:12.839828 containerd[1439]: time="2025-09-08T23:46:12.839804573Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 Sep 8 23:46:12.839828 containerd[1439]: time="2025-09-08T23:46:12.839816453Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 Sep 8 23:46:12.839828 containerd[1439]: time="2025-09-08T23:46:12.839828373Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 Sep 8 23:46:12.839936 containerd[1439]: time="2025-09-08T23:46:12.839841693Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1 Sep 8 23:46:12.839936 containerd[1439]: time="2025-09-08T23:46:12.839857173Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1 Sep 8 23:46:12.839936 containerd[1439]: time="2025-09-08T23:46:12.839868173Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 Sep 8 23:46:12.839936 containerd[1439]: time="2025-09-08T23:46:12.839879773Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1 Sep 8 23:46:12.839936 containerd[1439]: time="2025-09-08T23:46:12.839891133Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 Sep 8 23:46:12.839936 containerd[1439]: time="2025-09-08T23:46:12.839905173Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1 Sep 8 23:46:12.840047 containerd[1439]: time="2025-09-08T23:46:12.839943453Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1 Sep 8 23:46:12.840047 containerd[1439]: time="2025-09-08T23:46:12.839958253Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." 
type=io.containerd.grpc.v1 Sep 8 23:46:12.840047 containerd[1439]: time="2025-09-08T23:46:12.839968813Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Sep 8 23:46:12.840151 containerd[1439]: time="2025-09-08T23:46:12.840136453Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 Sep 8 23:46:12.840170 containerd[1439]: time="2025-09-08T23:46:12.840157773Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1 Sep 8 23:46:12.840190 containerd[1439]: time="2025-09-08T23:46:12.840168773Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Sep 8 23:46:12.840190 containerd[1439]: time="2025-09-08T23:46:12.840181493Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1 Sep 8 23:46:12.840223 containerd[1439]: time="2025-09-08T23:46:12.840191133Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 Sep 8 23:46:12.840223 containerd[1439]: time="2025-09-08T23:46:12.840204173Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1 Sep 8 23:46:12.840223 containerd[1439]: time="2025-09-08T23:46:12.840214093Z" level=info msg="NRI interface is disabled by configuration." Sep 8 23:46:12.840281 containerd[1439]: time="2025-09-08T23:46:12.840223853Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1 Sep 8 23:46:12.840626 containerd[1439]: time="2025-09-08T23:46:12.840561853Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: 
TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Sep 8 23:46:12.840626 containerd[1439]: time="2025-09-08T23:46:12.840621853Z" level=info msg="Connect containerd service" Sep 8 23:46:12.840758 containerd[1439]: time="2025-09-08T23:46:12.840659973Z" level=info msg="using legacy CRI server" Sep 8 23:46:12.840758 containerd[1439]: time="2025-09-08T23:46:12.840667493Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Sep 8 23:46:12.840915 containerd[1439]: time="2025-09-08T23:46:12.840900973Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Sep 8 23:46:12.841710 containerd[1439]: time="2025-09-08T23:46:12.841682333Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Sep 8 23:46:12.841912 containerd[1439]: time="2025-09-08T23:46:12.841884093Z" level=info msg="Start subscribing containerd event" Sep 8 23:46:12.842304 containerd[1439]: time="2025-09-08T23:46:12.842138613Z" level=info msg="Start recovering state" Sep 8 23:46:12.842304 containerd[1439]: time="2025-09-08T23:46:12.842222893Z" level=info msg="Start event monitor" Sep 8 23:46:12.842304 containerd[1439]: time="2025-09-08T23:46:12.842236293Z" level=info msg="Start snapshots syncer" Sep 8 23:46:12.842304 containerd[1439]: time="2025-09-08T23:46:12.842256093Z" level=info msg="Start cni network conf syncer for default" Sep 8 23:46:12.842304 containerd[1439]: time="2025-09-08T23:46:12.842274173Z" level=info msg="Start streaming server" Sep 8 23:46:12.842816 containerd[1439]: time="2025-09-08T23:46:12.842781133Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Sep 8 23:46:12.842861 containerd[1439]: time="2025-09-08T23:46:12.842832693Z" level=info msg=serving... address=/run/containerd/containerd.sock Sep 8 23:46:12.842998 systemd[1]: Started containerd.service - containerd container runtime. Sep 8 23:46:12.844518 containerd[1439]: time="2025-09-08T23:46:12.844329853Z" level=info msg="containerd successfully booted in 0.041091s" Sep 8 23:46:13.864070 systemd-networkd[1362]: eth0: Gained IPv6LL Sep 8 23:46:13.866442 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Sep 8 23:46:13.868208 systemd[1]: Reached target network-online.target - Network is Online. Sep 8 23:46:13.878195 systemd[1]: Starting coreos-metadata.service - QEMU metadata agent... Sep 8 23:46:13.880572 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 8 23:46:13.882702 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... 
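The containerd CRI plugin above starts with NetworkPluginConfDir:/etc/cni/net.d and NetworkPluginBinDir:/opt/cni/bin and immediately warns that no network config was found there, so pod networking stays down until Calico drops one in. A minimal check of that state, as a sketch (Python; the accepted file extensions are an assumption based on common CNI loader behaviour, not something this log states):

# Sketch: report whether a CNI network config exists yet, mirroring the
# containerd CRI plugin's warning above. The two paths come from the CRI
# config dump in the log; the extension list is an assumption.
from pathlib import Path

CNI_CONF_DIR = Path("/etc/cni/net.d")
CNI_BIN_DIR = Path("/opt/cni/bin")

def cni_ready() -> bool:
    confs = [p for p in CNI_CONF_DIR.glob("*")
             if p.suffix in (".conf", ".conflist", ".json")]
    plugins = [p for p in CNI_BIN_DIR.iterdir() if p.is_file()] if CNI_BIN_DIR.is_dir() else []
    print(f"configs in {CNI_CONF_DIR}: {[p.name for p in confs] or 'none'}")
    print(f"plugin binaries in {CNI_BIN_DIR}: {len(plugins)}")
    return bool(confs)

if __name__ == "__main__":
    print("CNI looks configured" if cni_ready() else
          "no network config found (matches the warning above)")

If the directory is still empty, the output matches the "cni plugin not initialized" state that keeps reappearing later in the log.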
Sep 8 23:46:13.902592 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Sep 8 23:46:13.912096 systemd[1]: coreos-metadata.service: Deactivated successfully. Sep 8 23:46:13.912350 systemd[1]: Finished coreos-metadata.service - QEMU metadata agent. Sep 8 23:46:13.913801 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Sep 8 23:46:13.974560 sshd_keygen[1442]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Sep 8 23:46:13.994463 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Sep 8 23:46:14.006340 systemd[1]: Starting issuegen.service - Generate /run/issue... Sep 8 23:46:14.011966 systemd[1]: issuegen.service: Deactivated successfully. Sep 8 23:46:14.012291 systemd[1]: Finished issuegen.service - Generate /run/issue. Sep 8 23:46:14.015159 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Sep 8 23:46:14.027060 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Sep 8 23:46:14.029867 systemd[1]: Started getty@tty1.service - Getty on tty1. Sep 8 23:46:14.032222 systemd[1]: Started serial-getty@ttyAMA0.service - Serial Getty on ttyAMA0. Sep 8 23:46:14.033425 systemd[1]: Reached target getty.target - Login Prompts. Sep 8 23:46:14.434082 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Sep 8 23:46:14.435385 systemd[1]: Reached target multi-user.target - Multi-User System. Sep 8 23:46:14.437954 (kubelet)[1525]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Sep 8 23:46:14.440101 systemd[1]: Startup finished in 518ms (kernel) + 4.615s (initrd) + 3.729s (userspace) = 8.863s. Sep 8 23:46:14.813165 kubelet[1525]: E0908 23:46:14.813041 1525 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Sep 8 23:46:14.815692 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Sep 8 23:46:14.815848 systemd[1]: kubelet.service: Failed with result 'exit-code'. Sep 8 23:46:14.816182 systemd[1]: kubelet.service: Consumed 761ms CPU time, 260.4M memory peak. Sep 8 23:46:18.679778 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Sep 8 23:46:18.680917 systemd[1]: Started sshd@0-10.0.0.53:22-10.0.0.1:35964.service - OpenSSH per-connection server daemon (10.0.0.1:35964). Sep 8 23:46:18.735614 sshd[1538]: Accepted publickey for core from 10.0.0.1 port 35964 ssh2: RSA SHA256:lgVuEL3aXZ3TBOxalJ/JJJrDh/9i9YH1dFRaYUofkN4 Sep 8 23:46:18.737407 sshd-session[1538]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 8 23:46:18.743607 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Sep 8 23:46:18.751215 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Sep 8 23:46:18.756796 systemd-logind[1425]: New session 1 of user core. Sep 8 23:46:18.762083 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Sep 8 23:46:18.764444 systemd[1]: Starting user@500.service - User Manager for UID 500... 
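The first kubelet start above exits with status 1 because /var/lib/kubelet/config.yaml does not exist yet; the unit only stays up once a configuration file is in place. A minimal, hypothetical stub written to that path could look like the sketch below (apiVersion and kind are the standard KubeletConfiguration values; the cgroupDriver choice merely mirrors SystemdCgroup:true in the containerd config above, and the real file on this host is provisioned differently):

# Sketch: write a minimal, hypothetical KubeletConfiguration to the path the
# failing unit above complains about. Needs root on a real host; the field
# values are illustrative assumptions, not taken from this machine.
from pathlib import Path
import textwrap

CONFIG_PATH = Path("/var/lib/kubelet/config.yaml")

MINIMAL_CONFIG = textwrap.dedent("""\
    apiVersion: kubelet.config.k8s.io/v1beta1
    kind: KubeletConfiguration
    cgroupDriver: systemd   # mirrors SystemdCgroup:true in the containerd config above
""")

def write_stub() -> None:
    CONFIG_PATH.parent.mkdir(parents=True, exist_ok=True)
    CONFIG_PATH.write_text(MINIMAL_CONFIG)
    print(f"wrote {CONFIG_PATH} ({len(MINIMAL_CONFIG)} bytes)")

if __name__ == "__main__":
    write_stub()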
Sep 8 23:46:18.771543 (systemd)[1542]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Sep 8 23:46:18.773714 systemd-logind[1425]: New session c1 of user core. Sep 8 23:46:18.871368 systemd[1542]: Queued start job for default target default.target. Sep 8 23:46:18.883020 systemd[1542]: Created slice app.slice - User Application Slice. Sep 8 23:46:18.883050 systemd[1542]: Reached target paths.target - Paths. Sep 8 23:46:18.883092 systemd[1542]: Reached target timers.target - Timers. Sep 8 23:46:18.884390 systemd[1542]: Starting dbus.socket - D-Bus User Message Bus Socket... Sep 8 23:46:18.894026 systemd[1542]: Listening on dbus.socket - D-Bus User Message Bus Socket. Sep 8 23:46:18.894104 systemd[1542]: Reached target sockets.target - Sockets. Sep 8 23:46:18.894151 systemd[1542]: Reached target basic.target - Basic System. Sep 8 23:46:18.894181 systemd[1542]: Reached target default.target - Main User Target. Sep 8 23:46:18.894208 systemd[1542]: Startup finished in 114ms. Sep 8 23:46:18.894414 systemd[1]: Started user@500.service - User Manager for UID 500. Sep 8 23:46:18.895994 systemd[1]: Started session-1.scope - Session 1 of User core. Sep 8 23:46:18.964666 systemd[1]: Started sshd@1-10.0.0.53:22-10.0.0.1:35978.service - OpenSSH per-connection server daemon (10.0.0.1:35978). Sep 8 23:46:19.008954 sshd[1553]: Accepted publickey for core from 10.0.0.1 port 35978 ssh2: RSA SHA256:lgVuEL3aXZ3TBOxalJ/JJJrDh/9i9YH1dFRaYUofkN4 Sep 8 23:46:19.010391 sshd-session[1553]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 8 23:46:19.015503 systemd-logind[1425]: New session 2 of user core. Sep 8 23:46:19.025160 systemd[1]: Started session-2.scope - Session 2 of User core. Sep 8 23:46:19.078690 sshd[1555]: Connection closed by 10.0.0.1 port 35978 Sep 8 23:46:19.078350 sshd-session[1553]: pam_unix(sshd:session): session closed for user core Sep 8 23:46:19.088505 systemd[1]: sshd@1-10.0.0.53:22-10.0.0.1:35978.service: Deactivated successfully. Sep 8 23:46:19.091277 systemd[1]: session-2.scope: Deactivated successfully. Sep 8 23:46:19.092633 systemd-logind[1425]: Session 2 logged out. Waiting for processes to exit. Sep 8 23:46:19.104272 systemd[1]: Started sshd@2-10.0.0.53:22-10.0.0.1:35988.service - OpenSSH per-connection server daemon (10.0.0.1:35988). Sep 8 23:46:19.105136 systemd-logind[1425]: Removed session 2. Sep 8 23:46:19.144720 sshd[1560]: Accepted publickey for core from 10.0.0.1 port 35988 ssh2: RSA SHA256:lgVuEL3aXZ3TBOxalJ/JJJrDh/9i9YH1dFRaYUofkN4 Sep 8 23:46:19.146048 sshd-session[1560]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 8 23:46:19.150239 systemd-logind[1425]: New session 3 of user core. Sep 8 23:46:19.158105 systemd[1]: Started session-3.scope - Session 3 of User core. Sep 8 23:46:19.205966 sshd[1563]: Connection closed by 10.0.0.1 port 35988 Sep 8 23:46:19.206445 sshd-session[1560]: pam_unix(sshd:session): session closed for user core Sep 8 23:46:19.221112 systemd[1]: sshd@2-10.0.0.53:22-10.0.0.1:35988.service: Deactivated successfully. Sep 8 23:46:19.223429 systemd[1]: session-3.scope: Deactivated successfully. Sep 8 23:46:19.224258 systemd-logind[1425]: Session 3 logged out. Waiting for processes to exit. Sep 8 23:46:19.234314 systemd[1]: Started sshd@3-10.0.0.53:22-10.0.0.1:35998.service - OpenSSH per-connection server daemon (10.0.0.1:35998). Sep 8 23:46:19.235390 systemd-logind[1425]: Removed session 3. 
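The sshd and systemd-logind lines above follow a fixed pattern: Accepted publickey, "New session N of user core", then "Removed session N" once the connection closes. A small sketch that pairs those open/close events, using sample strings condensed from this log:

# Sketch: pair "New session" / "Removed session" events from systemd-logind
# lines like the ones above. The sample strings are condensed from this log.
import re

SAMPLE = [
    "systemd-logind[1425]: New session 2 of user core.",
    "systemd-logind[1425]: Removed session 2.",
    "systemd-logind[1425]: New session 3 of user core.",
    "systemd-logind[1425]: Removed session 3.",
]

NEW = re.compile(r"New session (\S+) of user (\S+)\.")
GONE = re.compile(r"Removed session (\S+)\.")

def session_events(lines):
    open_sessions = {}
    for line in lines:
        if m := NEW.search(line):
            open_sessions[m.group(1)] = m.group(2)
        elif (m := GONE.search(line)) and m.group(1) in open_sessions:
            user = open_sessions.pop(m.group(1))
            print(f"session {m.group(1)} of {user}: opened and closed")
    for sid, user in open_sessions.items():
        print(f"session {sid} of {user}: still open")

session_events(SAMPLE)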
Sep 8 23:46:19.275764 sshd[1568]: Accepted publickey for core from 10.0.0.1 port 35998 ssh2: RSA SHA256:lgVuEL3aXZ3TBOxalJ/JJJrDh/9i9YH1dFRaYUofkN4 Sep 8 23:46:19.277081 sshd-session[1568]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 8 23:46:19.281563 systemd-logind[1425]: New session 4 of user core. Sep 8 23:46:19.289082 systemd[1]: Started session-4.scope - Session 4 of User core. Sep 8 23:46:19.340830 sshd[1571]: Connection closed by 10.0.0.1 port 35998 Sep 8 23:46:19.341430 sshd-session[1568]: pam_unix(sshd:session): session closed for user core Sep 8 23:46:19.355749 systemd[1]: sshd@3-10.0.0.53:22-10.0.0.1:35998.service: Deactivated successfully. Sep 8 23:46:19.359179 systemd[1]: session-4.scope: Deactivated successfully. Sep 8 23:46:19.359999 systemd-logind[1425]: Session 4 logged out. Waiting for processes to exit. Sep 8 23:46:19.368322 systemd[1]: Started sshd@4-10.0.0.53:22-10.0.0.1:36008.service - OpenSSH per-connection server daemon (10.0.0.1:36008). Sep 8 23:46:19.370208 systemd-logind[1425]: Removed session 4. Sep 8 23:46:19.410647 sshd[1576]: Accepted publickey for core from 10.0.0.1 port 36008 ssh2: RSA SHA256:lgVuEL3aXZ3TBOxalJ/JJJrDh/9i9YH1dFRaYUofkN4 Sep 8 23:46:19.412186 sshd-session[1576]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 8 23:46:19.416005 systemd-logind[1425]: New session 5 of user core. Sep 8 23:46:19.422084 systemd[1]: Started session-5.scope - Session 5 of User core. Sep 8 23:46:19.478192 sudo[1580]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Sep 8 23:46:19.478483 sudo[1580]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Sep 8 23:46:19.491860 sudo[1580]: pam_unix(sudo:session): session closed for user root Sep 8 23:46:19.493319 sshd[1579]: Connection closed by 10.0.0.1 port 36008 Sep 8 23:46:19.493719 sshd-session[1576]: pam_unix(sshd:session): session closed for user core Sep 8 23:46:19.505210 systemd[1]: sshd@4-10.0.0.53:22-10.0.0.1:36008.service: Deactivated successfully. Sep 8 23:46:19.507614 systemd[1]: session-5.scope: Deactivated successfully. Sep 8 23:46:19.509200 systemd-logind[1425]: Session 5 logged out. Waiting for processes to exit. Sep 8 23:46:19.511022 systemd[1]: Started sshd@5-10.0.0.53:22-10.0.0.1:36022.service - OpenSSH per-connection server daemon (10.0.0.1:36022). Sep 8 23:46:19.512158 systemd-logind[1425]: Removed session 5. Sep 8 23:46:19.554518 sshd[1585]: Accepted publickey for core from 10.0.0.1 port 36022 ssh2: RSA SHA256:lgVuEL3aXZ3TBOxalJ/JJJrDh/9i9YH1dFRaYUofkN4 Sep 8 23:46:19.555796 sshd-session[1585]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 8 23:46:19.560010 systemd-logind[1425]: New session 6 of user core. Sep 8 23:46:19.569090 systemd[1]: Started session-6.scope - Session 6 of User core. Sep 8 23:46:19.619848 sudo[1590]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Sep 8 23:46:19.620450 sudo[1590]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Sep 8 23:46:19.623453 sudo[1590]: pam_unix(sudo:session): session closed for user root Sep 8 23:46:19.628247 sudo[1589]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules Sep 8 23:46:19.628525 sudo[1589]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Sep 8 23:46:19.646286 systemd[1]: Starting audit-rules.service - Load Audit Rules... 
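Each sudo escalation above is logged as "caller : PWD=... ; USER=... ; COMMAND=...". A sketch that splits that format into fields, with the sample line taken from the log:

# Sketch: split a sudo log entry of the form shown above into its fields.
import re

SAMPLE = "core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1"

PATTERN = re.compile(
    r"^(?P<caller>\S+) : PWD=(?P<pwd>\S+) ; USER=(?P<runas>\S+) ; COMMAND=(?P<command>.+)$"
)

m = PATTERN.match(SAMPLE)
if m:
    print(f"{m['caller']} ran {m['command']!r} as {m['runas']} from {m['pwd']}")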
Sep 8 23:46:19.668433 augenrules[1612]: No rules Sep 8 23:46:19.669789 systemd[1]: audit-rules.service: Deactivated successfully. Sep 8 23:46:19.670036 systemd[1]: Finished audit-rules.service - Load Audit Rules. Sep 8 23:46:19.670913 sudo[1589]: pam_unix(sudo:session): session closed for user root Sep 8 23:46:19.672162 sshd[1588]: Connection closed by 10.0.0.1 port 36022 Sep 8 23:46:19.672524 sshd-session[1585]: pam_unix(sshd:session): session closed for user core Sep 8 23:46:19.685278 systemd[1]: sshd@5-10.0.0.53:22-10.0.0.1:36022.service: Deactivated successfully. Sep 8 23:46:19.686786 systemd[1]: session-6.scope: Deactivated successfully. Sep 8 23:46:19.689132 systemd-logind[1425]: Session 6 logged out. Waiting for processes to exit. Sep 8 23:46:19.702265 systemd[1]: Started sshd@6-10.0.0.53:22-10.0.0.1:36034.service - OpenSSH per-connection server daemon (10.0.0.1:36034). Sep 8 23:46:19.703403 systemd-logind[1425]: Removed session 6. Sep 8 23:46:19.741401 sshd[1620]: Accepted publickey for core from 10.0.0.1 port 36034 ssh2: RSA SHA256:lgVuEL3aXZ3TBOxalJ/JJJrDh/9i9YH1dFRaYUofkN4 Sep 8 23:46:19.742623 sshd-session[1620]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 8 23:46:19.747439 systemd-logind[1425]: New session 7 of user core. Sep 8 23:46:19.753166 systemd[1]: Started session-7.scope - Session 7 of User core. Sep 8 23:46:19.804688 sudo[1624]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Sep 8 23:46:19.805320 sudo[1624]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Sep 8 23:46:19.823291 systemd[1]: Starting coreos-metadata.service - QEMU metadata agent... Sep 8 23:46:19.838525 systemd[1]: coreos-metadata.service: Deactivated successfully. Sep 8 23:46:19.838778 systemd[1]: Finished coreos-metadata.service - QEMU metadata agent. Sep 8 23:46:20.246843 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Sep 8 23:46:20.247002 systemd[1]: kubelet.service: Consumed 761ms CPU time, 260.4M memory peak. Sep 8 23:46:20.255183 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 8 23:46:20.275475 systemd[1]: Reload requested from client PID 1665 ('systemctl') (unit session-7.scope)... Sep 8 23:46:20.275489 systemd[1]: Reloading... Sep 8 23:46:20.352965 zram_generator::config[1708]: No configuration found. Sep 8 23:46:20.558236 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Sep 8 23:46:20.651591 systemd[1]: Reloading finished in 375 ms. Sep 8 23:46:20.689257 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Sep 8 23:46:20.692474 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Sep 8 23:46:20.693172 systemd[1]: kubelet.service: Deactivated successfully. Sep 8 23:46:20.693403 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Sep 8 23:46:20.693447 systemd[1]: kubelet.service: Consumed 88ms CPU time, 95.1M memory peak. Sep 8 23:46:20.694935 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 8 23:46:20.798997 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
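The "Reload requested from client PID 1665 ('systemctl')" and "Reloading finished in 375 ms" messages, followed by kubelet.service being stopped and started again, amount to a daemon-reload plus a unit restart. A hedged sketch of that sequence driven from Python (assumes systemctl is on PATH and the script runs as root; whether install.sh does exactly this is not shown in the log):

# Sketch: the daemon-reload + restart cycle that the systemd messages above
# record. Assumes systemctl is available and we have root privileges.
import subprocess

def reload_and_restart(unit: str = "kubelet.service") -> None:
    # corresponds to "Reload requested from client PID ... ('systemctl')"
    subprocess.run(["systemctl", "daemon-reload"], check=True)
    # kubelet.service is stopped and started again afterwards in the log
    subprocess.run(["systemctl", "restart", unit], check=True)
    subprocess.run(["systemctl", "--no-pager", "status", unit], check=False)

if __name__ == "__main__":
    reload_and_restart()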
Sep 8 23:46:20.803407 (kubelet)[1755]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Sep 8 23:46:20.842910 kubelet[1755]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Sep 8 23:46:20.842910 kubelet[1755]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Sep 8 23:46:20.842910 kubelet[1755]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Sep 8 23:46:20.843200 kubelet[1755]: I0908 23:46:20.842894 1755 server.go:215] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Sep 8 23:46:21.458815 kubelet[1755]: I0908 23:46:21.458775 1755 server.go:520] "Kubelet version" kubeletVersion="v1.32.4" Sep 8 23:46:21.459951 kubelet[1755]: I0908 23:46:21.458943 1755 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Sep 8 23:46:21.459951 kubelet[1755]: I0908 23:46:21.459265 1755 server.go:954] "Client rotation is on, will bootstrap in background" Sep 8 23:46:21.483592 kubelet[1755]: I0908 23:46:21.483556 1755 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Sep 8 23:46:21.489239 kubelet[1755]: E0908 23:46:21.489121 1755 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Sep 8 23:46:21.489345 kubelet[1755]: I0908 23:46:21.489248 1755 server.go:1421] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." Sep 8 23:46:21.494290 kubelet[1755]: I0908 23:46:21.494255 1755 server.go:772] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Sep 8 23:46:21.495591 kubelet[1755]: I0908 23:46:21.495432 1755 container_manager_linux.go:268] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Sep 8 23:46:21.495822 kubelet[1755]: I0908 23:46:21.495592 1755 container_manager_linux.go:273] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"10.0.0.53","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Sep 8 23:46:21.495919 kubelet[1755]: I0908 23:46:21.495892 1755 topology_manager.go:138] "Creating topology manager with none policy" Sep 8 23:46:21.495919 kubelet[1755]: I0908 23:46:21.495903 1755 container_manager_linux.go:304] "Creating device plugin manager" Sep 8 23:46:21.496174 kubelet[1755]: I0908 23:46:21.496150 1755 state_mem.go:36] "Initialized new in-memory state store" Sep 8 23:46:21.501848 kubelet[1755]: I0908 23:46:21.501809 1755 kubelet.go:446] "Attempting to sync node with API server" Sep 8 23:46:21.501848 kubelet[1755]: I0908 23:46:21.501850 1755 kubelet.go:341] "Adding static pod path" path="/etc/kubernetes/manifests" Sep 8 23:46:21.501935 kubelet[1755]: I0908 23:46:21.501877 1755 kubelet.go:352] "Adding apiserver pod source" Sep 8 23:46:21.501935 kubelet[1755]: I0908 23:46:21.501889 1755 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Sep 8 23:46:21.503327 kubelet[1755]: E0908 23:46:21.502968 1755 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Sep 8 23:46:21.503327 kubelet[1755]: E0908 23:46:21.503273 1755 file.go:98] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Sep 8 23:46:21.506120 kubelet[1755]: I0908 23:46:21.506096 1755 kuberuntime_manager.go:269] "Container runtime initialized" containerRuntime="containerd" version="v1.7.23" apiVersion="v1" Sep 8 23:46:21.506884 kubelet[1755]: I0908 23:46:21.506862 1755 kubelet.go:890] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Sep 8 23:46:21.507103 kubelet[1755]: W0908 23:46:21.507089 1755 probe.go:272] Flexvolume plugin 
directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Sep 8 23:46:21.508059 kubelet[1755]: I0908 23:46:21.508037 1755 watchdog_linux.go:99] "Systemd watchdog is not enabled" Sep 8 23:46:21.508188 kubelet[1755]: I0908 23:46:21.508176 1755 server.go:1287] "Started kubelet" Sep 8 23:46:21.510176 kubelet[1755]: I0908 23:46:21.510097 1755 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Sep 8 23:46:21.510515 kubelet[1755]: I0908 23:46:21.510484 1755 server.go:243] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Sep 8 23:46:21.510580 kubelet[1755]: I0908 23:46:21.510557 1755 server.go:169] "Starting to listen" address="0.0.0.0" port=10250 Sep 8 23:46:21.511397 kubelet[1755]: I0908 23:46:21.511372 1755 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Sep 8 23:46:21.512599 kubelet[1755]: I0908 23:46:21.511431 1755 server.go:479] "Adding debug handlers to kubelet server" Sep 8 23:46:21.513574 kubelet[1755]: I0908 23:46:21.513456 1755 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Sep 8 23:46:21.515181 kubelet[1755]: E0908 23:46:21.514997 1755 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"10.0.0.53\" not found" Sep 8 23:46:21.515181 kubelet[1755]: I0908 23:46:21.515049 1755 volume_manager.go:297] "Starting Kubelet Volume Manager" Sep 8 23:46:21.515288 kubelet[1755]: I0908 23:46:21.515237 1755 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Sep 8 23:46:21.515934 kubelet[1755]: I0908 23:46:21.515306 1755 reconciler.go:26] "Reconciler: start to sync state" Sep 8 23:46:21.516271 kubelet[1755]: W0908 23:46:21.516148 1755 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes "10.0.0.53" is forbidden: User "system:anonymous" cannot list resource "nodes" in API group "" at the cluster scope Sep 8 23:46:21.516271 kubelet[1755]: E0908 23:46:21.516201 1755 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes \"10.0.0.53\" is forbidden: User \"system:anonymous\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" Sep 8 23:46:21.516363 kubelet[1755]: W0908 23:46:21.516285 1755 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:anonymous" cannot list resource "services" in API group "" at the cluster scope Sep 8 23:46:21.516363 kubelet[1755]: I0908 23:46:21.516286 1755 factory.go:221] Registration of the systemd container factory successfully Sep 8 23:46:21.516446 kubelet[1755]: I0908 23:46:21.516414 1755 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Sep 8 23:46:21.516891 kubelet[1755]: E0908 23:46:21.516315 1755 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User \"system:anonymous\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" Sep 8 23:46:21.518079 kubelet[1755]: E0908 23:46:21.517967 1755 kubelet.go:1555] "Image garbage collection failed 
once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Sep 8 23:46:21.518371 kubelet[1755]: I0908 23:46:21.518351 1755 factory.go:221] Registration of the containerd container factory successfully Sep 8 23:46:21.526319 kubelet[1755]: E0908 23:46:21.526267 1755 controller.go:145] "Failed to ensure lease exists, will retry" err="leases.coordination.k8s.io \"10.0.0.53\" is forbidden: User \"system:anonymous\" cannot get resource \"leases\" in API group \"coordination.k8s.io\" in the namespace \"kube-node-lease\"" interval="200ms" Sep 8 23:46:21.526421 kubelet[1755]: W0908 23:46:21.526383 1755 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:anonymous" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope Sep 8 23:46:21.526448 kubelet[1755]: E0908 23:46:21.526414 1755 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:anonymous\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" Sep 8 23:46:21.527963 kubelet[1755]: E0908 23:46:21.526413 1755 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{10.0.0.53.1863736aa6e15d05 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:10.0.0.53,UID:10.0.0.53,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:10.0.0.53,},FirstTimestamp:2025-09-08 23:46:21.508140293 +0000 UTC m=+0.701453721,LastTimestamp:2025-09-08 23:46:21.508140293 +0000 UTC m=+0.701453721,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:10.0.0.53,}" Sep 8 23:46:21.530838 kubelet[1755]: E0908 23:46:21.530734 1755 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{10.0.0.53.1863736aa776e34d default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:10.0.0.53,UID:10.0.0.53,APIVersion:,ResourceVersion:,FieldPath:,},Reason:InvalidDiskCapacity,Message:invalid capacity 0 on image filesystem,Source:EventSource{Component:kubelet,Host:10.0.0.53,},FirstTimestamp:2025-09-08 23:46:21.517939533 +0000 UTC m=+0.711252961,LastTimestamp:2025-09-08 23:46:21.517939533 +0000 UTC m=+0.711252961,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:10.0.0.53,}" Sep 8 23:46:21.536187 kubelet[1755]: I0908 23:46:21.536147 1755 cpu_manager.go:221] "Starting CPU manager" policy="none" Sep 8 23:46:21.536187 kubelet[1755]: I0908 23:46:21.536175 1755 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Sep 8 23:46:21.536187 kubelet[1755]: I0908 23:46:21.536196 1755 state_mem.go:36] "Initialized new in-memory state store" Sep 8 23:46:21.606758 kubelet[1755]: I0908 23:46:21.606686 1755 policy_none.go:49] "None policy: Start" Sep 8 23:46:21.606758 
kubelet[1755]: I0908 23:46:21.606729 1755 memory_manager.go:186] "Starting memorymanager" policy="None" Sep 8 23:46:21.606758 kubelet[1755]: I0908 23:46:21.606744 1755 state_mem.go:35] "Initializing new in-memory state store" Sep 8 23:46:21.615108 kubelet[1755]: E0908 23:46:21.615069 1755 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"10.0.0.53\" not found" Sep 8 23:46:21.617469 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Sep 8 23:46:21.628426 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Sep 8 23:46:21.630874 kubelet[1755]: I0908 23:46:21.630816 1755 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Sep 8 23:46:21.632999 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. Sep 8 23:46:21.633131 kubelet[1755]: I0908 23:46:21.633091 1755 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Sep 8 23:46:21.633131 kubelet[1755]: I0908 23:46:21.633119 1755 status_manager.go:227] "Starting to sync pod status with apiserver" Sep 8 23:46:21.633251 kubelet[1755]: I0908 23:46:21.633136 1755 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." Sep 8 23:46:21.633251 kubelet[1755]: I0908 23:46:21.633143 1755 kubelet.go:2382] "Starting kubelet main sync loop" Sep 8 23:46:21.633251 kubelet[1755]: E0908 23:46:21.633192 1755 kubelet.go:2406] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Sep 8 23:46:21.642010 kubelet[1755]: I0908 23:46:21.641962 1755 manager.go:519] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Sep 8 23:46:21.642238 kubelet[1755]: I0908 23:46:21.642172 1755 eviction_manager.go:189] "Eviction manager: starting control loop" Sep 8 23:46:21.642238 kubelet[1755]: I0908 23:46:21.642188 1755 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Sep 8 23:46:21.642604 kubelet[1755]: I0908 23:46:21.642519 1755 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Sep 8 23:46:21.643262 kubelet[1755]: E0908 23:46:21.643243 1755 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." 
err="no imagefs label for configured runtime" Sep 8 23:46:21.643327 kubelet[1755]: E0908 23:46:21.643281 1755 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"10.0.0.53\" not found" Sep 8 23:46:21.730954 kubelet[1755]: E0908 23:46:21.730833 1755 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"10.0.0.53\" not found" node="10.0.0.53" Sep 8 23:46:21.744051 kubelet[1755]: I0908 23:46:21.744019 1755 kubelet_node_status.go:75] "Attempting to register node" node="10.0.0.53" Sep 8 23:46:21.749419 kubelet[1755]: I0908 23:46:21.749389 1755 kubelet_node_status.go:78] "Successfully registered node" node="10.0.0.53" Sep 8 23:46:21.749419 kubelet[1755]: E0908 23:46:21.749421 1755 kubelet_node_status.go:548] "Error updating node status, will retry" err="error getting node \"10.0.0.53\": node \"10.0.0.53\" not found" Sep 8 23:46:21.760255 kubelet[1755]: E0908 23:46:21.760200 1755 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"10.0.0.53\" not found" Sep 8 23:46:21.838715 sudo[1624]: pam_unix(sudo:session): session closed for user root Sep 8 23:46:21.839937 sshd[1623]: Connection closed by 10.0.0.1 port 36034 Sep 8 23:46:21.840340 sshd-session[1620]: pam_unix(sshd:session): session closed for user core Sep 8 23:46:21.843092 systemd[1]: sshd@6-10.0.0.53:22-10.0.0.1:36034.service: Deactivated successfully. Sep 8 23:46:21.844974 systemd[1]: session-7.scope: Deactivated successfully. Sep 8 23:46:21.845158 systemd[1]: session-7.scope: Consumed 391ms CPU time, 75.5M memory peak. Sep 8 23:46:21.846867 systemd-logind[1425]: Session 7 logged out. Waiting for processes to exit. Sep 8 23:46:21.847884 systemd-logind[1425]: Removed session 7. 
Sep 8 23:46:21.860425 kubelet[1755]: E0908 23:46:21.860373 1755 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"10.0.0.53\" not found" Sep 8 23:46:21.961004 kubelet[1755]: E0908 23:46:21.960963 1755 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"10.0.0.53\" not found" Sep 8 23:46:22.061571 kubelet[1755]: E0908 23:46:22.061452 1755 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"10.0.0.53\" not found" Sep 8 23:46:22.162123 kubelet[1755]: E0908 23:46:22.162074 1755 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"10.0.0.53\" not found" Sep 8 23:46:22.262746 kubelet[1755]: E0908 23:46:22.262694 1755 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"10.0.0.53\" not found" Sep 8 23:46:22.363410 kubelet[1755]: E0908 23:46:22.363370 1755 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"10.0.0.53\" not found" Sep 8 23:46:22.461369 kubelet[1755]: I0908 23:46:22.461316 1755 transport.go:147] "Certificate rotation detected, shutting down client connections to start using new credentials" Sep 8 23:46:22.461522 kubelet[1755]: W0908 23:46:22.461496 1755 reflector.go:492] k8s.io/client-go/informers/factory.go:160: watch of *v1.RuntimeClass ended with: very short watch: k8s.io/client-go/informers/factory.go:160: Unexpected watch close - watch lasted less than a second and no items received Sep 8 23:46:22.464521 kubelet[1755]: E0908 23:46:22.464456 1755 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"10.0.0.53\" not found" Sep 8 23:46:22.503733 kubelet[1755]: E0908 23:46:22.503692 1755 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Sep 8 23:46:22.565268 kubelet[1755]: E0908 23:46:22.565201 1755 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"10.0.0.53\" not found" Sep 8 23:46:22.666238 kubelet[1755]: E0908 23:46:22.666108 1755 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"10.0.0.53\" not found" Sep 8 23:46:22.767900 kubelet[1755]: I0908 23:46:22.767863 1755 kuberuntime_manager.go:1702] "Updating runtime config through cri with podcidr" CIDR="192.168.1.0/24" Sep 8 23:46:22.770023 containerd[1439]: time="2025-09-08T23:46:22.769977533Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." 
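The kubelet just pushed PodCIDR 192.168.1.0/24 to the runtime ("Updating runtime config through cri with podcidr"). A stdlib sketch of what that range provides, using only the CIDR value from the log:

# Sketch: inspect the PodCIDR that the kubelet pushes to the runtime above.
# The CIDR value comes from the log; the rest is stdlib arithmetic.
import ipaddress

pod_cidr = ipaddress.ip_network("192.168.1.0/24")

print(f"{pod_cidr} -> {pod_cidr.num_addresses} addresses "
      f"({pod_cidr.num_addresses - 2} usable if network/broadcast are excluded)")
print("first few pod IPs:", [str(ip) for ip in list(pod_cidr.hosts())[:4]])
print("contains 192.168.1.57:", ipaddress.ip_address("192.168.1.57") in pod_cidr)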
Sep 8 23:46:22.770623 kubelet[1755]: I0908 23:46:22.770452 1755 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.1.0/24" Sep 8 23:46:23.504502 kubelet[1755]: E0908 23:46:23.504455 1755 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Sep 8 23:46:23.505268 kubelet[1755]: I0908 23:46:23.504917 1755 apiserver.go:52] "Watching apiserver" Sep 8 23:46:23.511814 kubelet[1755]: E0908 23:46:23.511223 1755 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-ll9w8" podUID="17880a9e-28fa-4983-b764-f48424ebdd3b" Sep 8 23:46:23.515915 kubelet[1755]: I0908 23:46:23.515869 1755 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Sep 8 23:46:23.517515 systemd[1]: Created slice kubepods-besteffort-pod726f7825_b8c9_49b4_8865_6c0f1905a071.slice - libcontainer container kubepods-besteffort-pod726f7825_b8c9_49b4_8865_6c0f1905a071.slice. Sep 8 23:46:23.527544 kubelet[1755]: I0908 23:46:23.527464 1755 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c4xnt\" (UniqueName: \"kubernetes.io/projected/17880a9e-28fa-4983-b764-f48424ebdd3b-kube-api-access-c4xnt\") pod \"csi-node-driver-ll9w8\" (UID: \"17880a9e-28fa-4983-b764-f48424ebdd3b\") " pod="calico-system/csi-node-driver-ll9w8" Sep 8 23:46:23.527544 kubelet[1755]: I0908 23:46:23.527530 1755 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/726f7825-b8c9-49b4-8865-6c0f1905a071-kube-proxy\") pod \"kube-proxy-blxwq\" (UID: \"726f7825-b8c9-49b4-8865-6c0f1905a071\") " pod="kube-system/kube-proxy-blxwq" Sep 8 23:46:23.527544 kubelet[1755]: I0908 23:46:23.527550 1755 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/c33c47f0-2808-4338-86ec-6ba784dc2303-cni-bin-dir\") pod \"calico-node-kvjsd\" (UID: \"c33c47f0-2808-4338-86ec-6ba784dc2303\") " pod="calico-system/calico-node-kvjsd" Sep 8 23:46:23.527732 kubelet[1755]: I0908 23:46:23.527570 1755 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/c33c47f0-2808-4338-86ec-6ba784dc2303-node-certs\") pod \"calico-node-kvjsd\" (UID: \"c33c47f0-2808-4338-86ec-6ba784dc2303\") " pod="calico-system/calico-node-kvjsd" Sep 8 23:46:23.527732 kubelet[1755]: I0908 23:46:23.527587 1755 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c33c47f0-2808-4338-86ec-6ba784dc2303-tigera-ca-bundle\") pod \"calico-node-kvjsd\" (UID: \"c33c47f0-2808-4338-86ec-6ba784dc2303\") " pod="calico-system/calico-node-kvjsd" Sep 8 23:46:23.527732 kubelet[1755]: I0908 23:46:23.527603 1755 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/c33c47f0-2808-4338-86ec-6ba784dc2303-xtables-lock\") pod \"calico-node-kvjsd\" (UID: \"c33c47f0-2808-4338-86ec-6ba784dc2303\") " pod="calico-system/calico-node-kvjsd" Sep 8 23:46:23.527732 kubelet[1755]: I0908 23:46:23.527617 1755 
reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/17880a9e-28fa-4983-b764-f48424ebdd3b-kubelet-dir\") pod \"csi-node-driver-ll9w8\" (UID: \"17880a9e-28fa-4983-b764-f48424ebdd3b\") " pod="calico-system/csi-node-driver-ll9w8" Sep 8 23:46:23.527732 kubelet[1755]: I0908 23:46:23.527632 1755 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/c33c47f0-2808-4338-86ec-6ba784dc2303-cni-log-dir\") pod \"calico-node-kvjsd\" (UID: \"c33c47f0-2808-4338-86ec-6ba784dc2303\") " pod="calico-system/calico-node-kvjsd" Sep 8 23:46:23.527834 kubelet[1755]: I0908 23:46:23.527646 1755 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/c33c47f0-2808-4338-86ec-6ba784dc2303-cni-net-dir\") pod \"calico-node-kvjsd\" (UID: \"c33c47f0-2808-4338-86ec-6ba784dc2303\") " pod="calico-system/calico-node-kvjsd" Sep 8 23:46:23.527834 kubelet[1755]: I0908 23:46:23.527665 1755 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/17880a9e-28fa-4983-b764-f48424ebdd3b-registration-dir\") pod \"csi-node-driver-ll9w8\" (UID: \"17880a9e-28fa-4983-b764-f48424ebdd3b\") " pod="calico-system/csi-node-driver-ll9w8" Sep 8 23:46:23.527834 kubelet[1755]: I0908 23:46:23.527679 1755 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/726f7825-b8c9-49b4-8865-6c0f1905a071-lib-modules\") pod \"kube-proxy-blxwq\" (UID: \"726f7825-b8c9-49b4-8865-6c0f1905a071\") " pod="kube-system/kube-proxy-blxwq" Sep 8 23:46:23.527834 kubelet[1755]: I0908 23:46:23.527693 1755 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/c33c47f0-2808-4338-86ec-6ba784dc2303-var-lib-calico\") pod \"calico-node-kvjsd\" (UID: \"c33c47f0-2808-4338-86ec-6ba784dc2303\") " pod="calico-system/calico-node-kvjsd" Sep 8 23:46:23.527834 kubelet[1755]: I0908 23:46:23.527708 1755 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/726f7825-b8c9-49b4-8865-6c0f1905a071-xtables-lock\") pod \"kube-proxy-blxwq\" (UID: \"726f7825-b8c9-49b4-8865-6c0f1905a071\") " pod="kube-system/kube-proxy-blxwq" Sep 8 23:46:23.527959 kubelet[1755]: I0908 23:46:23.527746 1755 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/17880a9e-28fa-4983-b764-f48424ebdd3b-socket-dir\") pod \"csi-node-driver-ll9w8\" (UID: \"17880a9e-28fa-4983-b764-f48424ebdd3b\") " pod="calico-system/csi-node-driver-ll9w8" Sep 8 23:46:23.527959 kubelet[1755]: I0908 23:46:23.527764 1755 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/17880a9e-28fa-4983-b764-f48424ebdd3b-varrun\") pod \"csi-node-driver-ll9w8\" (UID: \"17880a9e-28fa-4983-b764-f48424ebdd3b\") " pod="calico-system/csi-node-driver-ll9w8" Sep 8 23:46:23.527959 kubelet[1755]: I0908 23:46:23.527808 1755 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"kube-api-access-cp9p7\" (UniqueName: \"kubernetes.io/projected/726f7825-b8c9-49b4-8865-6c0f1905a071-kube-api-access-cp9p7\") pod \"kube-proxy-blxwq\" (UID: \"726f7825-b8c9-49b4-8865-6c0f1905a071\") " pod="kube-system/kube-proxy-blxwq" Sep 8 23:46:23.527959 kubelet[1755]: I0908 23:46:23.527836 1755 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/c33c47f0-2808-4338-86ec-6ba784dc2303-flexvol-driver-host\") pod \"calico-node-kvjsd\" (UID: \"c33c47f0-2808-4338-86ec-6ba784dc2303\") " pod="calico-system/calico-node-kvjsd" Sep 8 23:46:23.527959 kubelet[1755]: I0908 23:46:23.527864 1755 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/c33c47f0-2808-4338-86ec-6ba784dc2303-lib-modules\") pod \"calico-node-kvjsd\" (UID: \"c33c47f0-2808-4338-86ec-6ba784dc2303\") " pod="calico-system/calico-node-kvjsd" Sep 8 23:46:23.528090 kubelet[1755]: I0908 23:46:23.527989 1755 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/c33c47f0-2808-4338-86ec-6ba784dc2303-policysync\") pod \"calico-node-kvjsd\" (UID: \"c33c47f0-2808-4338-86ec-6ba784dc2303\") " pod="calico-system/calico-node-kvjsd" Sep 8 23:46:23.528090 kubelet[1755]: I0908 23:46:23.528009 1755 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/c33c47f0-2808-4338-86ec-6ba784dc2303-var-run-calico\") pod \"calico-node-kvjsd\" (UID: \"c33c47f0-2808-4338-86ec-6ba784dc2303\") " pod="calico-system/calico-node-kvjsd" Sep 8 23:46:23.528090 kubelet[1755]: I0908 23:46:23.528025 1755 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bcpd2\" (UniqueName: \"kubernetes.io/projected/c33c47f0-2808-4338-86ec-6ba784dc2303-kube-api-access-bcpd2\") pod \"calico-node-kvjsd\" (UID: \"c33c47f0-2808-4338-86ec-6ba784dc2303\") " pod="calico-system/calico-node-kvjsd" Sep 8 23:46:23.532895 systemd[1]: Created slice kubepods-besteffort-podc33c47f0_2808_4338_86ec_6ba784dc2303.slice - libcontainer container kubepods-besteffort-podc33c47f0_2808_4338_86ec_6ba784dc2303.slice. Sep 8 23:46:23.629657 kubelet[1755]: E0908 23:46:23.629569 1755 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 8 23:46:23.629657 kubelet[1755]: W0908 23:46:23.629590 1755 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 8 23:46:23.629657 kubelet[1755]: E0908 23:46:23.629609 1755 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 8 23:46:23.629950 kubelet[1755]: E0908 23:46:23.629906 1755 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 8 23:46:23.629950 kubelet[1755]: W0908 23:46:23.629934 1755 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 8 23:46:23.629950 kubelet[1755]: E0908 23:46:23.629948 1755 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 8 23:46:23.631966 kubelet[1755]: E0908 23:46:23.631838 1755 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 8 23:46:23.632335 kubelet[1755]: W0908 23:46:23.632168 1755 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 8 23:46:23.632335 kubelet[1755]: E0908 23:46:23.632329 1755 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 8 23:46:23.643596 kubelet[1755]: E0908 23:46:23.643565 1755 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 8 23:46:23.643596 kubelet[1755]: W0908 23:46:23.643588 1755 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 8 23:46:23.643748 kubelet[1755]: E0908 23:46:23.643621 1755 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 8 23:46:23.643880 kubelet[1755]: E0908 23:46:23.643868 1755 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 8 23:46:23.643909 kubelet[1755]: W0908 23:46:23.643880 1755 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 8 23:46:23.643909 kubelet[1755]: E0908 23:46:23.643890 1755 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 8 23:46:23.645667 kubelet[1755]: E0908 23:46:23.645604 1755 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 8 23:46:23.645667 kubelet[1755]: W0908 23:46:23.645620 1755 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 8 23:46:23.645667 kubelet[1755]: E0908 23:46:23.645634 1755 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 8 23:46:23.832738 containerd[1439]: time="2025-09-08T23:46:23.831898733Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-blxwq,Uid:726f7825-b8c9-49b4-8865-6c0f1905a071,Namespace:kube-system,Attempt:0,}" Sep 8 23:46:23.837105 containerd[1439]: time="2025-09-08T23:46:23.837062413Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-kvjsd,Uid:c33c47f0-2808-4338-86ec-6ba784dc2303,Namespace:calico-system,Attempt:0,}" Sep 8 23:46:24.383741 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount50751980.mount: Deactivated successfully. Sep 8 23:46:24.394303 containerd[1439]: time="2025-09-08T23:46:24.394248813Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Sep 8 23:46:24.400042 containerd[1439]: time="2025-09-08T23:46:24.399744773Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=269175" Sep 8 23:46:24.400632 containerd[1439]: time="2025-09-08T23:46:24.400599093Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Sep 8 23:46:24.402690 containerd[1439]: time="2025-09-08T23:46:24.401608653Z" level=info msg="ImageCreate event name:\"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Sep 8 23:46:24.402690 containerd[1439]: time="2025-09-08T23:46:24.402522733Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Sep 8 23:46:24.407299 containerd[1439]: time="2025-09-08T23:46:24.407228773Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Sep 8 23:46:24.409503 containerd[1439]: time="2025-09-08T23:46:24.409254493Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 572.10548ms" Sep 8 23:46:24.410116 containerd[1439]: time="2025-09-08T23:46:24.410081173Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 578.08464ms" Sep 8 23:46:24.505348 kubelet[1755]: E0908 23:46:24.505299 1755 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Sep 8 23:46:24.516961 containerd[1439]: time="2025-09-08T23:46:24.515195813Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 8 23:46:24.516961 containerd[1439]: time="2025-09-08T23:46:24.515275133Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 8 23:46:24.516961 containerd[1439]: time="2025-09-08T23:46:24.515291293Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 8 23:46:24.518827 containerd[1439]: time="2025-09-08T23:46:24.515383773Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 8 23:46:24.521588 containerd[1439]: time="2025-09-08T23:46:24.521487693Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 8 23:46:24.521588 containerd[1439]: time="2025-09-08T23:46:24.521544173Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 8 23:46:24.521588 containerd[1439]: time="2025-09-08T23:46:24.521560093Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 8 23:46:24.522860 containerd[1439]: time="2025-09-08T23:46:24.522769093Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 8 23:46:24.599146 systemd[1]: Started cri-containerd-12f4cecbc76c7c0f71566de24bad01b5f0f7f85287419159c565492aa4c5cf31.scope - libcontainer container 12f4cecbc76c7c0f71566de24bad01b5f0f7f85287419159c565492aa4c5cf31. Sep 8 23:46:24.600713 systemd[1]: Started cri-containerd-c3a258a3778d0941e79f183703279df189165ba4612026ed4df92f0beee565ee.scope - libcontainer container c3a258a3778d0941e79f183703279df189165ba4612026ed4df92f0beee565ee. Sep 8 23:46:24.628454 containerd[1439]: time="2025-09-08T23:46:24.627780253Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-kvjsd,Uid:c33c47f0-2808-4338-86ec-6ba784dc2303,Namespace:calico-system,Attempt:0,} returns sandbox id \"12f4cecbc76c7c0f71566de24bad01b5f0f7f85287419159c565492aa4c5cf31\"" Sep 8 23:46:24.631285 containerd[1439]: time="2025-09-08T23:46:24.631129173Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.3\"" Sep 8 23:46:24.632000 containerd[1439]: time="2025-09-08T23:46:24.631838093Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-blxwq,Uid:726f7825-b8c9-49b4-8865-6c0f1905a071,Namespace:kube-system,Attempt:0,} returns sandbox id \"c3a258a3778d0941e79f183703279df189165ba4612026ed4df92f0beee565ee\"" Sep 8 23:46:25.506475 kubelet[1755]: E0908 23:46:25.506420 1755 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Sep 8 23:46:25.634006 kubelet[1755]: E0908 23:46:25.633969 1755 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-ll9w8" podUID="17880a9e-28fa-4983-b764-f48424ebdd3b" Sep 8 23:46:25.660472 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2093554269.mount: Deactivated successfully. 
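The pause-image pull above reports a size of 268403 bytes completing in 572.10548ms. Taking those reported figures at face value (the image may well have been resolved locally, so this is only the implied rate), a quick arithmetic sketch:

# Sketch: back-of-the-envelope rate for the pause image pull reported above.
# Byte count and duration are the figures from the log.
size_bytes = 268_403          # 'size "268403"' in the Pulled image message
duration_s = 0.57210548       # "in 572.10548ms"

rate = size_bytes / duration_s
print(f"~{rate / 1024:.0f} KiB/s ({rate / 1e6:.2f} MB/s) for registry.k8s.io/pause:3.8")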
Sep 8 23:46:25.715170 containerd[1439]: time="2025-09-08T23:46:25.715124973Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 8 23:46:25.715737 containerd[1439]: time="2025-09-08T23:46:25.715674813Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.3: active requests=0, bytes read=5636193" Sep 8 23:46:25.716255 containerd[1439]: time="2025-09-08T23:46:25.716231613Z" level=info msg="ImageCreate event name:\"sha256:29e6f31ad72882b1b817dd257df6b7981e4d7d31d872b7fe2cf102c6e2af27a5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 8 23:46:25.718127 containerd[1439]: time="2025-09-08T23:46:25.718095493Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:81bdfcd9dbd36624dc35354e8c181c75631ba40e6c7df5820f5f56cea36f0ef9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 8 23:46:25.718902 containerd[1439]: time="2025-09-08T23:46:25.718861173Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.3\" with image id \"sha256:29e6f31ad72882b1b817dd257df6b7981e4d7d31d872b7fe2cf102c6e2af27a5\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:81bdfcd9dbd36624dc35354e8c181c75631ba40e6c7df5820f5f56cea36f0ef9\", size \"5636015\" in 1.08767288s" Sep 8 23:46:25.718902 containerd[1439]: time="2025-09-08T23:46:25.718895493Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.3\" returns image reference \"sha256:29e6f31ad72882b1b817dd257df6b7981e4d7d31d872b7fe2cf102c6e2af27a5\"" Sep 8 23:46:25.720111 containerd[1439]: time="2025-09-08T23:46:25.719893253Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.8\"" Sep 8 23:46:25.721244 containerd[1439]: time="2025-09-08T23:46:25.721189573Z" level=info msg="CreateContainer within sandbox \"12f4cecbc76c7c0f71566de24bad01b5f0f7f85287419159c565492aa4c5cf31\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}" Sep 8 23:46:25.735522 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount956176464.mount: Deactivated successfully. Sep 8 23:46:25.739983 containerd[1439]: time="2025-09-08T23:46:25.739937293Z" level=info msg="CreateContainer within sandbox \"12f4cecbc76c7c0f71566de24bad01b5f0f7f85287419159c565492aa4c5cf31\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"d07acdcfe98dd12a9e6fa1bb62a1eb09031510ae2c340cd78e8b878f6c777580\"" Sep 8 23:46:25.740757 containerd[1439]: time="2025-09-08T23:46:25.740685653Z" level=info msg="StartContainer for \"d07acdcfe98dd12a9e6fa1bb62a1eb09031510ae2c340cd78e8b878f6c777580\"" Sep 8 23:46:25.771118 systemd[1]: Started cri-containerd-d07acdcfe98dd12a9e6fa1bb62a1eb09031510ae2c340cd78e8b878f6c777580.scope - libcontainer container d07acdcfe98dd12a9e6fa1bb62a1eb09031510ae2c340cd78e8b878f6c777580. Sep 8 23:46:25.801151 containerd[1439]: time="2025-09-08T23:46:25.801096613Z" level=info msg="StartContainer for \"d07acdcfe98dd12a9e6fa1bb62a1eb09031510ae2c340cd78e8b878f6c777580\" returns successfully" Sep 8 23:46:25.810452 systemd[1]: cri-containerd-d07acdcfe98dd12a9e6fa1bb62a1eb09031510ae2c340cd78e8b878f6c777580.scope: Deactivated successfully. 
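[Annotation] The flexvol-driver container's scope deactivating moments after a successful start is expected: it is calico-node's init container, which typically just installs the FlexVolume driver binary and exits, and the shim-disconnected / mount-cleanup lines that follow are the normal aftermath. An illustrative node-side check, assuming crictl is available (the --name value is taken from the ContainerMetadata above):

  # Exited init containers remain visible with -a; exit code 0 means the install step succeeded
  crictl ps -a --name flexvol-driver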
Sep 8 23:46:25.855946 containerd[1439]: time="2025-09-08T23:46:25.855871013Z" level=info msg="shim disconnected" id=d07acdcfe98dd12a9e6fa1bb62a1eb09031510ae2c340cd78e8b878f6c777580 namespace=k8s.io Sep 8 23:46:25.855946 containerd[1439]: time="2025-09-08T23:46:25.855938013Z" level=warning msg="cleaning up after shim disconnected" id=d07acdcfe98dd12a9e6fa1bb62a1eb09031510ae2c340cd78e8b878f6c777580 namespace=k8s.io Sep 8 23:46:25.855946 containerd[1439]: time="2025-09-08T23:46:25.855946973Z" level=info msg="cleaning up dead shim" namespace=k8s.io Sep 8 23:46:25.868856 containerd[1439]: time="2025-09-08T23:46:25.868807453Z" level=warning msg="cleanup warnings time=\"2025-09-08T23:46:25Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io Sep 8 23:46:26.507642 kubelet[1755]: E0908 23:46:26.507153 1755 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Sep 8 23:46:26.640517 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-d07acdcfe98dd12a9e6fa1bb62a1eb09031510ae2c340cd78e8b878f6c777580-rootfs.mount: Deactivated successfully. Sep 8 23:46:26.723812 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount287132431.mount: Deactivated successfully. Sep 8 23:46:26.953562 containerd[1439]: time="2025-09-08T23:46:26.953517213Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.32.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 8 23:46:26.954444 containerd[1439]: time="2025-09-08T23:46:26.954399933Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.32.8: active requests=0, bytes read=27376726" Sep 8 23:46:26.956225 containerd[1439]: time="2025-09-08T23:46:26.955098893Z" level=info msg="ImageCreate event name:\"sha256:2cf30e39f99f8f4ee1a736a4f3175cc2d8d3f58936d8fa83ec5523658fdc7b8b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 8 23:46:26.957369 containerd[1439]: time="2025-09-08T23:46:26.957332453Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:adc1335b480ddd833aac3b0bd20f68ff0f3c3cf7a0bd337933b006d9f5cec40a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 8 23:46:26.958123 containerd[1439]: time="2025-09-08T23:46:26.958096133Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.32.8\" with image id \"sha256:2cf30e39f99f8f4ee1a736a4f3175cc2d8d3f58936d8fa83ec5523658fdc7b8b\", repo tag \"registry.k8s.io/kube-proxy:v1.32.8\", repo digest \"registry.k8s.io/kube-proxy@sha256:adc1335b480ddd833aac3b0bd20f68ff0f3c3cf7a0bd337933b006d9f5cec40a\", size \"27375743\" in 1.23817204s" Sep 8 23:46:26.958236 containerd[1439]: time="2025-09-08T23:46:26.958218773Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.8\" returns image reference \"sha256:2cf30e39f99f8f4ee1a736a4f3175cc2d8d3f58936d8fa83ec5523658fdc7b8b\"" Sep 8 23:46:26.959267 containerd[1439]: time="2025-09-08T23:46:26.959246053Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.3\"" Sep 8 23:46:26.961224 containerd[1439]: time="2025-09-08T23:46:26.961185053Z" level=info msg="CreateContainer within sandbox \"c3a258a3778d0941e79f183703279df189165ba4612026ed4df92f0beee565ee\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Sep 8 23:46:26.974706 containerd[1439]: time="2025-09-08T23:46:26.974656133Z" level=info msg="CreateContainer within sandbox 
\"c3a258a3778d0941e79f183703279df189165ba4612026ed4df92f0beee565ee\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"91bd1fb9daee84a452cefa64790379bd834d430562a495f5f3216546fcfdf8a4\"" Sep 8 23:46:26.976562 containerd[1439]: time="2025-09-08T23:46:26.975143733Z" level=info msg="StartContainer for \"91bd1fb9daee84a452cefa64790379bd834d430562a495f5f3216546fcfdf8a4\"" Sep 8 23:46:27.008131 systemd[1]: Started cri-containerd-91bd1fb9daee84a452cefa64790379bd834d430562a495f5f3216546fcfdf8a4.scope - libcontainer container 91bd1fb9daee84a452cefa64790379bd834d430562a495f5f3216546fcfdf8a4. Sep 8 23:46:27.032703 containerd[1439]: time="2025-09-08T23:46:27.032604973Z" level=info msg="StartContainer for \"91bd1fb9daee84a452cefa64790379bd834d430562a495f5f3216546fcfdf8a4\" returns successfully" Sep 8 23:46:27.507519 kubelet[1755]: E0908 23:46:27.507467 1755 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Sep 8 23:46:27.633982 kubelet[1755]: E0908 23:46:27.633507 1755 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-ll9w8" podUID="17880a9e-28fa-4983-b764-f48424ebdd3b" Sep 8 23:46:28.507818 kubelet[1755]: E0908 23:46:28.507783 1755 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Sep 8 23:46:28.829103 containerd[1439]: time="2025-09-08T23:46:28.828989693Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.30.3: active requests=0, bytes read=65913477" Sep 8 23:46:28.829103 containerd[1439]: time="2025-09-08T23:46:28.829097893Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 8 23:46:28.831957 containerd[1439]: time="2025-09-08T23:46:28.831891933Z" level=info msg="ImageCreate event name:\"sha256:7077a1dc632ee598cbfa626f9e3c9bca5b20c0d1e1e557995890125b2e8d2e23\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 8 23:46:28.833211 containerd[1439]: time="2025-09-08T23:46:28.833178573Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.30.3\" with image id \"sha256:7077a1dc632ee598cbfa626f9e3c9bca5b20c0d1e1e557995890125b2e8d2e23\", repo tag \"ghcr.io/flatcar/calico/cni:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:73d1e391050490d54e5bee8ff2b1a50a8be1746c98dc530361b00e8c0ab63f87\", size \"67282718\" in 1.87383352s" Sep 8 23:46:28.833272 containerd[1439]: time="2025-09-08T23:46:28.833215333Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.3\" returns image reference \"sha256:7077a1dc632ee598cbfa626f9e3c9bca5b20c0d1e1e557995890125b2e8d2e23\"" Sep 8 23:46:28.834195 containerd[1439]: time="2025-09-08T23:46:28.833833173Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:73d1e391050490d54e5bee8ff2b1a50a8be1746c98dc530361b00e8c0ab63f87\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 8 23:46:28.835331 containerd[1439]: time="2025-09-08T23:46:28.835301733Z" level=info msg="CreateContainer within sandbox \"12f4cecbc76c7c0f71566de24bad01b5f0f7f85287419159c565492aa4c5cf31\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" Sep 8 23:46:28.846559 containerd[1439]: time="2025-09-08T23:46:28.846522973Z" level=info msg="CreateContainer within 
sandbox \"12f4cecbc76c7c0f71566de24bad01b5f0f7f85287419159c565492aa4c5cf31\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"1bba868e031dc1c4b08297f63736517347ec008cd434b770025cbfa622406658\"" Sep 8 23:46:28.848240 containerd[1439]: time="2025-09-08T23:46:28.846930133Z" level=info msg="StartContainer for \"1bba868e031dc1c4b08297f63736517347ec008cd434b770025cbfa622406658\"" Sep 8 23:46:28.879107 systemd[1]: Started cri-containerd-1bba868e031dc1c4b08297f63736517347ec008cd434b770025cbfa622406658.scope - libcontainer container 1bba868e031dc1c4b08297f63736517347ec008cd434b770025cbfa622406658. Sep 8 23:46:28.902901 containerd[1439]: time="2025-09-08T23:46:28.902857453Z" level=info msg="StartContainer for \"1bba868e031dc1c4b08297f63736517347ec008cd434b770025cbfa622406658\" returns successfully" Sep 8 23:46:29.401093 containerd[1439]: time="2025-09-08T23:46:29.401049173Z" level=error msg="failed to reload cni configuration after receiving fs change event(WRITE \"/etc/cni/net.d/calico-kubeconfig\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Sep 8 23:46:29.403004 systemd[1]: cri-containerd-1bba868e031dc1c4b08297f63736517347ec008cd434b770025cbfa622406658.scope: Deactivated successfully. Sep 8 23:46:29.403343 systemd[1]: cri-containerd-1bba868e031dc1c4b08297f63736517347ec008cd434b770025cbfa622406658.scope: Consumed 459ms CPU time, 185.8M memory peak, 165.8M written to disk. Sep 8 23:46:29.417704 kubelet[1755]: I0908 23:46:29.417675 1755 kubelet_node_status.go:501] "Fast updating node status as it just became ready" Sep 8 23:46:29.422426 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-1bba868e031dc1c4b08297f63736517347ec008cd434b770025cbfa622406658-rootfs.mount: Deactivated successfully. Sep 8 23:46:29.508218 kubelet[1755]: E0908 23:46:29.508162 1755 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Sep 8 23:46:29.587240 containerd[1439]: time="2025-09-08T23:46:29.587171773Z" level=info msg="shim disconnected" id=1bba868e031dc1c4b08297f63736517347ec008cd434b770025cbfa622406658 namespace=k8s.io Sep 8 23:46:29.587240 containerd[1439]: time="2025-09-08T23:46:29.587234653Z" level=warning msg="cleaning up after shim disconnected" id=1bba868e031dc1c4b08297f63736517347ec008cd434b770025cbfa622406658 namespace=k8s.io Sep 8 23:46:29.587240 containerd[1439]: time="2025-09-08T23:46:29.587244653Z" level=info msg="cleaning up dead shim" namespace=k8s.io Sep 8 23:46:29.639346 systemd[1]: Created slice kubepods-besteffort-pod17880a9e_28fa_4983_b764_f48424ebdd3b.slice - libcontainer container kubepods-besteffort-pod17880a9e_28fa_4983_b764_f48424ebdd3b.slice. 
Sep 8 23:46:29.641167 containerd[1439]: time="2025-09-08T23:46:29.641126413Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-ll9w8,Uid:17880a9e-28fa-4983-b764-f48424ebdd3b,Namespace:calico-system,Attempt:0,}" Sep 8 23:46:29.658327 containerd[1439]: time="2025-09-08T23:46:29.658206813Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.3\"" Sep 8 23:46:29.678549 kubelet[1755]: I0908 23:46:29.678350 1755 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-blxwq" podStartSLOduration=6.357188653 podStartE2EDuration="8.678328053s" podCreationTimestamp="2025-09-08 23:46:21 +0000 UTC" firstStartedPulling="2025-09-08 23:46:24.637961733 +0000 UTC m=+3.831275161" lastFinishedPulling="2025-09-08 23:46:26.959101133 +0000 UTC m=+6.152414561" observedRunningTime="2025-09-08 23:46:27.662950213 +0000 UTC m=+6.856263641" watchObservedRunningTime="2025-09-08 23:46:29.678328053 +0000 UTC m=+8.871641441" Sep 8 23:46:29.709081 containerd[1439]: time="2025-09-08T23:46:29.709035693Z" level=error msg="Failed to destroy network for sandbox \"d293bd9eb84623a89dd7343b9538c5849258894201daaf567a2c7f0419d725b5\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 8 23:46:29.709383 containerd[1439]: time="2025-09-08T23:46:29.709356333Z" level=error msg="encountered an error cleaning up failed sandbox \"d293bd9eb84623a89dd7343b9538c5849258894201daaf567a2c7f0419d725b5\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 8 23:46:29.709441 containerd[1439]: time="2025-09-08T23:46:29.709419693Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-ll9w8,Uid:17880a9e-28fa-4983-b764-f48424ebdd3b,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"d293bd9eb84623a89dd7343b9538c5849258894201daaf567a2c7f0419d725b5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 8 23:46:29.709803 kubelet[1755]: E0908 23:46:29.709637 1755 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d293bd9eb84623a89dd7343b9538c5849258894201daaf567a2c7f0419d725b5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 8 23:46:29.709803 kubelet[1755]: E0908 23:46:29.709732 1755 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d293bd9eb84623a89dd7343b9538c5849258894201daaf567a2c7f0419d725b5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-ll9w8" Sep 8 23:46:29.709803 kubelet[1755]: E0908 23:46:29.709750 1755 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox 
\"d293bd9eb84623a89dd7343b9538c5849258894201daaf567a2c7f0419d725b5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-ll9w8" Sep 8 23:46:29.710200 kubelet[1755]: E0908 23:46:29.710147 1755 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-ll9w8_calico-system(17880a9e-28fa-4983-b764-f48424ebdd3b)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-ll9w8_calico-system(17880a9e-28fa-4983-b764-f48424ebdd3b)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"d293bd9eb84623a89dd7343b9538c5849258894201daaf567a2c7f0419d725b5\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-ll9w8" podUID="17880a9e-28fa-4983-b764-f48424ebdd3b" Sep 8 23:46:30.509161 kubelet[1755]: E0908 23:46:30.509107 1755 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Sep 8 23:46:30.661869 kubelet[1755]: I0908 23:46:30.661809 1755 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d293bd9eb84623a89dd7343b9538c5849258894201daaf567a2c7f0419d725b5" Sep 8 23:46:30.664277 containerd[1439]: time="2025-09-08T23:46:30.664227013Z" level=info msg="StopPodSandbox for \"d293bd9eb84623a89dd7343b9538c5849258894201daaf567a2c7f0419d725b5\"" Sep 8 23:46:30.666295 containerd[1439]: time="2025-09-08T23:46:30.664415173Z" level=info msg="Ensure that sandbox d293bd9eb84623a89dd7343b9538c5849258894201daaf567a2c7f0419d725b5 in task-service has been cleanup successfully" Sep 8 23:46:30.666451 containerd[1439]: time="2025-09-08T23:46:30.666415453Z" level=info msg="TearDown network for sandbox \"d293bd9eb84623a89dd7343b9538c5849258894201daaf567a2c7f0419d725b5\" successfully" Sep 8 23:46:30.666485 containerd[1439]: time="2025-09-08T23:46:30.666449613Z" level=info msg="StopPodSandbox for \"d293bd9eb84623a89dd7343b9538c5849258894201daaf567a2c7f0419d725b5\" returns successfully" Sep 8 23:46:30.668102 systemd[1]: run-netns-cni\x2d09c46288\x2d7100\x2d167f\x2da0a5\x2dfee698e3386c.mount: Deactivated successfully. 
Sep 8 23:46:30.674269 containerd[1439]: time="2025-09-08T23:46:30.669833253Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-ll9w8,Uid:17880a9e-28fa-4983-b764-f48424ebdd3b,Namespace:calico-system,Attempt:1,}" Sep 8 23:46:30.743507 containerd[1439]: time="2025-09-08T23:46:30.743458853Z" level=error msg="Failed to destroy network for sandbox \"87f95fb0bd44f66b7aabebe060e91aec4f369935935805ea7edf9910f6364b12\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 8 23:46:30.745174 containerd[1439]: time="2025-09-08T23:46:30.745123133Z" level=error msg="encountered an error cleaning up failed sandbox \"87f95fb0bd44f66b7aabebe060e91aec4f369935935805ea7edf9910f6364b12\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 8 23:46:30.745248 containerd[1439]: time="2025-09-08T23:46:30.745202213Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-ll9w8,Uid:17880a9e-28fa-4983-b764-f48424ebdd3b,Namespace:calico-system,Attempt:1,} failed, error" error="failed to setup network for sandbox \"87f95fb0bd44f66b7aabebe060e91aec4f369935935805ea7edf9910f6364b12\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 8 23:46:30.745800 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-87f95fb0bd44f66b7aabebe060e91aec4f369935935805ea7edf9910f6364b12-shm.mount: Deactivated successfully. 
Sep 8 23:46:30.747587 kubelet[1755]: E0908 23:46:30.746595 1755 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"87f95fb0bd44f66b7aabebe060e91aec4f369935935805ea7edf9910f6364b12\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 8 23:46:30.747587 kubelet[1755]: E0908 23:46:30.746661 1755 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"87f95fb0bd44f66b7aabebe060e91aec4f369935935805ea7edf9910f6364b12\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-ll9w8" Sep 8 23:46:30.747587 kubelet[1755]: E0908 23:46:30.746680 1755 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"87f95fb0bd44f66b7aabebe060e91aec4f369935935805ea7edf9910f6364b12\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-ll9w8" Sep 8 23:46:30.748707 kubelet[1755]: E0908 23:46:30.746724 1755 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-ll9w8_calico-system(17880a9e-28fa-4983-b764-f48424ebdd3b)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-ll9w8_calico-system(17880a9e-28fa-4983-b764-f48424ebdd3b)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"87f95fb0bd44f66b7aabebe060e91aec4f369935935805ea7edf9910f6364b12\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-ll9w8" podUID="17880a9e-28fa-4983-b764-f48424ebdd3b" Sep 8 23:46:31.510277 kubelet[1755]: E0908 23:46:31.510214 1755 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Sep 8 23:46:31.665003 kubelet[1755]: I0908 23:46:31.664627 1755 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="87f95fb0bd44f66b7aabebe060e91aec4f369935935805ea7edf9910f6364b12" Sep 8 23:46:31.665856 containerd[1439]: time="2025-09-08T23:46:31.665653893Z" level=info msg="StopPodSandbox for \"87f95fb0bd44f66b7aabebe060e91aec4f369935935805ea7edf9910f6364b12\"" Sep 8 23:46:31.665856 containerd[1439]: time="2025-09-08T23:46:31.665827733Z" level=info msg="Ensure that sandbox 87f95fb0bd44f66b7aabebe060e91aec4f369935935805ea7edf9910f6364b12 in task-service has been cleanup successfully" Sep 8 23:46:31.668421 containerd[1439]: time="2025-09-08T23:46:31.668201253Z" level=info msg="TearDown network for sandbox \"87f95fb0bd44f66b7aabebe060e91aec4f369935935805ea7edf9910f6364b12\" successfully" Sep 8 23:46:31.668421 containerd[1439]: time="2025-09-08T23:46:31.668230773Z" level=info msg="StopPodSandbox for \"87f95fb0bd44f66b7aabebe060e91aec4f369935935805ea7edf9910f6364b12\" returns successfully" Sep 8 23:46:31.667447 systemd[1]: run-netns-cni\x2d651d1c78\x2d501e\x2d822e\x2d1b2e\x2d7d17d2ac68da.mount: Deactivated 
successfully. Sep 8 23:46:31.669035 containerd[1439]: time="2025-09-08T23:46:31.669007493Z" level=info msg="StopPodSandbox for \"d293bd9eb84623a89dd7343b9538c5849258894201daaf567a2c7f0419d725b5\"" Sep 8 23:46:31.669226 containerd[1439]: time="2025-09-08T23:46:31.669106213Z" level=info msg="TearDown network for sandbox \"d293bd9eb84623a89dd7343b9538c5849258894201daaf567a2c7f0419d725b5\" successfully" Sep 8 23:46:31.669226 containerd[1439]: time="2025-09-08T23:46:31.669119053Z" level=info msg="StopPodSandbox for \"d293bd9eb84623a89dd7343b9538c5849258894201daaf567a2c7f0419d725b5\" returns successfully" Sep 8 23:46:31.669576 containerd[1439]: time="2025-09-08T23:46:31.669534733Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-ll9w8,Uid:17880a9e-28fa-4983-b764-f48424ebdd3b,Namespace:calico-system,Attempt:2,}" Sep 8 23:46:31.736901 containerd[1439]: time="2025-09-08T23:46:31.736829813Z" level=error msg="Failed to destroy network for sandbox \"183228f02d16ad06c6855d53fcaf4ac127655c3739ceed6b6e54afa738943595\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 8 23:46:31.737467 containerd[1439]: time="2025-09-08T23:46:31.737413973Z" level=error msg="encountered an error cleaning up failed sandbox \"183228f02d16ad06c6855d53fcaf4ac127655c3739ceed6b6e54afa738943595\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 8 23:46:31.737526 containerd[1439]: time="2025-09-08T23:46:31.737493653Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-ll9w8,Uid:17880a9e-28fa-4983-b764-f48424ebdd3b,Namespace:calico-system,Attempt:2,} failed, error" error="failed to setup network for sandbox \"183228f02d16ad06c6855d53fcaf4ac127655c3739ceed6b6e54afa738943595\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 8 23:46:31.737968 kubelet[1755]: E0908 23:46:31.737767 1755 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"183228f02d16ad06c6855d53fcaf4ac127655c3739ceed6b6e54afa738943595\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 8 23:46:31.737968 kubelet[1755]: E0908 23:46:31.737829 1755 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"183228f02d16ad06c6855d53fcaf4ac127655c3739ceed6b6e54afa738943595\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-ll9w8" Sep 8 23:46:31.737968 kubelet[1755]: E0908 23:46:31.737861 1755 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"183228f02d16ad06c6855d53fcaf4ac127655c3739ceed6b6e54afa738943595\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is 
running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-ll9w8" Sep 8 23:46:31.738061 kubelet[1755]: E0908 23:46:31.737908 1755 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-ll9w8_calico-system(17880a9e-28fa-4983-b764-f48424ebdd3b)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-ll9w8_calico-system(17880a9e-28fa-4983-b764-f48424ebdd3b)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"183228f02d16ad06c6855d53fcaf4ac127655c3739ceed6b6e54afa738943595\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-ll9w8" podUID="17880a9e-28fa-4983-b764-f48424ebdd3b" Sep 8 23:46:31.739046 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-183228f02d16ad06c6855d53fcaf4ac127655c3739ceed6b6e54afa738943595-shm.mount: Deactivated successfully. Sep 8 23:46:32.510391 kubelet[1755]: E0908 23:46:32.510329 1755 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Sep 8 23:46:32.668279 kubelet[1755]: I0908 23:46:32.667815 1755 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="183228f02d16ad06c6855d53fcaf4ac127655c3739ceed6b6e54afa738943595" Sep 8 23:46:32.668982 containerd[1439]: time="2025-09-08T23:46:32.668661573Z" level=info msg="StopPodSandbox for \"183228f02d16ad06c6855d53fcaf4ac127655c3739ceed6b6e54afa738943595\"" Sep 8 23:46:32.668982 containerd[1439]: time="2025-09-08T23:46:32.668833853Z" level=info msg="Ensure that sandbox 183228f02d16ad06c6855d53fcaf4ac127655c3739ceed6b6e54afa738943595 in task-service has been cleanup successfully" Sep 8 23:46:32.669482 containerd[1439]: time="2025-09-08T23:46:32.669370013Z" level=info msg="TearDown network for sandbox \"183228f02d16ad06c6855d53fcaf4ac127655c3739ceed6b6e54afa738943595\" successfully" Sep 8 23:46:32.669482 containerd[1439]: time="2025-09-08T23:46:32.669397733Z" level=info msg="StopPodSandbox for \"183228f02d16ad06c6855d53fcaf4ac127655c3739ceed6b6e54afa738943595\" returns successfully" Sep 8 23:46:32.670147 containerd[1439]: time="2025-09-08T23:46:32.670122333Z" level=info msg="StopPodSandbox for \"87f95fb0bd44f66b7aabebe060e91aec4f369935935805ea7edf9910f6364b12\"" Sep 8 23:46:32.670433 containerd[1439]: time="2025-09-08T23:46:32.670351013Z" level=info msg="TearDown network for sandbox \"87f95fb0bd44f66b7aabebe060e91aec4f369935935805ea7edf9910f6364b12\" successfully" Sep 8 23:46:32.670433 containerd[1439]: time="2025-09-08T23:46:32.670365853Z" level=info msg="StopPodSandbox for \"87f95fb0bd44f66b7aabebe060e91aec4f369935935805ea7edf9910f6364b12\" returns successfully" Sep 8 23:46:32.670763 systemd[1]: run-netns-cni\x2da47f4525\x2d1c08\x2dddcc\x2d71b7\x2d4d6b4557ad2c.mount: Deactivated successfully. 
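[Annotation] Each retry increments Attempt and leaves a failed sandbox behind, which is why the StopPodSandbox/TearDown sequences above walk back through every earlier sandbox ID before a new attempt starts; the run-netns-cni and sandbox -shm mount units that systemd reports as deactivated are the per-attempt network namespaces and shm mounts being cleaned up. An illustrative way to see the accumulated attempts from the node, assuming crictl:

  # Failed attempts linger as not-ready sandboxes until kubelet tears them down
  crictl pods --name csi-node-driver-ll9w8 --state NotReady
  # The matching network-namespace mounts show up briefly as systemd mount units
  systemctl list-units 'run-netns-cni*'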
Sep 8 23:46:32.671397 containerd[1439]: time="2025-09-08T23:46:32.671225333Z" level=info msg="StopPodSandbox for \"d293bd9eb84623a89dd7343b9538c5849258894201daaf567a2c7f0419d725b5\"" Sep 8 23:46:32.671397 containerd[1439]: time="2025-09-08T23:46:32.671318213Z" level=info msg="TearDown network for sandbox \"d293bd9eb84623a89dd7343b9538c5849258894201daaf567a2c7f0419d725b5\" successfully" Sep 8 23:46:32.671397 containerd[1439]: time="2025-09-08T23:46:32.671328653Z" level=info msg="StopPodSandbox for \"d293bd9eb84623a89dd7343b9538c5849258894201daaf567a2c7f0419d725b5\" returns successfully" Sep 8 23:46:32.671853 containerd[1439]: time="2025-09-08T23:46:32.671824133Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-ll9w8,Uid:17880a9e-28fa-4983-b764-f48424ebdd3b,Namespace:calico-system,Attempt:3,}" Sep 8 23:46:32.752620 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount392848259.mount: Deactivated successfully. Sep 8 23:46:32.844193 containerd[1439]: time="2025-09-08T23:46:32.844137973Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 8 23:46:32.848436 containerd[1439]: time="2025-09-08T23:46:32.848371493Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.30.3: active requests=0, bytes read=151100457" Sep 8 23:46:32.852438 containerd[1439]: time="2025-09-08T23:46:32.850607133Z" level=info msg="ImageCreate event name:\"sha256:2b8abd2140fc4464ed664d225fe38e5b90bbfcf62996b484b0fc0e0537b6a4a9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 8 23:46:32.853165 containerd[1439]: time="2025-09-08T23:46:32.853099533Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:bcb8146fcaeced1e1c88fad3eaa697f1680746bd23c3e7e8d4535bc484c6f2a1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 8 23:46:32.854367 containerd[1439]: time="2025-09-08T23:46:32.854325133Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.30.3\" with image id \"sha256:2b8abd2140fc4464ed664d225fe38e5b90bbfcf62996b484b0fc0e0537b6a4a9\", repo tag \"ghcr.io/flatcar/calico/node:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/node@sha256:bcb8146fcaeced1e1c88fad3eaa697f1680746bd23c3e7e8d4535bc484c6f2a1\", size \"151100319\" in 3.19607908s" Sep 8 23:46:32.854440 containerd[1439]: time="2025-09-08T23:46:32.854362733Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.3\" returns image reference \"sha256:2b8abd2140fc4464ed664d225fe38e5b90bbfcf62996b484b0fc0e0537b6a4a9\"" Sep 8 23:46:32.864131 containerd[1439]: time="2025-09-08T23:46:32.863900733Z" level=info msg="CreateContainer within sandbox \"12f4cecbc76c7c0f71566de24bad01b5f0f7f85287419159c565492aa4c5cf31\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Sep 8 23:46:32.885478 containerd[1439]: time="2025-09-08T23:46:32.885274853Z" level=info msg="CreateContainer within sandbox \"12f4cecbc76c7c0f71566de24bad01b5f0f7f85287419159c565492aa4c5cf31\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"f061e4ef395f08e6fd4238fea19c3cbae832b2a3fa7a8887accbce4b422ebae8\"" Sep 8 23:46:32.886171 containerd[1439]: time="2025-09-08T23:46:32.886145213Z" level=info msg="StartContainer for \"f061e4ef395f08e6fd4238fea19c3cbae832b2a3fa7a8887accbce4b422ebae8\"" Sep 8 23:46:32.907943 containerd[1439]: time="2025-09-08T23:46:32.907890693Z" level=error msg="Failed to destroy network for sandbox 
\"a22a819f6c7c3444d74a99658505040453d60bf2ea7fb0baa179b66f4dd5602d\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 8 23:46:32.909058 containerd[1439]: time="2025-09-08T23:46:32.908329293Z" level=error msg="encountered an error cleaning up failed sandbox \"a22a819f6c7c3444d74a99658505040453d60bf2ea7fb0baa179b66f4dd5602d\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 8 23:46:32.909058 containerd[1439]: time="2025-09-08T23:46:32.909004773Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-ll9w8,Uid:17880a9e-28fa-4983-b764-f48424ebdd3b,Namespace:calico-system,Attempt:3,} failed, error" error="failed to setup network for sandbox \"a22a819f6c7c3444d74a99658505040453d60bf2ea7fb0baa179b66f4dd5602d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 8 23:46:32.909367 kubelet[1755]: E0908 23:46:32.909331 1755 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a22a819f6c7c3444d74a99658505040453d60bf2ea7fb0baa179b66f4dd5602d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 8 23:46:32.909416 kubelet[1755]: E0908 23:46:32.909385 1755 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a22a819f6c7c3444d74a99658505040453d60bf2ea7fb0baa179b66f4dd5602d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-ll9w8" Sep 8 23:46:32.909416 kubelet[1755]: E0908 23:46:32.909404 1755 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a22a819f6c7c3444d74a99658505040453d60bf2ea7fb0baa179b66f4dd5602d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-ll9w8" Sep 8 23:46:32.909489 kubelet[1755]: E0908 23:46:32.909440 1755 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-ll9w8_calico-system(17880a9e-28fa-4983-b764-f48424ebdd3b)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-ll9w8_calico-system(17880a9e-28fa-4983-b764-f48424ebdd3b)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"a22a819f6c7c3444d74a99658505040453d60bf2ea7fb0baa179b66f4dd5602d\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-ll9w8" podUID="17880a9e-28fa-4983-b764-f48424ebdd3b" Sep 8 23:46:32.915163 systemd[1]: Started 
cri-containerd-f061e4ef395f08e6fd4238fea19c3cbae832b2a3fa7a8887accbce4b422ebae8.scope - libcontainer container f061e4ef395f08e6fd4238fea19c3cbae832b2a3fa7a8887accbce4b422ebae8. Sep 8 23:46:32.946822 containerd[1439]: time="2025-09-08T23:46:32.946773933Z" level=info msg="StartContainer for \"f061e4ef395f08e6fd4238fea19c3cbae832b2a3fa7a8887accbce4b422ebae8\" returns successfully" Sep 8 23:46:33.061631 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information. Sep 8 23:46:33.061887 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld . All Rights Reserved. Sep 8 23:46:33.244791 systemd[1]: Created slice kubepods-besteffort-pode130e2c7_bc46_45ee_b4ae_75934071745b.slice - libcontainer container kubepods-besteffort-pode130e2c7_bc46_45ee_b4ae_75934071745b.slice. Sep 8 23:46:33.290141 kubelet[1755]: I0908 23:46:33.290106 1755 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wgtxs\" (UniqueName: \"kubernetes.io/projected/e130e2c7-bc46-45ee-b4ae-75934071745b-kube-api-access-wgtxs\") pod \"nginx-deployment-7fcdb87857-7hrhk\" (UID: \"e130e2c7-bc46-45ee-b4ae-75934071745b\") " pod="default/nginx-deployment-7fcdb87857-7hrhk" Sep 8 23:46:33.510996 kubelet[1755]: E0908 23:46:33.510712 1755 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Sep 8 23:46:33.548499 containerd[1439]: time="2025-09-08T23:46:33.548464133Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-7fcdb87857-7hrhk,Uid:e130e2c7-bc46-45ee-b4ae-75934071745b,Namespace:default,Attempt:0,}" Sep 8 23:46:33.676123 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-a22a819f6c7c3444d74a99658505040453d60bf2ea7fb0baa179b66f4dd5602d-shm.mount: Deactivated successfully. 
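[Annotation] The wireguard module loading right as calico-node starts is most likely Felix probing for (or enabling) WireGuard support for pod-to-pod encryption; whether an interface is actually created depends on the felixconfiguration, which this log does not show. Hedged checks, assuming wireguard-tools is installed and Calico's usual interface name:

  # Calico names its WireGuard device wireguard.cali when encryption is enabled
  ip link show wireguard.cali
  wg show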
Sep 8 23:46:33.679504 kubelet[1755]: I0908 23:46:33.679466 1755 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a22a819f6c7c3444d74a99658505040453d60bf2ea7fb0baa179b66f4dd5602d" Sep 8 23:46:33.680305 containerd[1439]: time="2025-09-08T23:46:33.680271773Z" level=info msg="StopPodSandbox for \"a22a819f6c7c3444d74a99658505040453d60bf2ea7fb0baa179b66f4dd5602d\"" Sep 8 23:46:33.680762 containerd[1439]: time="2025-09-08T23:46:33.680735973Z" level=info msg="Ensure that sandbox a22a819f6c7c3444d74a99658505040453d60bf2ea7fb0baa179b66f4dd5602d in task-service has been cleanup successfully" Sep 8 23:46:33.681036 containerd[1439]: time="2025-09-08T23:46:33.680999253Z" level=info msg="TearDown network for sandbox \"a22a819f6c7c3444d74a99658505040453d60bf2ea7fb0baa179b66f4dd5602d\" successfully" Sep 8 23:46:33.681115 containerd[1439]: time="2025-09-08T23:46:33.681100253Z" level=info msg="StopPodSandbox for \"a22a819f6c7c3444d74a99658505040453d60bf2ea7fb0baa179b66f4dd5602d\" returns successfully" Sep 8 23:46:33.681673 containerd[1439]: time="2025-09-08T23:46:33.681643533Z" level=info msg="StopPodSandbox for \"183228f02d16ad06c6855d53fcaf4ac127655c3739ceed6b6e54afa738943595\"" Sep 8 23:46:33.681744 containerd[1439]: time="2025-09-08T23:46:33.681725093Z" level=info msg="TearDown network for sandbox \"183228f02d16ad06c6855d53fcaf4ac127655c3739ceed6b6e54afa738943595\" successfully" Sep 8 23:46:33.681744 containerd[1439]: time="2025-09-08T23:46:33.681738653Z" level=info msg="StopPodSandbox for \"183228f02d16ad06c6855d53fcaf4ac127655c3739ceed6b6e54afa738943595\" returns successfully" Sep 8 23:46:33.682147 containerd[1439]: time="2025-09-08T23:46:33.682123493Z" level=info msg="StopPodSandbox for \"87f95fb0bd44f66b7aabebe060e91aec4f369935935805ea7edf9910f6364b12\"" Sep 8 23:46:33.682686 systemd[1]: run-netns-cni\x2d167aee29\x2dfce1\x2dff99\x2d4d9d\x2d433cb69f5db6.mount: Deactivated successfully. 
Sep 8 23:46:33.682966 containerd[1439]: time="2025-09-08T23:46:33.682944493Z" level=info msg="TearDown network for sandbox \"87f95fb0bd44f66b7aabebe060e91aec4f369935935805ea7edf9910f6364b12\" successfully" Sep 8 23:46:33.683043 containerd[1439]: time="2025-09-08T23:46:33.683030213Z" level=info msg="StopPodSandbox for \"87f95fb0bd44f66b7aabebe060e91aec4f369935935805ea7edf9910f6364b12\" returns successfully" Sep 8 23:46:33.684258 containerd[1439]: time="2025-09-08T23:46:33.684224933Z" level=info msg="StopPodSandbox for \"d293bd9eb84623a89dd7343b9538c5849258894201daaf567a2c7f0419d725b5\"" Sep 8 23:46:33.684339 containerd[1439]: time="2025-09-08T23:46:33.684321293Z" level=info msg="TearDown network for sandbox \"d293bd9eb84623a89dd7343b9538c5849258894201daaf567a2c7f0419d725b5\" successfully" Sep 8 23:46:33.684339 containerd[1439]: time="2025-09-08T23:46:33.684334773Z" level=info msg="StopPodSandbox for \"d293bd9eb84623a89dd7343b9538c5849258894201daaf567a2c7f0419d725b5\" returns successfully" Sep 8 23:46:33.684877 containerd[1439]: time="2025-09-08T23:46:33.684840413Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-ll9w8,Uid:17880a9e-28fa-4983-b764-f48424ebdd3b,Namespace:calico-system,Attempt:4,}" Sep 8 23:46:33.694204 kubelet[1755]: I0908 23:46:33.694136 1755 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-kvjsd" podStartSLOduration=4.469559213 podStartE2EDuration="12.694119533s" podCreationTimestamp="2025-09-08 23:46:21 +0000 UTC" firstStartedPulling="2025-09-08 23:46:24.630475733 +0000 UTC m=+3.823789161" lastFinishedPulling="2025-09-08 23:46:32.855036053 +0000 UTC m=+12.048349481" observedRunningTime="2025-09-08 23:46:33.693914173 +0000 UTC m=+12.887227601" watchObservedRunningTime="2025-09-08 23:46:33.694119533 +0000 UTC m=+12.887432961" Sep 8 23:46:33.696099 systemd-networkd[1362]: cali5a99d3fb7e3: Link UP Sep 8 23:46:33.696310 systemd-networkd[1362]: cali5a99d3fb7e3: Gained carrier Sep 8 23:46:33.712225 containerd[1439]: 2025-09-08 23:46:33.580 [INFO][2418] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Sep 8 23:46:33.712225 containerd[1439]: 2025-09-08 23:46:33.598 [INFO][2418] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {10.0.0.53-k8s-nginx--deployment--7fcdb87857--7hrhk-eth0 nginx-deployment-7fcdb87857- default e130e2c7-bc46-45ee-b4ae-75934071745b 1189 0 2025-09-08 23:46:33 +0000 UTC map[app:nginx pod-template-hash:7fcdb87857 projectcalico.org/namespace:default projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:default] map[] [] [] []} {k8s 10.0.0.53 nginx-deployment-7fcdb87857-7hrhk eth0 default [] [] [kns.default ksa.default.default] cali5a99d3fb7e3 [] [] }} ContainerID="3eaaf2b1b0a06fe88c9f04cf6fc085dee997f0c74824d360fce189f520fd3d35" Namespace="default" Pod="nginx-deployment-7fcdb87857-7hrhk" WorkloadEndpoint="10.0.0.53-k8s-nginx--deployment--7fcdb87857--7hrhk-" Sep 8 23:46:33.712225 containerd[1439]: 2025-09-08 23:46:33.598 [INFO][2418] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="3eaaf2b1b0a06fe88c9f04cf6fc085dee997f0c74824d360fce189f520fd3d35" Namespace="default" Pod="nginx-deployment-7fcdb87857-7hrhk" WorkloadEndpoint="10.0.0.53-k8s-nginx--deployment--7fcdb87857--7hrhk-eth0" Sep 8 23:46:33.712225 containerd[1439]: 2025-09-08 23:46:33.641 [INFO][2431] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 
ContainerID="3eaaf2b1b0a06fe88c9f04cf6fc085dee997f0c74824d360fce189f520fd3d35" HandleID="k8s-pod-network.3eaaf2b1b0a06fe88c9f04cf6fc085dee997f0c74824d360fce189f520fd3d35" Workload="10.0.0.53-k8s-nginx--deployment--7fcdb87857--7hrhk-eth0" Sep 8 23:46:33.712225 containerd[1439]: 2025-09-08 23:46:33.642 [INFO][2431] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="3eaaf2b1b0a06fe88c9f04cf6fc085dee997f0c74824d360fce189f520fd3d35" HandleID="k8s-pod-network.3eaaf2b1b0a06fe88c9f04cf6fc085dee997f0c74824d360fce189f520fd3d35" Workload="10.0.0.53-k8s-nginx--deployment--7fcdb87857--7hrhk-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x4000322fb0), Attrs:map[string]string{"namespace":"default", "node":"10.0.0.53", "pod":"nginx-deployment-7fcdb87857-7hrhk", "timestamp":"2025-09-08 23:46:33.641808773 +0000 UTC"}, Hostname:"10.0.0.53", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 8 23:46:33.712225 containerd[1439]: 2025-09-08 23:46:33.642 [INFO][2431] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 8 23:46:33.712225 containerd[1439]: 2025-09-08 23:46:33.642 [INFO][2431] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 8 23:46:33.712225 containerd[1439]: 2025-09-08 23:46:33.642 [INFO][2431] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host '10.0.0.53' Sep 8 23:46:33.712225 containerd[1439]: 2025-09-08 23:46:33.653 [INFO][2431] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.3eaaf2b1b0a06fe88c9f04cf6fc085dee997f0c74824d360fce189f520fd3d35" host="10.0.0.53" Sep 8 23:46:33.712225 containerd[1439]: 2025-09-08 23:46:33.659 [INFO][2431] ipam/ipam.go 394: Looking up existing affinities for host host="10.0.0.53" Sep 8 23:46:33.712225 containerd[1439]: 2025-09-08 23:46:33.664 [INFO][2431] ipam/ipam.go 511: Trying affinity for 192.168.100.192/26 host="10.0.0.53" Sep 8 23:46:33.712225 containerd[1439]: 2025-09-08 23:46:33.666 [INFO][2431] ipam/ipam.go 158: Attempting to load block cidr=192.168.100.192/26 host="10.0.0.53" Sep 8 23:46:33.712225 containerd[1439]: 2025-09-08 23:46:33.669 [INFO][2431] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.100.192/26 host="10.0.0.53" Sep 8 23:46:33.712225 containerd[1439]: 2025-09-08 23:46:33.669 [INFO][2431] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.100.192/26 handle="k8s-pod-network.3eaaf2b1b0a06fe88c9f04cf6fc085dee997f0c74824d360fce189f520fd3d35" host="10.0.0.53" Sep 8 23:46:33.712225 containerd[1439]: 2025-09-08 23:46:33.671 [INFO][2431] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.3eaaf2b1b0a06fe88c9f04cf6fc085dee997f0c74824d360fce189f520fd3d35 Sep 8 23:46:33.712225 containerd[1439]: 2025-09-08 23:46:33.675 [INFO][2431] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.100.192/26 handle="k8s-pod-network.3eaaf2b1b0a06fe88c9f04cf6fc085dee997f0c74824d360fce189f520fd3d35" host="10.0.0.53" Sep 8 23:46:33.712225 containerd[1439]: 2025-09-08 23:46:33.683 [INFO][2431] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.100.193/26] block=192.168.100.192/26 handle="k8s-pod-network.3eaaf2b1b0a06fe88c9f04cf6fc085dee997f0c74824d360fce189f520fd3d35" host="10.0.0.53" Sep 8 23:46:33.712225 containerd[1439]: 2025-09-08 23:46:33.683 [INFO][2431] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.100.193/26] 
handle="k8s-pod-network.3eaaf2b1b0a06fe88c9f04cf6fc085dee997f0c74824d360fce189f520fd3d35" host="10.0.0.53" Sep 8 23:46:33.712225 containerd[1439]: 2025-09-08 23:46:33.683 [INFO][2431] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 8 23:46:33.712225 containerd[1439]: 2025-09-08 23:46:33.683 [INFO][2431] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.100.193/26] IPv6=[] ContainerID="3eaaf2b1b0a06fe88c9f04cf6fc085dee997f0c74824d360fce189f520fd3d35" HandleID="k8s-pod-network.3eaaf2b1b0a06fe88c9f04cf6fc085dee997f0c74824d360fce189f520fd3d35" Workload="10.0.0.53-k8s-nginx--deployment--7fcdb87857--7hrhk-eth0" Sep 8 23:46:33.713592 containerd[1439]: 2025-09-08 23:46:33.688 [INFO][2418] cni-plugin/k8s.go 418: Populated endpoint ContainerID="3eaaf2b1b0a06fe88c9f04cf6fc085dee997f0c74824d360fce189f520fd3d35" Namespace="default" Pod="nginx-deployment-7fcdb87857-7hrhk" WorkloadEndpoint="10.0.0.53-k8s-nginx--deployment--7fcdb87857--7hrhk-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"10.0.0.53-k8s-nginx--deployment--7fcdb87857--7hrhk-eth0", GenerateName:"nginx-deployment-7fcdb87857-", Namespace:"default", SelfLink:"", UID:"e130e2c7-bc46-45ee-b4ae-75934071745b", ResourceVersion:"1189", Generation:0, CreationTimestamp:time.Date(2025, time.September, 8, 23, 46, 33, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app":"nginx", "pod-template-hash":"7fcdb87857", "projectcalico.org/namespace":"default", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"default"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"10.0.0.53", ContainerID:"", Pod:"nginx-deployment-7fcdb87857-7hrhk", Endpoint:"eth0", ServiceAccountName:"default", IPNetworks:[]string{"192.168.100.193/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.default", "ksa.default.default"}, InterfaceName:"cali5a99d3fb7e3", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 8 23:46:33.713592 containerd[1439]: 2025-09-08 23:46:33.688 [INFO][2418] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.100.193/32] ContainerID="3eaaf2b1b0a06fe88c9f04cf6fc085dee997f0c74824d360fce189f520fd3d35" Namespace="default" Pod="nginx-deployment-7fcdb87857-7hrhk" WorkloadEndpoint="10.0.0.53-k8s-nginx--deployment--7fcdb87857--7hrhk-eth0" Sep 8 23:46:33.713592 containerd[1439]: 2025-09-08 23:46:33.688 [INFO][2418] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali5a99d3fb7e3 ContainerID="3eaaf2b1b0a06fe88c9f04cf6fc085dee997f0c74824d360fce189f520fd3d35" Namespace="default" Pod="nginx-deployment-7fcdb87857-7hrhk" WorkloadEndpoint="10.0.0.53-k8s-nginx--deployment--7fcdb87857--7hrhk-eth0" Sep 8 23:46:33.713592 containerd[1439]: 2025-09-08 23:46:33.695 [INFO][2418] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="3eaaf2b1b0a06fe88c9f04cf6fc085dee997f0c74824d360fce189f520fd3d35" Namespace="default" Pod="nginx-deployment-7fcdb87857-7hrhk" WorkloadEndpoint="10.0.0.53-k8s-nginx--deployment--7fcdb87857--7hrhk-eth0" Sep 8 23:46:33.713592 containerd[1439]: 2025-09-08 23:46:33.697 [INFO][2418] cni-plugin/k8s.go 446: Added Mac, interface name, and 
active container ID to endpoint ContainerID="3eaaf2b1b0a06fe88c9f04cf6fc085dee997f0c74824d360fce189f520fd3d35" Namespace="default" Pod="nginx-deployment-7fcdb87857-7hrhk" WorkloadEndpoint="10.0.0.53-k8s-nginx--deployment--7fcdb87857--7hrhk-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"10.0.0.53-k8s-nginx--deployment--7fcdb87857--7hrhk-eth0", GenerateName:"nginx-deployment-7fcdb87857-", Namespace:"default", SelfLink:"", UID:"e130e2c7-bc46-45ee-b4ae-75934071745b", ResourceVersion:"1189", Generation:0, CreationTimestamp:time.Date(2025, time.September, 8, 23, 46, 33, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app":"nginx", "pod-template-hash":"7fcdb87857", "projectcalico.org/namespace":"default", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"default"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"10.0.0.53", ContainerID:"3eaaf2b1b0a06fe88c9f04cf6fc085dee997f0c74824d360fce189f520fd3d35", Pod:"nginx-deployment-7fcdb87857-7hrhk", Endpoint:"eth0", ServiceAccountName:"default", IPNetworks:[]string{"192.168.100.193/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.default", "ksa.default.default"}, InterfaceName:"cali5a99d3fb7e3", MAC:"b6:c1:48:ea:8b:46", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 8 23:46:33.713592 containerd[1439]: 2025-09-08 23:46:33.706 [INFO][2418] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="3eaaf2b1b0a06fe88c9f04cf6fc085dee997f0c74824d360fce189f520fd3d35" Namespace="default" Pod="nginx-deployment-7fcdb87857-7hrhk" WorkloadEndpoint="10.0.0.53-k8s-nginx--deployment--7fcdb87857--7hrhk-eth0" Sep 8 23:46:33.735691 containerd[1439]: time="2025-09-08T23:46:33.735373493Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 8 23:46:33.735691 containerd[1439]: time="2025-09-08T23:46:33.735447653Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 8 23:46:33.735691 containerd[1439]: time="2025-09-08T23:46:33.735460173Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 8 23:46:33.741989 containerd[1439]: time="2025-09-08T23:46:33.740430213Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 8 23:46:33.767237 systemd[1]: Started cri-containerd-3eaaf2b1b0a06fe88c9f04cf6fc085dee997f0c74824d360fce189f520fd3d35.scope - libcontainer container 3eaaf2b1b0a06fe88c9f04cf6fc085dee997f0c74824d360fce189f520fd3d35. 
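[Annotation] With the WorkloadEndpoint written to the datastore, the host side of the nginx pod's veth pair (cali5a99d3fb7e3, MAC b6:c1:48:ea:8b:46) exists on the node and Calico programs a route to the assigned /32 address 192.168.100.193. An illustrative host-side verification:

  # Host end of the veth pair created for nginx-deployment-7fcdb87857-7hrhk
  ip link show cali5a99d3fb7e3
  # Calico installs a per-endpoint /32 route pointing at that interface
  ip route show | grep 192.168.100.193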
Sep 8 23:46:33.781043 systemd-resolved[1365]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Sep 8 23:46:33.806467 containerd[1439]: time="2025-09-08T23:46:33.806427493Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-7fcdb87857-7hrhk,Uid:e130e2c7-bc46-45ee-b4ae-75934071745b,Namespace:default,Attempt:0,} returns sandbox id \"3eaaf2b1b0a06fe88c9f04cf6fc085dee997f0c74824d360fce189f520fd3d35\"" Sep 8 23:46:33.808038 containerd[1439]: time="2025-09-08T23:46:33.808003613Z" level=info msg="PullImage \"ghcr.io/flatcar/nginx:latest\"" Sep 8 23:46:33.827550 systemd-networkd[1362]: cali6993a20bf2d: Link UP Sep 8 23:46:33.827728 systemd-networkd[1362]: cali6993a20bf2d: Gained carrier Sep 8 23:46:33.841414 containerd[1439]: 2025-09-08 23:46:33.726 [INFO][2457] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Sep 8 23:46:33.841414 containerd[1439]: 2025-09-08 23:46:33.742 [INFO][2457] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {10.0.0.53-k8s-csi--node--driver--ll9w8-eth0 csi-node-driver- calico-system 17880a9e-28fa-4983-b764-f48424ebdd3b 1089 0 2025-09-08 23:46:21 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:6c96d95cc7 k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s 10.0.0.53 csi-node-driver-ll9w8 eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] cali6993a20bf2d [] [] }} ContainerID="545ee6afcb8a28c99fd8158c73c27e54f78414b3983fcd13db56e4523275ac54" Namespace="calico-system" Pod="csi-node-driver-ll9w8" WorkloadEndpoint="10.0.0.53-k8s-csi--node--driver--ll9w8-" Sep 8 23:46:33.841414 containerd[1439]: 2025-09-08 23:46:33.742 [INFO][2457] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="545ee6afcb8a28c99fd8158c73c27e54f78414b3983fcd13db56e4523275ac54" Namespace="calico-system" Pod="csi-node-driver-ll9w8" WorkloadEndpoint="10.0.0.53-k8s-csi--node--driver--ll9w8-eth0" Sep 8 23:46:33.841414 containerd[1439]: 2025-09-08 23:46:33.777 [INFO][2511] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="545ee6afcb8a28c99fd8158c73c27e54f78414b3983fcd13db56e4523275ac54" HandleID="k8s-pod-network.545ee6afcb8a28c99fd8158c73c27e54f78414b3983fcd13db56e4523275ac54" Workload="10.0.0.53-k8s-csi--node--driver--ll9w8-eth0" Sep 8 23:46:33.841414 containerd[1439]: 2025-09-08 23:46:33.778 [INFO][2511] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="545ee6afcb8a28c99fd8158c73c27e54f78414b3983fcd13db56e4523275ac54" HandleID="k8s-pod-network.545ee6afcb8a28c99fd8158c73c27e54f78414b3983fcd13db56e4523275ac54" Workload="10.0.0.53-k8s-csi--node--driver--ll9w8-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x400004c6f0), Attrs:map[string]string{"namespace":"calico-system", "node":"10.0.0.53", "pod":"csi-node-driver-ll9w8", "timestamp":"2025-09-08 23:46:33.777918173 +0000 UTC"}, Hostname:"10.0.0.53", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 8 23:46:33.841414 containerd[1439]: 2025-09-08 23:46:33.778 [INFO][2511] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. 
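[Annotation] systemd-networkd is simply tracking the Calico host interfaces as they appear and gain carrier (cali5a99d3fb7e3 above, cali6993a20bf2d here); the CNI ADD for csi-node-driver-ll9w8 now proceeds because calico-node is running and /var/lib/calico is populated. An illustrative look at those links from networkd's side:

  # Both cali* veth ends should show up as configured with carrier
  networkctl list | grep cali
  networkctl status cali6993a20bf2d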
Sep 8 23:46:33.841414 containerd[1439]: 2025-09-08 23:46:33.778 [INFO][2511] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 8 23:46:33.841414 containerd[1439]: 2025-09-08 23:46:33.778 [INFO][2511] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host '10.0.0.53' Sep 8 23:46:33.841414 containerd[1439]: 2025-09-08 23:46:33.789 [INFO][2511] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.545ee6afcb8a28c99fd8158c73c27e54f78414b3983fcd13db56e4523275ac54" host="10.0.0.53" Sep 8 23:46:33.841414 containerd[1439]: 2025-09-08 23:46:33.798 [INFO][2511] ipam/ipam.go 394: Looking up existing affinities for host host="10.0.0.53" Sep 8 23:46:33.841414 containerd[1439]: 2025-09-08 23:46:33.804 [INFO][2511] ipam/ipam.go 511: Trying affinity for 192.168.100.192/26 host="10.0.0.53" Sep 8 23:46:33.841414 containerd[1439]: 2025-09-08 23:46:33.807 [INFO][2511] ipam/ipam.go 158: Attempting to load block cidr=192.168.100.192/26 host="10.0.0.53" Sep 8 23:46:33.841414 containerd[1439]: 2025-09-08 23:46:33.810 [INFO][2511] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.100.192/26 host="10.0.0.53" Sep 8 23:46:33.841414 containerd[1439]: 2025-09-08 23:46:33.810 [INFO][2511] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.100.192/26 handle="k8s-pod-network.545ee6afcb8a28c99fd8158c73c27e54f78414b3983fcd13db56e4523275ac54" host="10.0.0.53" Sep 8 23:46:33.841414 containerd[1439]: 2025-09-08 23:46:33.813 [INFO][2511] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.545ee6afcb8a28c99fd8158c73c27e54f78414b3983fcd13db56e4523275ac54 Sep 8 23:46:33.841414 containerd[1439]: 2025-09-08 23:46:33.818 [INFO][2511] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.100.192/26 handle="k8s-pod-network.545ee6afcb8a28c99fd8158c73c27e54f78414b3983fcd13db56e4523275ac54" host="10.0.0.53" Sep 8 23:46:33.841414 containerd[1439]: 2025-09-08 23:46:33.824 [INFO][2511] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.100.194/26] block=192.168.100.192/26 handle="k8s-pod-network.545ee6afcb8a28c99fd8158c73c27e54f78414b3983fcd13db56e4523275ac54" host="10.0.0.53" Sep 8 23:46:33.841414 containerd[1439]: 2025-09-08 23:46:33.824 [INFO][2511] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.100.194/26] handle="k8s-pod-network.545ee6afcb8a28c99fd8158c73c27e54f78414b3983fcd13db56e4523275ac54" host="10.0.0.53" Sep 8 23:46:33.841414 containerd[1439]: 2025-09-08 23:46:33.824 [INFO][2511] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Sep 8 23:46:33.841414 containerd[1439]: 2025-09-08 23:46:33.824 [INFO][2511] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.100.194/26] IPv6=[] ContainerID="545ee6afcb8a28c99fd8158c73c27e54f78414b3983fcd13db56e4523275ac54" HandleID="k8s-pod-network.545ee6afcb8a28c99fd8158c73c27e54f78414b3983fcd13db56e4523275ac54" Workload="10.0.0.53-k8s-csi--node--driver--ll9w8-eth0" Sep 8 23:46:33.842093 containerd[1439]: 2025-09-08 23:46:33.826 [INFO][2457] cni-plugin/k8s.go 418: Populated endpoint ContainerID="545ee6afcb8a28c99fd8158c73c27e54f78414b3983fcd13db56e4523275ac54" Namespace="calico-system" Pod="csi-node-driver-ll9w8" WorkloadEndpoint="10.0.0.53-k8s-csi--node--driver--ll9w8-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"10.0.0.53-k8s-csi--node--driver--ll9w8-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"17880a9e-28fa-4983-b764-f48424ebdd3b", ResourceVersion:"1089", Generation:0, CreationTimestamp:time.Date(2025, time.September, 8, 23, 46, 21, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"6c96d95cc7", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"10.0.0.53", ContainerID:"", Pod:"csi-node-driver-ll9w8", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.100.194/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali6993a20bf2d", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 8 23:46:33.842093 containerd[1439]: 2025-09-08 23:46:33.826 [INFO][2457] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.100.194/32] ContainerID="545ee6afcb8a28c99fd8158c73c27e54f78414b3983fcd13db56e4523275ac54" Namespace="calico-system" Pod="csi-node-driver-ll9w8" WorkloadEndpoint="10.0.0.53-k8s-csi--node--driver--ll9w8-eth0" Sep 8 23:46:33.842093 containerd[1439]: 2025-09-08 23:46:33.826 [INFO][2457] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali6993a20bf2d ContainerID="545ee6afcb8a28c99fd8158c73c27e54f78414b3983fcd13db56e4523275ac54" Namespace="calico-system" Pod="csi-node-driver-ll9w8" WorkloadEndpoint="10.0.0.53-k8s-csi--node--driver--ll9w8-eth0" Sep 8 23:46:33.842093 containerd[1439]: 2025-09-08 23:46:33.827 [INFO][2457] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="545ee6afcb8a28c99fd8158c73c27e54f78414b3983fcd13db56e4523275ac54" Namespace="calico-system" Pod="csi-node-driver-ll9w8" WorkloadEndpoint="10.0.0.53-k8s-csi--node--driver--ll9w8-eth0" Sep 8 23:46:33.842093 containerd[1439]: 2025-09-08 23:46:33.828 [INFO][2457] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="545ee6afcb8a28c99fd8158c73c27e54f78414b3983fcd13db56e4523275ac54" Namespace="calico-system" Pod="csi-node-driver-ll9w8" 
WorkloadEndpoint="10.0.0.53-k8s-csi--node--driver--ll9w8-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"10.0.0.53-k8s-csi--node--driver--ll9w8-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"17880a9e-28fa-4983-b764-f48424ebdd3b", ResourceVersion:"1089", Generation:0, CreationTimestamp:time.Date(2025, time.September, 8, 23, 46, 21, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"6c96d95cc7", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"10.0.0.53", ContainerID:"545ee6afcb8a28c99fd8158c73c27e54f78414b3983fcd13db56e4523275ac54", Pod:"csi-node-driver-ll9w8", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.100.194/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali6993a20bf2d", MAC:"96:9c:aa:7f:a5:43", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 8 23:46:33.842093 containerd[1439]: 2025-09-08 23:46:33.839 [INFO][2457] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="545ee6afcb8a28c99fd8158c73c27e54f78414b3983fcd13db56e4523275ac54" Namespace="calico-system" Pod="csi-node-driver-ll9w8" WorkloadEndpoint="10.0.0.53-k8s-csi--node--driver--ll9w8-eth0" Sep 8 23:46:33.856405 containerd[1439]: time="2025-09-08T23:46:33.856249493Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 8 23:46:33.856876 containerd[1439]: time="2025-09-08T23:46:33.856690173Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 8 23:46:33.856876 containerd[1439]: time="2025-09-08T23:46:33.856709053Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 8 23:46:33.856876 containerd[1439]: time="2025-09-08T23:46:33.856804653Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 8 23:46:33.876186 systemd[1]: Started cri-containerd-545ee6afcb8a28c99fd8158c73c27e54f78414b3983fcd13db56e4523275ac54.scope - libcontainer container 545ee6afcb8a28c99fd8158c73c27e54f78414b3983fcd13db56e4523275ac54. 
Sep 8 23:46:33.887823 systemd-resolved[1365]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Sep 8 23:46:33.900597 containerd[1439]: time="2025-09-08T23:46:33.900561573Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-ll9w8,Uid:17880a9e-28fa-4983-b764-f48424ebdd3b,Namespace:calico-system,Attempt:4,} returns sandbox id \"545ee6afcb8a28c99fd8158c73c27e54f78414b3983fcd13db56e4523275ac54\"" Sep 8 23:46:34.511139 kubelet[1755]: E0908 23:46:34.511099 1755 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Sep 8 23:46:34.595958 kernel: bpftool[2710]: memfd_create() called without MFD_EXEC or MFD_NOEXEC_SEAL set Sep 8 23:46:34.703628 systemd[1]: run-containerd-runc-k8s.io-f061e4ef395f08e6fd4238fea19c3cbae832b2a3fa7a8887accbce4b422ebae8-runc.6wki7o.mount: Deactivated successfully. Sep 8 23:46:34.800179 systemd-networkd[1362]: vxlan.calico: Link UP Sep 8 23:46:34.800186 systemd-networkd[1362]: vxlan.calico: Gained carrier Sep 8 23:46:35.113381 systemd-networkd[1362]: cali5a99d3fb7e3: Gained IPv6LL Sep 8 23:46:35.432759 systemd-networkd[1362]: cali6993a20bf2d: Gained IPv6LL Sep 8 23:46:35.512152 kubelet[1755]: E0908 23:46:35.512064 1755 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Sep 8 23:46:35.715695 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1295687787.mount: Deactivated successfully. Sep 8 23:46:35.880052 systemd-networkd[1362]: vxlan.calico: Gained IPv6LL Sep 8 23:46:36.512419 kubelet[1755]: E0908 23:46:36.512368 1755 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Sep 8 23:46:36.570244 containerd[1439]: time="2025-09-08T23:46:36.569544093Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/nginx:latest\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 8 23:46:36.571349 containerd[1439]: time="2025-09-08T23:46:36.571245853Z" level=info msg="stop pulling image ghcr.io/flatcar/nginx:latest: active requests=0, bytes read=69986522" Sep 8 23:46:36.572630 containerd[1439]: time="2025-09-08T23:46:36.572602893Z" level=info msg="ImageCreate event name:\"sha256:9fddf21fd9c2634e7bf6e633e36b0fb227f6cd5fbe1b3334a16de3ab50f31e5e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 8 23:46:36.575506 containerd[1439]: time="2025-09-08T23:46:36.575460733Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/nginx@sha256:883ca821a91fc20bcde818eeee4e1ed55ef63a020d6198ecd5a03af5a4eac530\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 8 23:46:36.576689 containerd[1439]: time="2025-09-08T23:46:36.576658093Z" level=info msg="Pulled image \"ghcr.io/flatcar/nginx:latest\" with image id \"sha256:9fddf21fd9c2634e7bf6e633e36b0fb227f6cd5fbe1b3334a16de3ab50f31e5e\", repo tag \"ghcr.io/flatcar/nginx:latest\", repo digest \"ghcr.io/flatcar/nginx@sha256:883ca821a91fc20bcde818eeee4e1ed55ef63a020d6198ecd5a03af5a4eac530\", size \"69986400\" in 2.76861288s" Sep 8 23:46:36.576743 containerd[1439]: time="2025-09-08T23:46:36.576697933Z" level=info msg="PullImage \"ghcr.io/flatcar/nginx:latest\" returns image reference \"sha256:9fddf21fd9c2634e7bf6e633e36b0fb227f6cd5fbe1b3334a16de3ab50f31e5e\"" Sep 8 23:46:36.577909 containerd[1439]: time="2025-09-08T23:46:36.577879933Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.3\"" Sep 8 23:46:36.578841 containerd[1439]: 
time="2025-09-08T23:46:36.578814213Z" level=info msg="CreateContainer within sandbox \"3eaaf2b1b0a06fe88c9f04cf6fc085dee997f0c74824d360fce189f520fd3d35\" for container &ContainerMetadata{Name:nginx,Attempt:0,}" Sep 8 23:46:36.593065 containerd[1439]: time="2025-09-08T23:46:36.592977573Z" level=info msg="CreateContainer within sandbox \"3eaaf2b1b0a06fe88c9f04cf6fc085dee997f0c74824d360fce189f520fd3d35\" for &ContainerMetadata{Name:nginx,Attempt:0,} returns container id \"018b7dfdae251abea2f71b205ab55fe2944a0c4a142c7095e57975f5c1a45967\"" Sep 8 23:46:36.593756 containerd[1439]: time="2025-09-08T23:46:36.593626453Z" level=info msg="StartContainer for \"018b7dfdae251abea2f71b205ab55fe2944a0c4a142c7095e57975f5c1a45967\"" Sep 8 23:46:36.675151 systemd[1]: Started cri-containerd-018b7dfdae251abea2f71b205ab55fe2944a0c4a142c7095e57975f5c1a45967.scope - libcontainer container 018b7dfdae251abea2f71b205ab55fe2944a0c4a142c7095e57975f5c1a45967. Sep 8 23:46:36.763681 containerd[1439]: time="2025-09-08T23:46:36.763545493Z" level=info msg="StartContainer for \"018b7dfdae251abea2f71b205ab55fe2944a0c4a142c7095e57975f5c1a45967\" returns successfully" Sep 8 23:46:37.449453 containerd[1439]: time="2025-09-08T23:46:37.449384213Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 8 23:46:37.451091 containerd[1439]: time="2025-09-08T23:46:37.451037853Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.3: active requests=0, bytes read=8227489" Sep 8 23:46:37.452077 containerd[1439]: time="2025-09-08T23:46:37.452023933Z" level=info msg="ImageCreate event name:\"sha256:5e2b30128ce4b607acd97d3edef62ce1a90be0259903090a51c360adbe4a8f3b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 8 23:46:37.454080 containerd[1439]: time="2025-09-08T23:46:37.454040613Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi@sha256:f22c88018d8b58c4ef0052f594b216a13bd6852166ac131a538c5ab2fba23bb2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 8 23:46:37.454735 containerd[1439]: time="2025-09-08T23:46:37.454704493Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/csi:v3.30.3\" with image id \"sha256:5e2b30128ce4b607acd97d3edef62ce1a90be0259903090a51c360adbe4a8f3b\", repo tag \"ghcr.io/flatcar/calico/csi:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/csi@sha256:f22c88018d8b58c4ef0052f594b216a13bd6852166ac131a538c5ab2fba23bb2\", size \"9596730\" in 876.78628ms" Sep 8 23:46:37.454784 containerd[1439]: time="2025-09-08T23:46:37.454741693Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.3\" returns image reference \"sha256:5e2b30128ce4b607acd97d3edef62ce1a90be0259903090a51c360adbe4a8f3b\"" Sep 8 23:46:37.456855 containerd[1439]: time="2025-09-08T23:46:37.456822573Z" level=info msg="CreateContainer within sandbox \"545ee6afcb8a28c99fd8158c73c27e54f78414b3983fcd13db56e4523275ac54\" for container &ContainerMetadata{Name:calico-csi,Attempt:0,}" Sep 8 23:46:37.488415 containerd[1439]: time="2025-09-08T23:46:37.488323453Z" level=info msg="CreateContainer within sandbox \"545ee6afcb8a28c99fd8158c73c27e54f78414b3983fcd13db56e4523275ac54\" for &ContainerMetadata{Name:calico-csi,Attempt:0,} returns container id \"b46fecfdd2c38c505307677d933940ac252ed0de9989bac492690638de36ba05\"" Sep 8 23:46:37.489144 containerd[1439]: time="2025-09-08T23:46:37.489110973Z" level=info msg="StartContainer for \"b46fecfdd2c38c505307677d933940ac252ed0de9989bac492690638de36ba05\"" Sep 8 
23:46:37.512890 kubelet[1755]: E0908 23:46:37.512829 1755 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Sep 8 23:46:37.524138 systemd[1]: Started cri-containerd-b46fecfdd2c38c505307677d933940ac252ed0de9989bac492690638de36ba05.scope - libcontainer container b46fecfdd2c38c505307677d933940ac252ed0de9989bac492690638de36ba05. Sep 8 23:46:37.554757 containerd[1439]: time="2025-09-08T23:46:37.554693293Z" level=info msg="StartContainer for \"b46fecfdd2c38c505307677d933940ac252ed0de9989bac492690638de36ba05\" returns successfully" Sep 8 23:46:37.556894 containerd[1439]: time="2025-09-08T23:46:37.556842133Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.3\"" Sep 8 23:46:38.513899 kubelet[1755]: E0908 23:46:38.513862 1755 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Sep 8 23:46:38.649297 containerd[1439]: time="2025-09-08T23:46:38.649245453Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 8 23:46:38.650613 containerd[1439]: time="2025-09-08T23:46:38.650424053Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.3: active requests=0, bytes read=13761208" Sep 8 23:46:38.651458 containerd[1439]: time="2025-09-08T23:46:38.651422533Z" level=info msg="ImageCreate event name:\"sha256:a319b5bdc1001e98875b68e2943279adb74bcb19d09f1db857bc27959a078a65\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 8 23:46:38.655035 containerd[1439]: time="2025-09-08T23:46:38.654965973Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar@sha256:731ab232ca708102ab332340b1274d5cd656aa896ecc5368ee95850b811df86f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 8 23:46:38.655799 containerd[1439]: time="2025-09-08T23:46:38.655762253Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.3\" with image id \"sha256:a319b5bdc1001e98875b68e2943279adb74bcb19d09f1db857bc27959a078a65\", repo tag \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/node-driver-registrar@sha256:731ab232ca708102ab332340b1274d5cd656aa896ecc5368ee95850b811df86f\", size \"15130401\" in 1.09888224s" Sep 8 23:46:38.655799 containerd[1439]: time="2025-09-08T23:46:38.655797613Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.3\" returns image reference \"sha256:a319b5bdc1001e98875b68e2943279adb74bcb19d09f1db857bc27959a078a65\"" Sep 8 23:46:38.658485 containerd[1439]: time="2025-09-08T23:46:38.658436133Z" level=info msg="CreateContainer within sandbox \"545ee6afcb8a28c99fd8158c73c27e54f78414b3983fcd13db56e4523275ac54\" for container &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,}" Sep 8 23:46:38.675542 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount430261955.mount: Deactivated successfully. 
Sep 8 23:46:38.690693 containerd[1439]: time="2025-09-08T23:46:38.690639653Z" level=info msg="CreateContainer within sandbox \"545ee6afcb8a28c99fd8158c73c27e54f78414b3983fcd13db56e4523275ac54\" for &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,} returns container id \"dccfe6d24a54124fffaba38688592257c3e87bc92c45fcbba72f358352f4b970\"" Sep 8 23:46:38.691539 containerd[1439]: time="2025-09-08T23:46:38.691454613Z" level=info msg="StartContainer for \"dccfe6d24a54124fffaba38688592257c3e87bc92c45fcbba72f358352f4b970\"" Sep 8 23:46:38.726491 systemd[1]: Started cri-containerd-dccfe6d24a54124fffaba38688592257c3e87bc92c45fcbba72f358352f4b970.scope - libcontainer container dccfe6d24a54124fffaba38688592257c3e87bc92c45fcbba72f358352f4b970. Sep 8 23:46:38.761861 containerd[1439]: time="2025-09-08T23:46:38.761787093Z" level=info msg="StartContainer for \"dccfe6d24a54124fffaba38688592257c3e87bc92c45fcbba72f358352f4b970\" returns successfully" Sep 8 23:46:39.514867 kubelet[1755]: E0908 23:46:39.514802 1755 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Sep 8 23:46:39.660455 kubelet[1755]: I0908 23:46:39.660398 1755 csi_plugin.go:100] kubernetes.io/csi: Trying to validate a new CSI Driver with name: csi.tigera.io endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock versions: 1.0.0 Sep 8 23:46:39.660455 kubelet[1755]: I0908 23:46:39.660462 1755 csi_plugin.go:113] kubernetes.io/csi: Register new plugin with name: csi.tigera.io at endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock Sep 8 23:46:39.753558 kubelet[1755]: I0908 23:46:39.753456 1755 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="default/nginx-deployment-7fcdb87857-7hrhk" podStartSLOduration=3.983175893 podStartE2EDuration="6.753435413s" podCreationTimestamp="2025-09-08 23:46:33 +0000 UTC" firstStartedPulling="2025-09-08 23:46:33.807474573 +0000 UTC m=+13.000787961" lastFinishedPulling="2025-09-08 23:46:36.577734053 +0000 UTC m=+15.771047481" observedRunningTime="2025-09-08 23:46:37.714578373 +0000 UTC m=+16.907891801" watchObservedRunningTime="2025-09-08 23:46:39.753435413 +0000 UTC m=+18.946748801" Sep 8 23:46:39.753750 kubelet[1755]: I0908 23:46:39.753722 1755 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/csi-node-driver-ll9w8" podStartSLOduration=13.998676973 podStartE2EDuration="18.753715173s" podCreationTimestamp="2025-09-08 23:46:21 +0000 UTC" firstStartedPulling="2025-09-08 23:46:33.901761773 +0000 UTC m=+13.095075201" lastFinishedPulling="2025-09-08 23:46:38.656799973 +0000 UTC m=+17.850113401" observedRunningTime="2025-09-08 23:46:39.753569853 +0000 UTC m=+18.946883281" watchObservedRunningTime="2025-09-08 23:46:39.753715173 +0000 UTC m=+18.947028561" Sep 8 23:46:39.899636 systemd[1]: Created slice kubepods-besteffort-pod5dd7a4d0_fb3e_47df_95ab_7ef268654623.slice - libcontainer container kubepods-besteffort-pod5dd7a4d0_fb3e_47df_95ab_7ef268654623.slice. 
Sep 8 23:46:39.930839 kubelet[1755]: I0908 23:46:39.930767 1755 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"data\" (UniqueName: \"kubernetes.io/empty-dir/5dd7a4d0-fb3e-47df-95ab-7ef268654623-data\") pod \"nfs-server-provisioner-0\" (UID: \"5dd7a4d0-fb3e-47df-95ab-7ef268654623\") " pod="default/nfs-server-provisioner-0" Sep 8 23:46:39.930839 kubelet[1755]: I0908 23:46:39.930814 1755 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hbnhh\" (UniqueName: \"kubernetes.io/projected/5dd7a4d0-fb3e-47df-95ab-7ef268654623-kube-api-access-hbnhh\") pod \"nfs-server-provisioner-0\" (UID: \"5dd7a4d0-fb3e-47df-95ab-7ef268654623\") " pod="default/nfs-server-provisioner-0" Sep 8 23:46:40.204750 containerd[1439]: time="2025-09-08T23:46:40.204493573Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nfs-server-provisioner-0,Uid:5dd7a4d0-fb3e-47df-95ab-7ef268654623,Namespace:default,Attempt:0,}" Sep 8 23:46:40.329014 systemd-networkd[1362]: cali60e51b789ff: Link UP Sep 8 23:46:40.329779 systemd-networkd[1362]: cali60e51b789ff: Gained carrier Sep 8 23:46:40.345777 containerd[1439]: 2025-09-08 23:46:40.260 [INFO][2992] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {10.0.0.53-k8s-nfs--server--provisioner--0-eth0 nfs-server-provisioner- default 5dd7a4d0-fb3e-47df-95ab-7ef268654623 1264 0 2025-09-08 23:46:39 +0000 UTC map[app:nfs-server-provisioner apps.kubernetes.io/pod-index:0 chart:nfs-server-provisioner-1.8.0 controller-revision-hash:nfs-server-provisioner-d5cbb7f57 heritage:Helm projectcalico.org/namespace:default projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:nfs-server-provisioner release:nfs-server-provisioner statefulset.kubernetes.io/pod-name:nfs-server-provisioner-0] map[] [] [] []} {k8s 10.0.0.53 nfs-server-provisioner-0 eth0 nfs-server-provisioner [] [] [kns.default ksa.default.nfs-server-provisioner] cali60e51b789ff [{nfs TCP 2049 0 } {nfs-udp UDP 2049 0 } {nlockmgr TCP 32803 0 } {nlockmgr-udp UDP 32803 0 } {mountd TCP 20048 0 } {mountd-udp UDP 20048 0 } {rquotad TCP 875 0 } {rquotad-udp UDP 875 0 } {rpcbind TCP 111 0 } {rpcbind-udp UDP 111 0 } {statd TCP 662 0 } {statd-udp UDP 662 0 }] [] }} ContainerID="ce5c5d4449cf458f5195570d21345cb5f22f19fb33a91a474d600b4b643bb34b" Namespace="default" Pod="nfs-server-provisioner-0" WorkloadEndpoint="10.0.0.53-k8s-nfs--server--provisioner--0-" Sep 8 23:46:40.345777 containerd[1439]: 2025-09-08 23:46:40.260 [INFO][2992] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="ce5c5d4449cf458f5195570d21345cb5f22f19fb33a91a474d600b4b643bb34b" Namespace="default" Pod="nfs-server-provisioner-0" WorkloadEndpoint="10.0.0.53-k8s-nfs--server--provisioner--0-eth0" Sep 8 23:46:40.345777 containerd[1439]: 2025-09-08 23:46:40.285 [INFO][3006] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="ce5c5d4449cf458f5195570d21345cb5f22f19fb33a91a474d600b4b643bb34b" HandleID="k8s-pod-network.ce5c5d4449cf458f5195570d21345cb5f22f19fb33a91a474d600b4b643bb34b" Workload="10.0.0.53-k8s-nfs--server--provisioner--0-eth0" Sep 8 23:46:40.345777 containerd[1439]: 2025-09-08 23:46:40.286 [INFO][3006] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="ce5c5d4449cf458f5195570d21345cb5f22f19fb33a91a474d600b4b643bb34b" HandleID="k8s-pod-network.ce5c5d4449cf458f5195570d21345cb5f22f19fb33a91a474d600b4b643bb34b" 
Workload="10.0.0.53-k8s-nfs--server--provisioner--0-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x4000127a70), Attrs:map[string]string{"namespace":"default", "node":"10.0.0.53", "pod":"nfs-server-provisioner-0", "timestamp":"2025-09-08 23:46:40.285852693 +0000 UTC"}, Hostname:"10.0.0.53", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 8 23:46:40.345777 containerd[1439]: 2025-09-08 23:46:40.286 [INFO][3006] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 8 23:46:40.345777 containerd[1439]: 2025-09-08 23:46:40.286 [INFO][3006] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 8 23:46:40.345777 containerd[1439]: 2025-09-08 23:46:40.286 [INFO][3006] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host '10.0.0.53' Sep 8 23:46:40.345777 containerd[1439]: 2025-09-08 23:46:40.295 [INFO][3006] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.ce5c5d4449cf458f5195570d21345cb5f22f19fb33a91a474d600b4b643bb34b" host="10.0.0.53" Sep 8 23:46:40.345777 containerd[1439]: 2025-09-08 23:46:40.301 [INFO][3006] ipam/ipam.go 394: Looking up existing affinities for host host="10.0.0.53" Sep 8 23:46:40.345777 containerd[1439]: 2025-09-08 23:46:40.305 [INFO][3006] ipam/ipam.go 511: Trying affinity for 192.168.100.192/26 host="10.0.0.53" Sep 8 23:46:40.345777 containerd[1439]: 2025-09-08 23:46:40.308 [INFO][3006] ipam/ipam.go 158: Attempting to load block cidr=192.168.100.192/26 host="10.0.0.53" Sep 8 23:46:40.345777 containerd[1439]: 2025-09-08 23:46:40.311 [INFO][3006] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.100.192/26 host="10.0.0.53" Sep 8 23:46:40.345777 containerd[1439]: 2025-09-08 23:46:40.311 [INFO][3006] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.100.192/26 handle="k8s-pod-network.ce5c5d4449cf458f5195570d21345cb5f22f19fb33a91a474d600b4b643bb34b" host="10.0.0.53" Sep 8 23:46:40.345777 containerd[1439]: 2025-09-08 23:46:40.313 [INFO][3006] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.ce5c5d4449cf458f5195570d21345cb5f22f19fb33a91a474d600b4b643bb34b Sep 8 23:46:40.345777 containerd[1439]: 2025-09-08 23:46:40.319 [INFO][3006] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.100.192/26 handle="k8s-pod-network.ce5c5d4449cf458f5195570d21345cb5f22f19fb33a91a474d600b4b643bb34b" host="10.0.0.53" Sep 8 23:46:40.345777 containerd[1439]: 2025-09-08 23:46:40.324 [INFO][3006] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.100.195/26] block=192.168.100.192/26 handle="k8s-pod-network.ce5c5d4449cf458f5195570d21345cb5f22f19fb33a91a474d600b4b643bb34b" host="10.0.0.53" Sep 8 23:46:40.345777 containerd[1439]: 2025-09-08 23:46:40.324 [INFO][3006] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.100.195/26] handle="k8s-pod-network.ce5c5d4449cf458f5195570d21345cb5f22f19fb33a91a474d600b4b643bb34b" host="10.0.0.53" Sep 8 23:46:40.345777 containerd[1439]: 2025-09-08 23:46:40.324 [INFO][3006] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Sep 8 23:46:40.345777 containerd[1439]: 2025-09-08 23:46:40.324 [INFO][3006] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.100.195/26] IPv6=[] ContainerID="ce5c5d4449cf458f5195570d21345cb5f22f19fb33a91a474d600b4b643bb34b" HandleID="k8s-pod-network.ce5c5d4449cf458f5195570d21345cb5f22f19fb33a91a474d600b4b643bb34b" Workload="10.0.0.53-k8s-nfs--server--provisioner--0-eth0" Sep 8 23:46:40.346504 containerd[1439]: 2025-09-08 23:46:40.326 [INFO][2992] cni-plugin/k8s.go 418: Populated endpoint ContainerID="ce5c5d4449cf458f5195570d21345cb5f22f19fb33a91a474d600b4b643bb34b" Namespace="default" Pod="nfs-server-provisioner-0" WorkloadEndpoint="10.0.0.53-k8s-nfs--server--provisioner--0-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"10.0.0.53-k8s-nfs--server--provisioner--0-eth0", GenerateName:"nfs-server-provisioner-", Namespace:"default", SelfLink:"", UID:"5dd7a4d0-fb3e-47df-95ab-7ef268654623", ResourceVersion:"1264", Generation:0, CreationTimestamp:time.Date(2025, time.September, 8, 23, 46, 39, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app":"nfs-server-provisioner", "apps.kubernetes.io/pod-index":"0", "chart":"nfs-server-provisioner-1.8.0", "controller-revision-hash":"nfs-server-provisioner-d5cbb7f57", "heritage":"Helm", "projectcalico.org/namespace":"default", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"nfs-server-provisioner", "release":"nfs-server-provisioner", "statefulset.kubernetes.io/pod-name":"nfs-server-provisioner-0"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"10.0.0.53", ContainerID:"", Pod:"nfs-server-provisioner-0", Endpoint:"eth0", ServiceAccountName:"nfs-server-provisioner", IPNetworks:[]string{"192.168.100.195/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.default", "ksa.default.nfs-server-provisioner"}, InterfaceName:"cali60e51b789ff", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"nfs", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x801, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"nfs-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x801, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"nlockmgr", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x8023, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"nlockmgr-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x8023, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"mountd", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x4e50, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"mountd-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x4e50, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"rquotad", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x36b, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"rquotad-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x36b, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"rpcbind", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x6f, HostPort:0x0, HostIP:""}, 
v3.WorkloadEndpointPort{Name:"rpcbind-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x6f, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"statd", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x296, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"statd-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x296, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 8 23:46:40.346504 containerd[1439]: 2025-09-08 23:46:40.326 [INFO][2992] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.100.195/32] ContainerID="ce5c5d4449cf458f5195570d21345cb5f22f19fb33a91a474d600b4b643bb34b" Namespace="default" Pod="nfs-server-provisioner-0" WorkloadEndpoint="10.0.0.53-k8s-nfs--server--provisioner--0-eth0" Sep 8 23:46:40.346504 containerd[1439]: 2025-09-08 23:46:40.326 [INFO][2992] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali60e51b789ff ContainerID="ce5c5d4449cf458f5195570d21345cb5f22f19fb33a91a474d600b4b643bb34b" Namespace="default" Pod="nfs-server-provisioner-0" WorkloadEndpoint="10.0.0.53-k8s-nfs--server--provisioner--0-eth0" Sep 8 23:46:40.346504 containerd[1439]: 2025-09-08 23:46:40.330 [INFO][2992] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="ce5c5d4449cf458f5195570d21345cb5f22f19fb33a91a474d600b4b643bb34b" Namespace="default" Pod="nfs-server-provisioner-0" WorkloadEndpoint="10.0.0.53-k8s-nfs--server--provisioner--0-eth0" Sep 8 23:46:40.346642 containerd[1439]: 2025-09-08 23:46:40.333 [INFO][2992] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="ce5c5d4449cf458f5195570d21345cb5f22f19fb33a91a474d600b4b643bb34b" Namespace="default" Pod="nfs-server-provisioner-0" WorkloadEndpoint="10.0.0.53-k8s-nfs--server--provisioner--0-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"10.0.0.53-k8s-nfs--server--provisioner--0-eth0", GenerateName:"nfs-server-provisioner-", Namespace:"default", SelfLink:"", UID:"5dd7a4d0-fb3e-47df-95ab-7ef268654623", ResourceVersion:"1264", Generation:0, CreationTimestamp:time.Date(2025, time.September, 8, 23, 46, 39, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app":"nfs-server-provisioner", "apps.kubernetes.io/pod-index":"0", "chart":"nfs-server-provisioner-1.8.0", "controller-revision-hash":"nfs-server-provisioner-d5cbb7f57", "heritage":"Helm", "projectcalico.org/namespace":"default", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"nfs-server-provisioner", "release":"nfs-server-provisioner", "statefulset.kubernetes.io/pod-name":"nfs-server-provisioner-0"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"10.0.0.53", ContainerID:"ce5c5d4449cf458f5195570d21345cb5f22f19fb33a91a474d600b4b643bb34b", Pod:"nfs-server-provisioner-0", Endpoint:"eth0", ServiceAccountName:"nfs-server-provisioner", IPNetworks:[]string{"192.168.100.195/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.default", "ksa.default.nfs-server-provisioner"}, InterfaceName:"cali60e51b789ff", MAC:"3e:47:32:19:f7:ca", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"nfs", 
Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x801, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"nfs-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x801, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"nlockmgr", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x8023, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"nlockmgr-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x8023, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"mountd", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x4e50, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"mountd-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x4e50, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"rquotad", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x36b, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"rquotad-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x36b, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"rpcbind", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x6f, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"rpcbind-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x6f, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"statd", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x296, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"statd-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x296, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 8 23:46:40.346642 containerd[1439]: 2025-09-08 23:46:40.343 [INFO][2992] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="ce5c5d4449cf458f5195570d21345cb5f22f19fb33a91a474d600b4b643bb34b" Namespace="default" Pod="nfs-server-provisioner-0" WorkloadEndpoint="10.0.0.53-k8s-nfs--server--provisioner--0-eth0" Sep 8 23:46:40.364481 containerd[1439]: time="2025-09-08T23:46:40.364122013Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 8 23:46:40.364826 containerd[1439]: time="2025-09-08T23:46:40.364575213Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 8 23:46:40.364826 containerd[1439]: time="2025-09-08T23:46:40.364594693Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 8 23:46:40.364826 containerd[1439]: time="2025-09-08T23:46:40.364686653Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 8 23:46:40.393166 systemd[1]: Started cri-containerd-ce5c5d4449cf458f5195570d21345cb5f22f19fb33a91a474d600b4b643bb34b.scope - libcontainer container ce5c5d4449cf458f5195570d21345cb5f22f19fb33a91a474d600b4b643bb34b. 
Sep 8 23:46:40.405265 systemd-resolved[1365]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Sep 8 23:46:40.493569 containerd[1439]: time="2025-09-08T23:46:40.492255853Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nfs-server-provisioner-0,Uid:5dd7a4d0-fb3e-47df-95ab-7ef268654623,Namespace:default,Attempt:0,} returns sandbox id \"ce5c5d4449cf458f5195570d21345cb5f22f19fb33a91a474d600b4b643bb34b\"" Sep 8 23:46:40.495205 containerd[1439]: time="2025-09-08T23:46:40.495145533Z" level=info msg="PullImage \"registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8\"" Sep 8 23:46:40.515642 kubelet[1755]: E0908 23:46:40.515575 1755 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Sep 8 23:46:41.502514 kubelet[1755]: E0908 23:46:41.502461 1755 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Sep 8 23:46:41.516488 kubelet[1755]: E0908 23:46:41.516156 1755 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Sep 8 23:46:42.210616 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4183181858.mount: Deactivated successfully. Sep 8 23:46:42.280868 systemd-networkd[1362]: cali60e51b789ff: Gained IPv6LL Sep 8 23:46:42.516543 kubelet[1755]: E0908 23:46:42.516429 1755 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Sep 8 23:46:43.458739 containerd[1439]: time="2025-09-08T23:46:43.458649293Z" level=info msg="ImageCreate event name:\"registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 8 23:46:43.459790 containerd[1439]: time="2025-09-08T23:46:43.459724773Z" level=info msg="stop pulling image registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8: active requests=0, bytes read=87373625" Sep 8 23:46:43.460917 containerd[1439]: time="2025-09-08T23:46:43.460883813Z" level=info msg="ImageCreate event name:\"sha256:5a42a519e0a8cf95c3c5f18f767c58c8c8b072aaea0a26e5e47a6f206c7df685\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 8 23:46:43.465037 containerd[1439]: time="2025-09-08T23:46:43.464982013Z" level=info msg="ImageCreate event name:\"registry.k8s.io/sig-storage/nfs-provisioner@sha256:c825f3d5e28bde099bd7a3daace28772d412c9157ad47fa752a9ad0baafc118d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 8 23:46:43.466448 containerd[1439]: time="2025-09-08T23:46:43.466280413Z" level=info msg="Pulled image \"registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8\" with image id \"sha256:5a42a519e0a8cf95c3c5f18f767c58c8c8b072aaea0a26e5e47a6f206c7df685\", repo tag \"registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8\", repo digest \"registry.k8s.io/sig-storage/nfs-provisioner@sha256:c825f3d5e28bde099bd7a3daace28772d412c9157ad47fa752a9ad0baafc118d\", size \"87371201\" in 2.97107996s" Sep 8 23:46:43.466448 containerd[1439]: time="2025-09-08T23:46:43.466320893Z" level=info msg="PullImage \"registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8\" returns image reference \"sha256:5a42a519e0a8cf95c3c5f18f767c58c8c8b072aaea0a26e5e47a6f206c7df685\"" Sep 8 23:46:43.468727 containerd[1439]: time="2025-09-08T23:46:43.468688973Z" level=info msg="CreateContainer within sandbox \"ce5c5d4449cf458f5195570d21345cb5f22f19fb33a91a474d600b4b643bb34b\" for container &ContainerMetadata{Name:nfs-server-provisioner,Attempt:0,}" Sep 8 
23:46:43.481850 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1886219012.mount: Deactivated successfully. Sep 8 23:46:43.488373 containerd[1439]: time="2025-09-08T23:46:43.488153813Z" level=info msg="CreateContainer within sandbox \"ce5c5d4449cf458f5195570d21345cb5f22f19fb33a91a474d600b4b643bb34b\" for &ContainerMetadata{Name:nfs-server-provisioner,Attempt:0,} returns container id \"eb2eb834e5100955fd444f39fd4d27280946fbaaa634ff54a220c3573b8088e8\"" Sep 8 23:46:43.489095 containerd[1439]: time="2025-09-08T23:46:43.488854453Z" level=info msg="StartContainer for \"eb2eb834e5100955fd444f39fd4d27280946fbaaa634ff54a220c3573b8088e8\"" Sep 8 23:46:43.516830 kubelet[1755]: E0908 23:46:43.516658 1755 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Sep 8 23:46:43.520246 systemd[1]: Started cri-containerd-eb2eb834e5100955fd444f39fd4d27280946fbaaa634ff54a220c3573b8088e8.scope - libcontainer container eb2eb834e5100955fd444f39fd4d27280946fbaaa634ff54a220c3573b8088e8. Sep 8 23:46:43.572280 containerd[1439]: time="2025-09-08T23:46:43.572187973Z" level=info msg="StartContainer for \"eb2eb834e5100955fd444f39fd4d27280946fbaaa634ff54a220c3573b8088e8\" returns successfully" Sep 8 23:46:43.740560 kubelet[1755]: I0908 23:46:43.740105 1755 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="default/nfs-server-provisioner-0" podStartSLOduration=1.7670522929999999 podStartE2EDuration="4.740088373s" podCreationTimestamp="2025-09-08 23:46:39 +0000 UTC" firstStartedPulling="2025-09-08 23:46:40.494076613 +0000 UTC m=+19.687390041" lastFinishedPulling="2025-09-08 23:46:43.467112693 +0000 UTC m=+22.660426121" observedRunningTime="2025-09-08 23:46:43.740038413 +0000 UTC m=+22.933351841" watchObservedRunningTime="2025-09-08 23:46:43.740088373 +0000 UTC m=+22.933401801" Sep 8 23:46:44.517076 kubelet[1755]: E0908 23:46:44.517033 1755 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Sep 8 23:46:45.517445 kubelet[1755]: E0908 23:46:45.517390 1755 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Sep 8 23:46:46.518570 kubelet[1755]: E0908 23:46:46.518519 1755 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Sep 8 23:46:47.519337 kubelet[1755]: E0908 23:46:47.519289 1755 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Sep 8 23:46:48.519917 kubelet[1755]: E0908 23:46:48.519862 1755 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Sep 8 23:46:49.520355 kubelet[1755]: E0908 23:46:49.520300 1755 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Sep 8 23:46:50.520480 kubelet[1755]: E0908 23:46:50.520416 1755 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Sep 8 23:46:51.520537 kubelet[1755]: E0908 23:46:51.520504 1755 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Sep 8 23:46:52.521755 kubelet[1755]: E0908 23:46:52.521709 1755 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Sep 8 23:46:53.038025 systemd[1]: Created slice 
kubepods-besteffort-pod61767d45_05ba_4d32_9fd8_030be7c80285.slice - libcontainer container kubepods-besteffort-pod61767d45_05ba_4d32_9fd8_030be7c80285.slice. Sep 8 23:46:53.208332 kubelet[1755]: I0908 23:46:53.208271 1755 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cctk2\" (UniqueName: \"kubernetes.io/projected/61767d45-05ba-4d32-9fd8-030be7c80285-kube-api-access-cctk2\") pod \"test-pod-1\" (UID: \"61767d45-05ba-4d32-9fd8-030be7c80285\") " pod="default/test-pod-1" Sep 8 23:46:53.208332 kubelet[1755]: I0908 23:46:53.208326 1755 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-0a9c32c3-0d78-42b6-9806-880189c0e7aa\" (UniqueName: \"kubernetes.io/nfs/61767d45-05ba-4d32-9fd8-030be7c80285-pvc-0a9c32c3-0d78-42b6-9806-880189c0e7aa\") pod \"test-pod-1\" (UID: \"61767d45-05ba-4d32-9fd8-030be7c80285\") " pod="default/test-pod-1" Sep 8 23:46:53.337787 kernel: FS-Cache: Loaded Sep 8 23:46:53.364830 kernel: RPC: Registered named UNIX socket transport module. Sep 8 23:46:53.364941 kernel: RPC: Registered udp transport module. Sep 8 23:46:53.364963 kernel: RPC: Registered tcp transport module. Sep 8 23:46:53.364979 kernel: RPC: Registered tcp-with-tls transport module. Sep 8 23:46:53.364993 kernel: RPC: Registered tcp NFSv4.1 backchannel transport module. Sep 8 23:46:53.522116 kubelet[1755]: E0908 23:46:53.522066 1755 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Sep 8 23:46:53.537055 kernel: NFS: Registering the id_resolver key type Sep 8 23:46:53.537499 kernel: Key type id_resolver registered Sep 8 23:46:53.537552 kernel: Key type id_legacy registered Sep 8 23:46:53.609804 nfsidmap[3192]: nss_getpwnam: name 'root@nfs-server-provisioner.default.svc.cluster.local' does not map into domain 'localdomain' Sep 8 23:46:53.612080 nfsidmap[3193]: nss_name_to_gid: name 'root@nfs-server-provisioner.default.svc.cluster.local' does not map into domain 'localdomain' Sep 8 23:46:53.641434 containerd[1439]: time="2025-09-08T23:46:53.641252309Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:test-pod-1,Uid:61767d45-05ba-4d32-9fd8-030be7c80285,Namespace:default,Attempt:0,}" Sep 8 23:46:53.796996 systemd-networkd[1362]: cali5ec59c6bf6e: Link UP Sep 8 23:46:53.798531 systemd-networkd[1362]: cali5ec59c6bf6e: Gained carrier Sep 8 23:46:53.811233 containerd[1439]: 2025-09-08 23:46:53.708 [INFO][3195] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {10.0.0.53-k8s-test--pod--1-eth0 default 61767d45-05ba-4d32-9fd8-030be7c80285 1323 0 2025-09-08 23:46:40 +0000 UTC map[projectcalico.org/namespace:default projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:default] map[] [] [] []} {k8s 10.0.0.53 test-pod-1 eth0 default [] [] [kns.default ksa.default.default] cali5ec59c6bf6e [] [] }} ContainerID="8f7899970c789e11b3ab4ba5475e9d157f93857fcd7e8eef8125b26fce141153" Namespace="default" Pod="test-pod-1" WorkloadEndpoint="10.0.0.53-k8s-test--pod--1-" Sep 8 23:46:53.811233 containerd[1439]: 2025-09-08 23:46:53.708 [INFO][3195] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="8f7899970c789e11b3ab4ba5475e9d157f93857fcd7e8eef8125b26fce141153" Namespace="default" Pod="test-pod-1" WorkloadEndpoint="10.0.0.53-k8s-test--pod--1-eth0" Sep 8 23:46:53.811233 containerd[1439]: 2025-09-08 23:46:53.733 [INFO][3208] ipam/ipam_plugin.go 225: Calico CNI IPAM request 
count IPv4=1 IPv6=0 ContainerID="8f7899970c789e11b3ab4ba5475e9d157f93857fcd7e8eef8125b26fce141153" HandleID="k8s-pod-network.8f7899970c789e11b3ab4ba5475e9d157f93857fcd7e8eef8125b26fce141153" Workload="10.0.0.53-k8s-test--pod--1-eth0"
Sep 8 23:46:53.811233 containerd[1439]: 2025-09-08 23:46:53.734 [INFO][3208] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="8f7899970c789e11b3ab4ba5475e9d157f93857fcd7e8eef8125b26fce141153" HandleID="k8s-pod-network.8f7899970c789e11b3ab4ba5475e9d157f93857fcd7e8eef8125b26fce141153" Workload="10.0.0.53-k8s-test--pod--1-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x400024b720), Attrs:map[string]string{"namespace":"default", "node":"10.0.0.53", "pod":"test-pod-1", "timestamp":"2025-09-08 23:46:53.733738387 +0000 UTC"}, Hostname:"10.0.0.53", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"}
Sep 8 23:46:53.811233 containerd[1439]: 2025-09-08 23:46:53.734 [INFO][3208] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock.
Sep 8 23:46:53.811233 containerd[1439]: 2025-09-08 23:46:53.734 [INFO][3208] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock.
Sep 8 23:46:53.811233 containerd[1439]: 2025-09-08 23:46:53.734 [INFO][3208] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host '10.0.0.53'
Sep 8 23:46:53.811233 containerd[1439]: 2025-09-08 23:46:53.751 [INFO][3208] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.8f7899970c789e11b3ab4ba5475e9d157f93857fcd7e8eef8125b26fce141153" host="10.0.0.53"
Sep 8 23:46:53.811233 containerd[1439]: 2025-09-08 23:46:53.759 [INFO][3208] ipam/ipam.go 394: Looking up existing affinities for host host="10.0.0.53"
Sep 8 23:46:53.811233 containerd[1439]: 2025-09-08 23:46:53.768 [INFO][3208] ipam/ipam.go 511: Trying affinity for 192.168.100.192/26 host="10.0.0.53"
Sep 8 23:46:53.811233 containerd[1439]: 2025-09-08 23:46:53.771 [INFO][3208] ipam/ipam.go 158: Attempting to load block cidr=192.168.100.192/26 host="10.0.0.53"
Sep 8 23:46:53.811233 containerd[1439]: 2025-09-08 23:46:53.774 [INFO][3208] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.100.192/26 host="10.0.0.53"
Sep 8 23:46:53.811233 containerd[1439]: 2025-09-08 23:46:53.774 [INFO][3208] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.100.192/26 handle="k8s-pod-network.8f7899970c789e11b3ab4ba5475e9d157f93857fcd7e8eef8125b26fce141153" host="10.0.0.53"
Sep 8 23:46:53.811233 containerd[1439]: 2025-09-08 23:46:53.776 [INFO][3208] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.8f7899970c789e11b3ab4ba5475e9d157f93857fcd7e8eef8125b26fce141153
Sep 8 23:46:53.811233 containerd[1439]: 2025-09-08 23:46:53.782 [INFO][3208] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.100.192/26 handle="k8s-pod-network.8f7899970c789e11b3ab4ba5475e9d157f93857fcd7e8eef8125b26fce141153" host="10.0.0.53"
Sep 8 23:46:53.811233 containerd[1439]: 2025-09-08 23:46:53.793 [INFO][3208] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.100.196/26] block=192.168.100.192/26 handle="k8s-pod-network.8f7899970c789e11b3ab4ba5475e9d157f93857fcd7e8eef8125b26fce141153" host="10.0.0.53"
Sep 8 23:46:53.811233 containerd[1439]: 2025-09-08 23:46:53.793 [INFO][3208] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.100.196/26] handle="k8s-pod-network.8f7899970c789e11b3ab4ba5475e9d157f93857fcd7e8eef8125b26fce141153" host="10.0.0.53"
Sep 8 23:46:53.811233 containerd[1439]: 2025-09-08 23:46:53.793 [INFO][3208] ipam/ipam_plugin.go 374: Released host-wide IPAM lock.
Sep 8 23:46:53.811233 containerd[1439]: 2025-09-08 23:46:53.793 [INFO][3208] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.100.196/26] IPv6=[] ContainerID="8f7899970c789e11b3ab4ba5475e9d157f93857fcd7e8eef8125b26fce141153" HandleID="k8s-pod-network.8f7899970c789e11b3ab4ba5475e9d157f93857fcd7e8eef8125b26fce141153" Workload="10.0.0.53-k8s-test--pod--1-eth0"
Sep 8 23:46:53.811233 containerd[1439]: 2025-09-08 23:46:53.795 [INFO][3195] cni-plugin/k8s.go 418: Populated endpoint ContainerID="8f7899970c789e11b3ab4ba5475e9d157f93857fcd7e8eef8125b26fce141153" Namespace="default" Pod="test-pod-1" WorkloadEndpoint="10.0.0.53-k8s-test--pod--1-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"10.0.0.53-k8s-test--pod--1-eth0", GenerateName:"", Namespace:"default", SelfLink:"", UID:"61767d45-05ba-4d32-9fd8-030be7c80285", ResourceVersion:"1323", Generation:0, CreationTimestamp:time.Date(2025, time.September, 8, 23, 46, 40, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"projectcalico.org/namespace":"default", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"default"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"10.0.0.53", ContainerID:"", Pod:"test-pod-1", Endpoint:"eth0", ServiceAccountName:"default", IPNetworks:[]string{"192.168.100.196/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.default", "ksa.default.default"}, InterfaceName:"cali5ec59c6bf6e", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}}
Sep 8 23:46:53.811912 containerd[1439]: 2025-09-08 23:46:53.795 [INFO][3195] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.100.196/32] ContainerID="8f7899970c789e11b3ab4ba5475e9d157f93857fcd7e8eef8125b26fce141153" Namespace="default" Pod="test-pod-1" WorkloadEndpoint="10.0.0.53-k8s-test--pod--1-eth0"
Sep 8 23:46:53.811912 containerd[1439]: 2025-09-08 23:46:53.795 [INFO][3195] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali5ec59c6bf6e ContainerID="8f7899970c789e11b3ab4ba5475e9d157f93857fcd7e8eef8125b26fce141153" Namespace="default" Pod="test-pod-1" WorkloadEndpoint="10.0.0.53-k8s-test--pod--1-eth0"
Sep 8 23:46:53.811912 containerd[1439]: 2025-09-08 23:46:53.798 [INFO][3195] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="8f7899970c789e11b3ab4ba5475e9d157f93857fcd7e8eef8125b26fce141153" Namespace="default" Pod="test-pod-1" WorkloadEndpoint="10.0.0.53-k8s-test--pod--1-eth0"
Sep 8 23:46:53.811912 containerd[1439]: 2025-09-08 23:46:53.799 [INFO][3195] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="8f7899970c789e11b3ab4ba5475e9d157f93857fcd7e8eef8125b26fce141153" Namespace="default" Pod="test-pod-1" WorkloadEndpoint="10.0.0.53-k8s-test--pod--1-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"10.0.0.53-k8s-test--pod--1-eth0", GenerateName:"", Namespace:"default", SelfLink:"", UID:"61767d45-05ba-4d32-9fd8-030be7c80285", ResourceVersion:"1323", Generation:0, CreationTimestamp:time.Date(2025, time.September, 8, 23, 46, 40, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"projectcalico.org/namespace":"default", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"default"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"10.0.0.53", ContainerID:"8f7899970c789e11b3ab4ba5475e9d157f93857fcd7e8eef8125b26fce141153", Pod:"test-pod-1", Endpoint:"eth0", ServiceAccountName:"default", IPNetworks:[]string{"192.168.100.196/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.default", "ksa.default.default"}, InterfaceName:"cali5ec59c6bf6e", MAC:"0a:a0:26:49:67:66", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}}
Sep 8 23:46:53.811912 containerd[1439]: 2025-09-08 23:46:53.807 [INFO][3195] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="8f7899970c789e11b3ab4ba5475e9d157f93857fcd7e8eef8125b26fce141153" Namespace="default" Pod="test-pod-1" WorkloadEndpoint="10.0.0.53-k8s-test--pod--1-eth0"
Sep 8 23:46:53.845456 containerd[1439]: time="2025-09-08T23:46:53.845350591Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Sep 8 23:46:53.846574 containerd[1439]: time="2025-09-08T23:46:53.846364481Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Sep 8 23:46:53.846574 containerd[1439]: time="2025-09-08T23:46:53.846389600Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Sep 8 23:46:53.846574 containerd[1439]: time="2025-09-08T23:46:53.846492078Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Sep 8 23:46:53.867181 systemd[1]: Started cri-containerd-8f7899970c789e11b3ab4ba5475e9d157f93857fcd7e8eef8125b26fce141153.scope - libcontainer container 8f7899970c789e11b3ab4ba5475e9d157f93857fcd7e8eef8125b26fce141153.
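The ipam/ipam.go lines above show Calico confirming its affinity for the block 192.168.100.192/26 on node 10.0.0.53 and then claiming 192.168.100.196 from that block for test-pod-1. As a minimal sketch of the block arithmetic involved (standard-library Go only, not Calico's actual IPAM code), the snippet below checks that the claimed address really falls inside the affine /26 and how many addresses such a block holds:

```go
package main

import (
	"fmt"
	"net"
)

func main() {
	// Block and address taken from the ipam/ipam.go log lines above.
	_, block, err := net.ParseCIDR("192.168.100.192/26")
	if err != nil {
		panic(err)
	}
	claimed := net.ParseIP("192.168.100.196")

	// A /26 holds 64 addresses; Calico carves its IP pools into blocks like
	// this and pins ("affines") each block to a single node.
	ones, bits := block.Mask.Size()
	fmt.Printf("block %s holds %d addresses\n", block, 1<<(bits-ones))
	fmt.Printf("claimed %s is inside the block: %v\n", claimed, block.Contains(claimed))
}
```

Because the block is affine to this node, the allocation only needed the host-wide IPAM lock seen in the ipam_plugin.go lines, rather than contending with other nodes for individual addresses.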
Sep 8 23:46:53.878168 systemd-resolved[1365]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address
Sep 8 23:46:53.899740 containerd[1439]: time="2025-09-08T23:46:53.899547859Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:test-pod-1,Uid:61767d45-05ba-4d32-9fd8-030be7c80285,Namespace:default,Attempt:0,} returns sandbox id \"8f7899970c789e11b3ab4ba5475e9d157f93857fcd7e8eef8125b26fce141153\""
Sep 8 23:46:53.901245 containerd[1439]: time="2025-09-08T23:46:53.901114654Z" level=info msg="PullImage \"ghcr.io/flatcar/nginx:latest\""
Sep 8 23:46:54.141460 containerd[1439]: time="2025-09-08T23:46:54.141408900Z" level=info msg="ImageUpdate event name:\"ghcr.io/flatcar/nginx:latest\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 8 23:46:54.141876 containerd[1439]: time="2025-09-08T23:46:54.141838808Z" level=info msg="stop pulling image ghcr.io/flatcar/nginx:latest: active requests=0, bytes read=61"
Sep 8 23:46:54.145466 containerd[1439]: time="2025-09-08T23:46:54.145344433Z" level=info msg="Pulled image \"ghcr.io/flatcar/nginx:latest\" with image id \"sha256:9fddf21fd9c2634e7bf6e633e36b0fb227f6cd5fbe1b3334a16de3ab50f31e5e\", repo tag \"ghcr.io/flatcar/nginx:latest\", repo digest \"ghcr.io/flatcar/nginx@sha256:883ca821a91fc20bcde818eeee4e1ed55ef63a020d6198ecd5a03af5a4eac530\", size \"69986400\" in 244.19686ms"
Sep 8 23:46:54.145466 containerd[1439]: time="2025-09-08T23:46:54.145381872Z" level=info msg="PullImage \"ghcr.io/flatcar/nginx:latest\" returns image reference \"sha256:9fddf21fd9c2634e7bf6e633e36b0fb227f6cd5fbe1b3334a16de3ab50f31e5e\""
Sep 8 23:46:54.147617 containerd[1439]: time="2025-09-08T23:46:54.147584412Z" level=info msg="CreateContainer within sandbox \"8f7899970c789e11b3ab4ba5475e9d157f93857fcd7e8eef8125b26fce141153\" for container &ContainerMetadata{Name:test,Attempt:0,}"
Sep 8 23:46:54.159513 containerd[1439]: time="2025-09-08T23:46:54.159388811Z" level=info msg="CreateContainer within sandbox \"8f7899970c789e11b3ab4ba5475e9d157f93857fcd7e8eef8125b26fce141153\" for &ContainerMetadata{Name:test,Attempt:0,} returns container id \"e1f74c23ffe463fb5abeb2bc9784b3fe99a9c39c2c097d8eb38e4a8021d37384\""
Sep 8 23:46:54.160089 containerd[1439]: time="2025-09-08T23:46:54.160061033Z" level=info msg="StartContainer for \"e1f74c23ffe463fb5abeb2bc9784b3fe99a9c39c2c097d8eb38e4a8021d37384\""
Sep 8 23:46:54.187147 systemd[1]: Started cri-containerd-e1f74c23ffe463fb5abeb2bc9784b3fe99a9c39c2c097d8eb38e4a8021d37384.scope - libcontainer container e1f74c23ffe463fb5abeb2bc9784b3fe99a9c39c2c097d8eb38e4a8021d37384.
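The containerd lines above trace the CRI sequence behind this pod: RunPodSandbox returns the sandbox id 8f7899…, PullImage resolves ghcr.io/flatcar/nginx:latest, and CreateContainer/StartContainer produce the test container e1f74c…. As a hedged sketch (the socket path and the use of the k8s.io/cri-api Go client are assumptions, not something the log confirms), the same CRI endpoint can be queried directly to list the sandboxes it knows about:

```go
package main

import (
	"context"
	"fmt"
	"log"
	"time"

	"google.golang.org/grpc"
	"google.golang.org/grpc/credentials/insecure"
	runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1"
)

func main() {
	// Assumed socket path; containerd commonly exposes its CRI service here.
	conn, err := grpc.Dial("unix:///run/containerd/containerd.sock",
		grpc.WithTransportCredentials(insecure.NewCredentials()))
	if err != nil {
		log.Fatal(err)
	}
	defer conn.Close()

	ctx, cancel := context.WithTimeout(context.Background(), 5*time.Second)
	defer cancel()

	// List pod sandboxes; the test-pod-1 sandbox created above should appear.
	rt := runtimeapi.NewRuntimeServiceClient(conn)
	resp, err := rt.ListPodSandbox(ctx, &runtimeapi.ListPodSandboxRequest{})
	if err != nil {
		log.Fatal(err)
	}
	for _, sb := range resp.Items {
		fmt.Printf("%s  %s/%s  %s\n", sb.Id, sb.Metadata.Namespace, sb.Metadata.Name, sb.State)
	}
}
```

From the command line, crictl pods and crictl ps give the same view without any code.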
Sep 8 23:46:54.217736 containerd[1439]: time="2025-09-08T23:46:54.217674067Z" level=info msg="StartContainer for \"e1f74c23ffe463fb5abeb2bc9784b3fe99a9c39c2c097d8eb38e4a8021d37384\" returns successfully"
Sep 8 23:46:54.522983 kubelet[1755]: E0908 23:46:54.522842 1755 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Sep 8 23:46:55.336104 systemd-networkd[1362]: cali5ec59c6bf6e: Gained IPv6LL
Sep 8 23:46:55.523048 kubelet[1755]: E0908 23:46:55.523006 1755 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Sep 8 23:46:56.523212 kubelet[1755]: E0908 23:46:56.523142 1755 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Sep 8 23:46:57.524062 kubelet[1755]: E0908 23:46:57.524020 1755 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Sep 8 23:46:57.578062 update_engine[1429]: I20250908 23:46:57.577981 1429 update_attempter.cc:509] Updating boot flags... Sep 8 23:46:57.598958 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 37 scanned by (udev-worker) (3339)
Sep 8 23:46:57.632237 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 37 scanned by (udev-worker) (3343)
Sep 8 23:46:58.525226 kubelet[1755]: E0908 23:46:58.525171 1755 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
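The repeated kubelet file_linux.go:61 messages come from the static-pod watcher: the configured manifest path /etc/kubernetes/manifests does not exist, so each check logs "path does not exist, ignoring" and moves on, which is why the same line keeps repeating above. On a node that runs no static pods this is harmless; creating the directory, even empty, should silence it. A minimal sketch of that fix in Go (equivalent to mkdir -p, assuming you want to keep the conventional path rather than drop staticPodPath from the kubelet configuration):

```go
package main

import (
	"log"
	"os"
)

func main() {
	// Path taken from the kubelet messages above; creating it (even empty)
	// stops the "Unable to read config path" spam. Run as root on the node.
	const manifestDir = "/etc/kubernetes/manifests"
	if err := os.MkdirAll(manifestDir, 0o755); err != nil {
		log.Fatal(err)
	}
	log.Printf("created %s", manifestDir)
}
```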