May 13 23:23:16.902097 kernel: Booting Linux on physical CPU 0x0000000000 [0x413fd0c1] May 13 23:23:16.902118 kernel: Linux version 6.6.89-flatcar (build@pony-truck.infra.kinvolk.io) (aarch64-cros-linux-gnu-gcc (Gentoo Hardened 14.2.1_p20241221 p7) 14.2.1 20241221, GNU ld (Gentoo 2.43 p3) 2.43.1) #1 SMP PREEMPT Tue May 13 22:07:09 -00 2025 May 13 23:23:16.902128 kernel: KASLR enabled May 13 23:23:16.902145 kernel: efi: EFI v2.7 by EDK II May 13 23:23:16.902151 kernel: efi: SMBIOS 3.0=0xdced0000 MEMATTR=0xdbbae018 ACPI 2.0=0xd9b43018 RNG=0xd9b43a18 MEMRESERVE=0xd9b40218 May 13 23:23:16.902172 kernel: random: crng init done May 13 23:23:16.902179 kernel: secureboot: Secure boot disabled May 13 23:23:16.902185 kernel: ACPI: Early table checksum verification disabled May 13 23:23:16.902191 kernel: ACPI: RSDP 0x00000000D9B43018 000024 (v02 BOCHS ) May 13 23:23:16.902199 kernel: ACPI: XSDT 0x00000000D9B43F18 000064 (v01 BOCHS BXPC 00000001 01000013) May 13 23:23:16.902205 kernel: ACPI: FACP 0x00000000D9B43B18 000114 (v06 BOCHS BXPC 00000001 BXPC 00000001) May 13 23:23:16.902211 kernel: ACPI: DSDT 0x00000000D9B41018 0014A2 (v02 BOCHS BXPC 00000001 BXPC 00000001) May 13 23:23:16.902217 kernel: ACPI: APIC 0x00000000D9B43C98 0001A8 (v04 BOCHS BXPC 00000001 BXPC 00000001) May 13 23:23:16.902223 kernel: ACPI: PPTT 0x00000000D9B43098 00009C (v02 BOCHS BXPC 00000001 BXPC 00000001) May 13 23:23:16.902230 kernel: ACPI: GTDT 0x00000000D9B43818 000060 (v02 BOCHS BXPC 00000001 BXPC 00000001) May 13 23:23:16.902238 kernel: ACPI: MCFG 0x00000000D9B43A98 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001) May 13 23:23:16.902244 kernel: ACPI: SPCR 0x00000000D9B43918 000050 (v02 BOCHS BXPC 00000001 BXPC 00000001) May 13 23:23:16.902250 kernel: ACPI: DBG2 0x00000000D9B43998 000057 (v00 BOCHS BXPC 00000001 BXPC 00000001) May 13 23:23:16.902256 kernel: ACPI: IORT 0x00000000D9B43198 000080 (v03 BOCHS BXPC 00000001 BXPC 00000001) May 13 23:23:16.902262 kernel: ACPI: SPCR: console: pl011,mmio,0x9000000,9600 May 13 23:23:16.902269 kernel: NUMA: Failed to initialise from firmware May 13 23:23:16.902275 kernel: NUMA: Faking a node at [mem 0x0000000040000000-0x00000000dcffffff] May 13 23:23:16.902281 kernel: NUMA: NODE_DATA [mem 0xdc958800-0xdc95dfff] May 13 23:23:16.902287 kernel: Zone ranges: May 13 23:23:16.902293 kernel: DMA [mem 0x0000000040000000-0x00000000dcffffff] May 13 23:23:16.902301 kernel: DMA32 empty May 13 23:23:16.902307 kernel: Normal empty May 13 23:23:16.902313 kernel: Movable zone start for each node May 13 23:23:16.902319 kernel: Early memory node ranges May 13 23:23:16.902325 kernel: node 0: [mem 0x0000000040000000-0x00000000d967ffff] May 13 23:23:16.902331 kernel: node 0: [mem 0x00000000d9680000-0x00000000d968ffff] May 13 23:23:16.902338 kernel: node 0: [mem 0x00000000d9690000-0x00000000d976ffff] May 13 23:23:16.902344 kernel: node 0: [mem 0x00000000d9770000-0x00000000d9b3ffff] May 13 23:23:16.902350 kernel: node 0: [mem 0x00000000d9b40000-0x00000000dce1ffff] May 13 23:23:16.902356 kernel: node 0: [mem 0x00000000dce20000-0x00000000dceaffff] May 13 23:23:16.902362 kernel: node 0: [mem 0x00000000dceb0000-0x00000000dcebffff] May 13 23:23:16.902368 kernel: node 0: [mem 0x00000000dcec0000-0x00000000dcfdffff] May 13 23:23:16.902376 kernel: node 0: [mem 0x00000000dcfe0000-0x00000000dcffffff] May 13 23:23:16.902382 kernel: Initmem setup node 0 [mem 0x0000000040000000-0x00000000dcffffff] May 13 23:23:16.902389 kernel: On node 0, zone DMA: 12288 pages in unavailable ranges May 13 23:23:16.902398 kernel: psci: 
probing for conduit method from ACPI. May 13 23:23:16.902404 kernel: psci: PSCIv1.1 detected in firmware. May 13 23:23:16.902411 kernel: psci: Using standard PSCI v0.2 function IDs May 13 23:23:16.902419 kernel: psci: Trusted OS migration not required May 13 23:23:16.902425 kernel: psci: SMC Calling Convention v1.1 May 13 23:23:16.902432 kernel: smccc: KVM: hypervisor services detected (0x00000000 0x00000000 0x00000000 0x00000003) May 13 23:23:16.902439 kernel: percpu: Embedded 31 pages/cpu s86632 r8192 d32152 u126976 May 13 23:23:16.902445 kernel: pcpu-alloc: s86632 r8192 d32152 u126976 alloc=31*4096 May 13 23:23:16.902452 kernel: pcpu-alloc: [0] 0 [0] 1 [0] 2 [0] 3 May 13 23:23:16.902458 kernel: Detected PIPT I-cache on CPU0 May 13 23:23:16.902465 kernel: CPU features: detected: GIC system register CPU interface May 13 23:23:16.902471 kernel: CPU features: detected: Hardware dirty bit management May 13 23:23:16.902478 kernel: CPU features: detected: Spectre-v4 May 13 23:23:16.902486 kernel: CPU features: detected: Spectre-BHB May 13 23:23:16.902492 kernel: CPU features: kernel page table isolation forced ON by KASLR May 13 23:23:16.902499 kernel: CPU features: detected: Kernel page table isolation (KPTI) May 13 23:23:16.902505 kernel: CPU features: detected: ARM erratum 1418040 May 13 23:23:16.902512 kernel: CPU features: detected: SSBS not fully self-synchronizing May 13 23:23:16.902518 kernel: alternatives: applying boot alternatives May 13 23:23:16.902526 kernel: Kernel command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected acpi=force verity.usrhash=2ebbcf70ac37c458a177d0106bebb5016b2973cc84d1c0207dc60f43e2803902 May 13 23:23:16.902533 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space. May 13 23:23:16.902539 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear) May 13 23:23:16.902546 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear) May 13 23:23:16.902553 kernel: Fallback order for Node 0: 0 May 13 23:23:16.902560 kernel: Built 1 zonelists, mobility grouping on. Total pages: 633024 May 13 23:23:16.902567 kernel: Policy zone: DMA May 13 23:23:16.902574 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off May 13 23:23:16.902580 kernel: software IO TLB: area num 4. May 13 23:23:16.902586 kernel: software IO TLB: mapped [mem 0x00000000d2e00000-0x00000000d6e00000] (64MB) May 13 23:23:16.902593 kernel: Memory: 2387476K/2572288K available (10368K kernel code, 2186K rwdata, 8100K rodata, 38336K init, 897K bss, 184812K reserved, 0K cma-reserved) May 13 23:23:16.902600 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1 May 13 23:23:16.902607 kernel: rcu: Preemptible hierarchical RCU implementation. May 13 23:23:16.902614 kernel: rcu: RCU event tracing is enabled. May 13 23:23:16.902621 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=4. May 13 23:23:16.902627 kernel: Trampoline variant of Tasks RCU enabled. May 13 23:23:16.902634 kernel: Tracing variant of Tasks RCU enabled. May 13 23:23:16.902642 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies. 
May 13 23:23:16.902649 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4 May 13 23:23:16.902662 kernel: NR_IRQS: 64, nr_irqs: 64, preallocated irqs: 0 May 13 23:23:16.902669 kernel: GICv3: 256 SPIs implemented May 13 23:23:16.902675 kernel: GICv3: 0 Extended SPIs implemented May 13 23:23:16.902682 kernel: Root IRQ handler: gic_handle_irq May 13 23:23:16.902688 kernel: GICv3: GICv3 features: 16 PPIs, DirectLPI May 13 23:23:16.902694 kernel: GICv3: CPU0: found redistributor 0 region 0:0x00000000080a0000 May 13 23:23:16.902701 kernel: ITS [mem 0x08080000-0x0809ffff] May 13 23:23:16.902708 kernel: ITS@0x0000000008080000: allocated 8192 Devices @400c0000 (indirect, esz 8, psz 64K, shr 1) May 13 23:23:16.902714 kernel: ITS@0x0000000008080000: allocated 8192 Interrupt Collections @400d0000 (flat, esz 8, psz 64K, shr 1) May 13 23:23:16.902723 kernel: GICv3: using LPI property table @0x00000000400f0000 May 13 23:23:16.902731 kernel: GICv3: CPU0: using allocated LPI pending table @0x0000000040100000 May 13 23:23:16.902738 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention. May 13 23:23:16.902744 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 May 13 23:23:16.902751 kernel: arch_timer: cp15 timer(s) running at 25.00MHz (virt). May 13 23:23:16.902758 kernel: clocksource: arch_sys_counter: mask: 0xffffffffffffff max_cycles: 0x5c40939b5, max_idle_ns: 440795202646 ns May 13 23:23:16.902765 kernel: sched_clock: 56 bits at 25MHz, resolution 40ns, wraps every 4398046511100ns May 13 23:23:16.902771 kernel: arm-pv: using stolen time PV May 13 23:23:16.902778 kernel: Console: colour dummy device 80x25 May 13 23:23:16.902785 kernel: ACPI: Core revision 20230628 May 13 23:23:16.902792 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 50.00 BogoMIPS (lpj=25000) May 13 23:23:16.902800 kernel: pid_max: default: 32768 minimum: 301 May 13 23:23:16.902807 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity May 13 23:23:16.902814 kernel: landlock: Up and running. May 13 23:23:16.902820 kernel: SELinux: Initializing. May 13 23:23:16.902827 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) May 13 23:23:16.902834 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) May 13 23:23:16.902840 kernel: ACPI PPTT: PPTT table found, but unable to locate core 3 (3) May 13 23:23:16.902848 kernel: RCU Tasks: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4. May 13 23:23:16.902854 kernel: RCU Tasks Trace: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4. May 13 23:23:16.902863 kernel: rcu: Hierarchical SRCU implementation. May 13 23:23:16.902870 kernel: rcu: Max phase no-delay instances is 400. May 13 23:23:16.902876 kernel: Platform MSI: ITS@0x8080000 domain created May 13 23:23:16.902883 kernel: PCI/MSI: ITS@0x8080000 domain created May 13 23:23:16.902890 kernel: Remapping and enabling EFI services. May 13 23:23:16.902896 kernel: smp: Bringing up secondary CPUs ... 
May 13 23:23:16.902903 kernel: Detected PIPT I-cache on CPU1 May 13 23:23:16.902909 kernel: GICv3: CPU1: found redistributor 1 region 0:0x00000000080c0000 May 13 23:23:16.902916 kernel: GICv3: CPU1: using allocated LPI pending table @0x0000000040110000 May 13 23:23:16.902925 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 May 13 23:23:16.902932 kernel: CPU1: Booted secondary processor 0x0000000001 [0x413fd0c1] May 13 23:23:16.902944 kernel: Detected PIPT I-cache on CPU2 May 13 23:23:16.902953 kernel: GICv3: CPU2: found redistributor 2 region 0:0x00000000080e0000 May 13 23:23:16.902960 kernel: GICv3: CPU2: using allocated LPI pending table @0x0000000040120000 May 13 23:23:16.902968 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 May 13 23:23:16.902974 kernel: CPU2: Booted secondary processor 0x0000000002 [0x413fd0c1] May 13 23:23:16.902994 kernel: Detected PIPT I-cache on CPU3 May 13 23:23:16.903001 kernel: GICv3: CPU3: found redistributor 3 region 0:0x0000000008100000 May 13 23:23:16.903008 kernel: GICv3: CPU3: using allocated LPI pending table @0x0000000040130000 May 13 23:23:16.903018 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 May 13 23:23:16.903025 kernel: CPU3: Booted secondary processor 0x0000000003 [0x413fd0c1] May 13 23:23:16.903032 kernel: smp: Brought up 1 node, 4 CPUs May 13 23:23:16.903040 kernel: SMP: Total of 4 processors activated. May 13 23:23:16.903047 kernel: CPU features: detected: 32-bit EL0 Support May 13 23:23:16.903055 kernel: CPU features: detected: Data cache clean to the PoU not required for I/D coherence May 13 23:23:16.903062 kernel: CPU features: detected: Common not Private translations May 13 23:23:16.903071 kernel: CPU features: detected: CRC32 instructions May 13 23:23:16.903078 kernel: CPU features: detected: Enhanced Virtualization Traps May 13 23:23:16.903085 kernel: CPU features: detected: RCpc load-acquire (LDAPR) May 13 23:23:16.903092 kernel: CPU features: detected: LSE atomic instructions May 13 23:23:16.903099 kernel: CPU features: detected: Privileged Access Never May 13 23:23:16.903106 kernel: CPU features: detected: RAS Extension Support May 13 23:23:16.903113 kernel: CPU features: detected: Speculative Store Bypassing Safe (SSBS) May 13 23:23:16.903120 kernel: CPU: All CPU(s) started at EL1 May 13 23:23:16.903127 kernel: alternatives: applying system-wide alternatives May 13 23:23:16.903194 kernel: devtmpfs: initialized May 13 23:23:16.903203 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns May 13 23:23:16.903210 kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear) May 13 23:23:16.903217 kernel: pinctrl core: initialized pinctrl subsystem May 13 23:23:16.903224 kernel: SMBIOS 3.0.0 present. 
May 13 23:23:16.903231 kernel: DMI: QEMU KVM Virtual Machine, BIOS unknown 02/02/2022 May 13 23:23:16.903239 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family May 13 23:23:16.903246 kernel: DMA: preallocated 512 KiB GFP_KERNEL pool for atomic allocations May 13 23:23:16.903253 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations May 13 23:23:16.903261 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations May 13 23:23:16.903268 kernel: audit: initializing netlink subsys (disabled) May 13 23:23:16.903276 kernel: audit: type=2000 audit(0.018:1): state=initialized audit_enabled=0 res=1 May 13 23:23:16.903283 kernel: thermal_sys: Registered thermal governor 'step_wise' May 13 23:23:16.903290 kernel: cpuidle: using governor menu May 13 23:23:16.903297 kernel: hw-breakpoint: found 6 breakpoint and 4 watchpoint registers. May 13 23:23:16.903304 kernel: ASID allocator initialised with 32768 entries May 13 23:23:16.903311 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5 May 13 23:23:16.903318 kernel: Serial: AMBA PL011 UART driver May 13 23:23:16.903326 kernel: Modules: 2G module region forced by RANDOMIZE_MODULE_REGION_FULL May 13 23:23:16.903334 kernel: Modules: 0 pages in range for non-PLT usage May 13 23:23:16.903341 kernel: Modules: 509264 pages in range for PLT usage May 13 23:23:16.903348 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages May 13 23:23:16.903355 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 1.00 GiB page May 13 23:23:16.903362 kernel: HugeTLB: registered 32.0 MiB page size, pre-allocated 0 pages May 13 23:23:16.903369 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 32.0 MiB page May 13 23:23:16.903377 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages May 13 23:23:16.903384 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 2.00 MiB page May 13 23:23:16.903392 kernel: HugeTLB: registered 64.0 KiB page size, pre-allocated 0 pages May 13 23:23:16.903400 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 64.0 KiB page May 13 23:23:16.903406 kernel: ACPI: Added _OSI(Module Device) May 13 23:23:16.903413 kernel: ACPI: Added _OSI(Processor Device) May 13 23:23:16.903420 kernel: ACPI: Added _OSI(3.0 _SCP Extensions) May 13 23:23:16.903428 kernel: ACPI: Added _OSI(Processor Aggregator Device) May 13 23:23:16.903435 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded May 13 23:23:16.903442 kernel: ACPI: Interpreter enabled May 13 23:23:16.903449 kernel: ACPI: Using GIC for interrupt routing May 13 23:23:16.903456 kernel: ACPI: MCFG table detected, 1 entries May 13 23:23:16.903465 kernel: ARMH0011:00: ttyAMA0 at MMIO 0x9000000 (irq = 12, base_baud = 0) is a SBSA May 13 23:23:16.903472 kernel: printk: console [ttyAMA0] enabled May 13 23:23:16.903479 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff]) May 13 23:23:16.903625 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3] May 13 23:23:16.903710 kernel: acpi PNP0A08:00: _OSC: platform does not support [LTR] May 13 23:23:16.903779 kernel: acpi PNP0A08:00: _OSC: OS now controls [PCIeHotplug PME AER PCIeCapability] May 13 23:23:16.903842 kernel: acpi PNP0A08:00: ECAM area [mem 0x4010000000-0x401fffffff] reserved by PNP0C02:00 May 13 23:23:16.903912 kernel: acpi PNP0A08:00: ECAM at [mem 0x4010000000-0x401fffffff] for [bus 00-ff] May 13 23:23:16.903922 kernel: ACPI: Remapped I/O 0x000000003eff0000 to [io 0x0000-0xffff window] May 13 23:23:16.903929 
kernel: PCI host bridge to bus 0000:00 May 13 23:23:16.903999 kernel: pci_bus 0000:00: root bus resource [mem 0x10000000-0x3efeffff window] May 13 23:23:16.904058 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0xffff window] May 13 23:23:16.904116 kernel: pci_bus 0000:00: root bus resource [mem 0x8000000000-0xffffffffff window] May 13 23:23:16.904186 kernel: pci_bus 0000:00: root bus resource [bus 00-ff] May 13 23:23:16.904274 kernel: pci 0000:00:00.0: [1b36:0008] type 00 class 0x060000 May 13 23:23:16.904352 kernel: pci 0000:00:01.0: [1af4:1005] type 00 class 0x00ff00 May 13 23:23:16.904419 kernel: pci 0000:00:01.0: reg 0x10: [io 0x0000-0x001f] May 13 23:23:16.904485 kernel: pci 0000:00:01.0: reg 0x14: [mem 0x10000000-0x10000fff] May 13 23:23:16.904550 kernel: pci 0000:00:01.0: reg 0x20: [mem 0x8000000000-0x8000003fff 64bit pref] May 13 23:23:16.904613 kernel: pci 0000:00:01.0: BAR 4: assigned [mem 0x8000000000-0x8000003fff 64bit pref] May 13 23:23:16.904704 kernel: pci 0000:00:01.0: BAR 1: assigned [mem 0x10000000-0x10000fff] May 13 23:23:16.904780 kernel: pci 0000:00:01.0: BAR 0: assigned [io 0x1000-0x101f] May 13 23:23:16.904853 kernel: pci_bus 0000:00: resource 4 [mem 0x10000000-0x3efeffff window] May 13 23:23:16.904914 kernel: pci_bus 0000:00: resource 5 [io 0x0000-0xffff window] May 13 23:23:16.904971 kernel: pci_bus 0000:00: resource 6 [mem 0x8000000000-0xffffffffff window] May 13 23:23:16.904980 kernel: ACPI: PCI: Interrupt link GSI0 configured for IRQ 35 May 13 23:23:16.904988 kernel: ACPI: PCI: Interrupt link GSI1 configured for IRQ 36 May 13 23:23:16.904995 kernel: ACPI: PCI: Interrupt link GSI2 configured for IRQ 37 May 13 23:23:16.905005 kernel: ACPI: PCI: Interrupt link GSI3 configured for IRQ 38 May 13 23:23:16.905012 kernel: iommu: Default domain type: Translated May 13 23:23:16.905019 kernel: iommu: DMA domain TLB invalidation policy: strict mode May 13 23:23:16.905026 kernel: efivars: Registered efivars operations May 13 23:23:16.905033 kernel: vgaarb: loaded May 13 23:23:16.905040 kernel: clocksource: Switched to clocksource arch_sys_counter May 13 23:23:16.905047 kernel: VFS: Disk quotas dquot_6.6.0 May 13 23:23:16.905054 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes) May 13 23:23:16.905061 kernel: pnp: PnP ACPI init May 13 23:23:16.905178 kernel: system 00:00: [mem 0x4010000000-0x401fffffff window] could not be reserved May 13 23:23:16.905191 kernel: pnp: PnP ACPI: found 1 devices May 13 23:23:16.905199 kernel: NET: Registered PF_INET protocol family May 13 23:23:16.905206 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear) May 13 23:23:16.905214 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear) May 13 23:23:16.905221 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear) May 13 23:23:16.905228 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear) May 13 23:23:16.905235 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear) May 13 23:23:16.905246 kernel: TCP: Hash tables configured (established 32768 bind 32768) May 13 23:23:16.905253 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear) May 13 23:23:16.905260 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear) May 13 23:23:16.905267 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family May 13 23:23:16.905274 kernel: PCI: CLS 0 bytes, default 64 May 13 23:23:16.905281 kernel: kvm [1]: HYP mode not available 
May 13 23:23:16.905288 kernel: Initialise system trusted keyrings May 13 23:23:16.905296 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0 May 13 23:23:16.905303 kernel: Key type asymmetric registered May 13 23:23:16.905312 kernel: Asymmetric key parser 'x509' registered May 13 23:23:16.905319 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 250) May 13 23:23:16.905326 kernel: io scheduler mq-deadline registered May 13 23:23:16.905333 kernel: io scheduler kyber registered May 13 23:23:16.905340 kernel: io scheduler bfq registered May 13 23:23:16.905348 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXSYBUS:00/PNP0C0C:00/input/input0 May 13 23:23:16.905355 kernel: ACPI: button: Power Button [PWRB] May 13 23:23:16.905362 kernel: ACPI: \_SB_.PCI0.GSI1: Enabled at IRQ 36 May 13 23:23:16.905432 kernel: virtio-pci 0000:00:01.0: enabling device (0005 -> 0007) May 13 23:23:16.905444 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled May 13 23:23:16.905452 kernel: thunder_xcv, ver 1.0 May 13 23:23:16.905459 kernel: thunder_bgx, ver 1.0 May 13 23:23:16.905466 kernel: nicpf, ver 1.0 May 13 23:23:16.905473 kernel: nicvf, ver 1.0 May 13 23:23:16.905545 kernel: rtc-efi rtc-efi.0: registered as rtc0 May 13 23:23:16.905606 kernel: rtc-efi rtc-efi.0: setting system clock to 2025-05-13T23:23:16 UTC (1747178596) May 13 23:23:16.905615 kernel: hid: raw HID events driver (C) Jiri Kosina May 13 23:23:16.905625 kernel: hw perfevents: enabled with armv8_pmuv3_0 PMU driver, 7 counters available May 13 23:23:16.905632 kernel: watchdog: Delayed init of the lockup detector failed: -19 May 13 23:23:16.905639 kernel: watchdog: Hard watchdog permanently disabled May 13 23:23:16.905647 kernel: NET: Registered PF_INET6 protocol family May 13 23:23:16.905661 kernel: Segment Routing with IPv6 May 13 23:23:16.905669 kernel: In-situ OAM (IOAM) with IPv6 May 13 23:23:16.905676 kernel: NET: Registered PF_PACKET protocol family May 13 23:23:16.905683 kernel: Key type dns_resolver registered May 13 23:23:16.905690 kernel: registered taskstats version 1 May 13 23:23:16.905697 kernel: Loading compiled-in X.509 certificates May 13 23:23:16.905707 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.89-flatcar: a696ab665a89a9a0c31af520821335479551e0bb' May 13 23:23:16.905714 kernel: Key type .fscrypt registered May 13 23:23:16.905721 kernel: Key type fscrypt-provisioning registered May 13 23:23:16.905728 kernel: ima: No TPM chip found, activating TPM-bypass! May 13 23:23:16.905735 kernel: ima: Allocated hash algorithm: sha1 May 13 23:23:16.905743 kernel: ima: No architecture policies found May 13 23:23:16.905750 kernel: alg: No test for fips(ansi_cprng) (fips_ansi_cprng) May 13 23:23:16.905757 kernel: clk: Disabling unused clocks May 13 23:23:16.905766 kernel: Freeing unused kernel memory: 38336K May 13 23:23:16.905773 kernel: Run /init as init process May 13 23:23:16.905781 kernel: with arguments: May 13 23:23:16.905787 kernel: /init May 13 23:23:16.905794 kernel: with environment: May 13 23:23:16.905801 kernel: HOME=/ May 13 23:23:16.905808 kernel: TERM=linux May 13 23:23:16.905816 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a May 13 23:23:16.905824 systemd[1]: Successfully made /usr/ read-only. 
May 13 23:23:16.905836 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE) May 13 23:23:16.905845 systemd[1]: Detected virtualization kvm. May 13 23:23:16.905852 systemd[1]: Detected architecture arm64. May 13 23:23:16.905859 systemd[1]: Running in initrd. May 13 23:23:16.905867 systemd[1]: No hostname configured, using default hostname. May 13 23:23:16.905875 systemd[1]: Hostname set to . May 13 23:23:16.905882 systemd[1]: Initializing machine ID from VM UUID. May 13 23:23:16.905891 systemd[1]: Queued start job for default target initrd.target. May 13 23:23:16.905899 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. May 13 23:23:16.905907 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. May 13 23:23:16.905915 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM... May 13 23:23:16.905923 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... May 13 23:23:16.905931 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT... May 13 23:23:16.905939 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A... May 13 23:23:16.905950 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132... May 13 23:23:16.905958 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr... May 13 23:23:16.905965 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). May 13 23:23:16.905973 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. May 13 23:23:16.905981 systemd[1]: Reached target paths.target - Path Units. May 13 23:23:16.905988 systemd[1]: Reached target slices.target - Slice Units. May 13 23:23:16.905996 systemd[1]: Reached target swap.target - Swaps. May 13 23:23:16.906004 systemd[1]: Reached target timers.target - Timer Units. May 13 23:23:16.906014 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket. May 13 23:23:16.906021 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket. May 13 23:23:16.906029 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). May 13 23:23:16.906037 systemd[1]: Listening on systemd-journald.socket - Journal Sockets. May 13 23:23:16.906045 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. May 13 23:23:16.906052 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. May 13 23:23:16.906060 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. May 13 23:23:16.906068 systemd[1]: Reached target sockets.target - Socket Units. May 13 23:23:16.906075 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup... May 13 23:23:16.906085 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... May 13 23:23:16.906093 systemd[1]: Finished network-cleanup.service - Network Cleanup. May 13 23:23:16.906101 systemd[1]: Starting systemd-fsck-usr.service... 
May 13 23:23:16.906108 systemd[1]: Starting systemd-journald.service - Journal Service... May 13 23:23:16.906116 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... May 13 23:23:16.906124 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... May 13 23:23:16.906151 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup. May 13 23:23:16.906160 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. May 13 23:23:16.906171 systemd[1]: Finished systemd-fsck-usr.service. May 13 23:23:16.906179 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... May 13 23:23:16.906187 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. May 13 23:23:16.906216 systemd-journald[239]: Collecting audit messages is disabled. May 13 23:23:16.906237 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... May 13 23:23:16.906246 systemd-journald[239]: Journal started May 13 23:23:16.906264 systemd-journald[239]: Runtime Journal (/run/log/journal/b555ececbafa4412b2fa3ba124fb5685) is 5.9M, max 47.3M, 41.4M free. May 13 23:23:16.898568 systemd-modules-load[241]: Inserted module 'overlay' May 13 23:23:16.908698 systemd[1]: Started systemd-journald.service - Journal Service. May 13 23:23:16.910339 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. May 13 23:23:16.915163 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. May 13 23:23:16.916321 systemd-modules-load[241]: Inserted module 'br_netfilter' May 13 23:23:16.917160 kernel: Bridge firewalling registered May 13 23:23:16.926558 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. May 13 23:23:16.929322 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... May 13 23:23:16.930789 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... May 13 23:23:16.933367 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... May 13 23:23:16.941450 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. May 13 23:23:16.942634 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. May 13 23:23:16.946282 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. May 13 23:23:16.948411 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. May 13 23:23:16.966332 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... May 13 23:23:16.968513 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... May 13 23:23:16.978528 dracut-cmdline[278]: dracut-dracut-053 May 13 23:23:16.981258 dracut-cmdline[278]: Using kernel command line parameters: rd.driver.pre=btrfs BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected acpi=force verity.usrhash=2ebbcf70ac37c458a177d0106bebb5016b2973cc84d1c0207dc60f43e2803902 May 13 23:23:17.004986 systemd-resolved[282]: Positive Trust Anchors: May 13 23:23:17.005009 systemd-resolved[282]: . 
IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d May 13 23:23:17.005040 systemd-resolved[282]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test May 13 23:23:17.009892 systemd-resolved[282]: Defaulting to hostname 'linux'. May 13 23:23:17.010944 systemd[1]: Started systemd-resolved.service - Network Name Resolution. May 13 23:23:17.013458 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. May 13 23:23:17.059171 kernel: SCSI subsystem initialized May 13 23:23:17.064149 kernel: Loading iSCSI transport class v2.0-870. May 13 23:23:17.071149 kernel: iscsi: registered transport (tcp) May 13 23:23:17.085186 kernel: iscsi: registered transport (qla4xxx) May 13 23:23:17.085253 kernel: QLogic iSCSI HBA Driver May 13 23:23:17.133317 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. May 13 23:23:17.142355 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... May 13 23:23:17.162837 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. May 13 23:23:17.162907 kernel: device-mapper: uevent: version 1.0.3 May 13 23:23:17.164144 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com May 13 23:23:17.211167 kernel: raid6: neonx8 gen() 15769 MB/s May 13 23:23:17.228148 kernel: raid6: neonx4 gen() 15799 MB/s May 13 23:23:17.245147 kernel: raid6: neonx2 gen() 13286 MB/s May 13 23:23:17.262148 kernel: raid6: neonx1 gen() 10541 MB/s May 13 23:23:17.279150 kernel: raid6: int64x8 gen() 6791 MB/s May 13 23:23:17.296147 kernel: raid6: int64x4 gen() 7346 MB/s May 13 23:23:17.313147 kernel: raid6: int64x2 gen() 6105 MB/s May 13 23:23:17.330150 kernel: raid6: int64x1 gen() 5052 MB/s May 13 23:23:17.330165 kernel: raid6: using algorithm neonx4 gen() 15799 MB/s May 13 23:23:17.347153 kernel: raid6: .... xor() 12404 MB/s, rmw enabled May 13 23:23:17.347167 kernel: raid6: using neon recovery algorithm May 13 23:23:17.352482 kernel: xor: measuring software checksum speed May 13 23:23:17.352506 kernel: 8regs : 21630 MB/sec May 13 23:23:17.352516 kernel: 32regs : 21693 MB/sec May 13 23:23:17.353451 kernel: arm64_neon : 27936 MB/sec May 13 23:23:17.353465 kernel: xor: using function: arm64_neon (27936 MB/sec) May 13 23:23:17.405166 kernel: Btrfs loaded, zoned=no, fsverity=no May 13 23:23:17.415998 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. May 13 23:23:17.430321 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... May 13 23:23:17.443351 systemd-udevd[465]: Using default interface naming scheme 'v255'. May 13 23:23:17.448801 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. May 13 23:23:17.464319 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... 
May 13 23:23:17.475820 dracut-pre-trigger[473]: rd.md=0: removing MD RAID activation May 13 23:23:17.503594 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. May 13 23:23:17.520339 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... May 13 23:23:17.562118 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. May 13 23:23:17.573322 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... May 13 23:23:17.584218 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. May 13 23:23:17.586070 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. May 13 23:23:17.589219 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. May 13 23:23:17.590840 systemd[1]: Reached target remote-fs.target - Remote File Systems. May 13 23:23:17.598307 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... May 13 23:23:17.603222 kernel: virtio_blk virtio1: 1/0/0 default/read/poll queues May 13 23:23:17.608784 kernel: virtio_blk virtio1: [vda] 19775488 512-byte logical blocks (10.1 GB/9.43 GiB) May 13 23:23:17.607617 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. May 13 23:23:17.615143 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. May 13 23:23:17.615167 kernel: GPT:9289727 != 19775487 May 13 23:23:17.615181 kernel: GPT:Alternate GPT header not at the end of the disk. May 13 23:23:17.615190 kernel: GPT:9289727 != 19775487 May 13 23:23:17.615199 kernel: GPT: Use GNU Parted to correct GPT errors. May 13 23:23:17.615210 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 May 13 23:23:17.615560 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. May 13 23:23:17.615671 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. May 13 23:23:17.618034 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... May 13 23:23:17.619245 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. May 13 23:23:17.619363 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. May 13 23:23:17.622964 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... May 13 23:23:17.635150 kernel: BTRFS: device fsid 3ace022a-b896-4c57-9fc3-590600d2a560 devid 1 transid 40 /dev/vda3 scanned by (udev-worker) (521) May 13 23:23:17.635192 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/vda6 scanned by (udev-worker) (529) May 13 23:23:17.637565 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... May 13 23:23:17.648862 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. May 13 23:23:17.665856 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT. May 13 23:23:17.673787 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM. May 13 23:23:17.679869 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132. May 13 23:23:17.680848 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A. May 13 23:23:17.689701 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. May 13 23:23:17.703295 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... 
May 13 23:23:17.705360 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... May 13 23:23:17.710322 disk-uuid[560]: Primary Header is updated. May 13 23:23:17.710322 disk-uuid[560]: Secondary Entries is updated. May 13 23:23:17.710322 disk-uuid[560]: Secondary Header is updated. May 13 23:23:17.717159 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 May 13 23:23:17.723153 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 May 13 23:23:17.728440 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. May 13 23:23:18.722388 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 May 13 23:23:18.722455 disk-uuid[561]: The operation has completed successfully. May 13 23:23:18.745246 systemd[1]: disk-uuid.service: Deactivated successfully. May 13 23:23:18.745348 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. May 13 23:23:18.787321 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr... May 13 23:23:18.791990 sh[580]: Success May 13 23:23:18.806230 kernel: device-mapper: verity: sha256 using implementation "sha256-ce" May 13 23:23:18.835208 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr. May 13 23:23:18.852580 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr... May 13 23:23:18.856186 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr. May 13 23:23:18.864209 kernel: BTRFS info (device dm-0): first mount of filesystem 3ace022a-b896-4c57-9fc3-590600d2a560 May 13 23:23:18.864250 kernel: BTRFS info (device dm-0): using crc32c (crc32c-generic) checksum algorithm May 13 23:23:18.864269 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead May 13 23:23:18.865529 kernel: BTRFS info (device dm-0): disabling log replay at mount time May 13 23:23:18.865542 kernel: BTRFS info (device dm-0): using free space tree May 13 23:23:18.869256 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr. May 13 23:23:18.870675 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. May 13 23:23:18.880292 systemd[1]: Starting ignition-setup.service - Ignition (setup)... May 13 23:23:18.882035 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... May 13 23:23:18.896609 kernel: BTRFS info (device vda6): first mount of filesystem cae47f07-14c5-46aa-b49d-052e48518cb3 May 13 23:23:18.896667 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm May 13 23:23:18.896679 kernel: BTRFS info (device vda6): using free space tree May 13 23:23:18.899153 kernel: BTRFS info (device vda6): auto enabling async discard May 13 23:23:18.903159 kernel: BTRFS info (device vda6): last unmount of filesystem cae47f07-14c5-46aa-b49d-052e48518cb3 May 13 23:23:18.906899 systemd[1]: Finished ignition-setup.service - Ignition (setup). May 13 23:23:18.912382 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... May 13 23:23:18.987074 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. May 13 23:23:18.997400 ignition[670]: Ignition 2.20.0 May 13 23:23:18.997411 ignition[670]: Stage: fetch-offline May 13 23:23:18.999348 systemd[1]: Starting systemd-networkd.service - Network Configuration... 
May 13 23:23:18.997447 ignition[670]: no configs at "/usr/lib/ignition/base.d" May 13 23:23:18.997455 ignition[670]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" May 13 23:23:18.997608 ignition[670]: parsed url from cmdline: "" May 13 23:23:18.997611 ignition[670]: no config URL provided May 13 23:23:18.997616 ignition[670]: reading system config file "/usr/lib/ignition/user.ign" May 13 23:23:18.997624 ignition[670]: no config at "/usr/lib/ignition/user.ign" May 13 23:23:18.997657 ignition[670]: op(1): [started] loading QEMU firmware config module May 13 23:23:18.997665 ignition[670]: op(1): executing: "modprobe" "qemu_fw_cfg" May 13 23:23:19.011620 ignition[670]: op(1): [finished] loading QEMU firmware config module May 13 23:23:19.017279 ignition[670]: parsing config with SHA512: 2f5d17d68816303f355414a619b029c59279768a1d1c722ca07de7abd53196feb06cb1b024a8b06fb317ce6ece7ad163db5d52d7d394736b77f70c43a13a655a May 13 23:23:19.020658 unknown[670]: fetched base config from "system" May 13 23:23:19.020669 unknown[670]: fetched user config from "qemu" May 13 23:23:19.020949 ignition[670]: fetch-offline: fetch-offline passed May 13 23:23:19.021019 ignition[670]: Ignition finished successfully May 13 23:23:19.025502 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). May 13 23:23:19.031723 systemd-networkd[768]: lo: Link UP May 13 23:23:19.031737 systemd-networkd[768]: lo: Gained carrier May 13 23:23:19.032596 systemd-networkd[768]: Enumeration completed May 13 23:23:19.032912 systemd[1]: Started systemd-networkd.service - Network Configuration. May 13 23:23:19.033029 systemd-networkd[768]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. May 13 23:23:19.033034 systemd-networkd[768]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. May 13 23:23:19.033853 systemd-networkd[768]: eth0: Link UP May 13 23:23:19.033856 systemd-networkd[768]: eth0: Gained carrier May 13 23:23:19.033864 systemd-networkd[768]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. May 13 23:23:19.034194 systemd[1]: Reached target network.target - Network. May 13 23:23:19.036037 systemd[1]: ignition-fetch.service - Ignition (fetch) was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json). May 13 23:23:19.044318 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... May 13 23:23:19.052197 systemd-networkd[768]: eth0: DHCPv4 address 10.0.0.24/16, gateway 10.0.0.1 acquired from 10.0.0.1 May 13 23:23:19.057556 ignition[775]: Ignition 2.20.0 May 13 23:23:19.057566 ignition[775]: Stage: kargs May 13 23:23:19.057754 ignition[775]: no configs at "/usr/lib/ignition/base.d" May 13 23:23:19.057763 ignition[775]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" May 13 23:23:19.060813 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). May 13 23:23:19.058450 ignition[775]: kargs: kargs passed May 13 23:23:19.058494 ignition[775]: Ignition finished successfully May 13 23:23:19.069414 systemd[1]: Starting ignition-disks.service - Ignition (disks)... 
May 13 23:23:19.079513 ignition[785]: Ignition 2.20.0 May 13 23:23:19.079525 ignition[785]: Stage: disks May 13 23:23:19.079715 ignition[785]: no configs at "/usr/lib/ignition/base.d" May 13 23:23:19.079726 ignition[785]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" May 13 23:23:19.080437 ignition[785]: disks: disks passed May 13 23:23:19.082278 systemd[1]: Finished ignition-disks.service - Ignition (disks). May 13 23:23:19.080487 ignition[785]: Ignition finished successfully May 13 23:23:19.083880 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. May 13 23:23:19.084931 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. May 13 23:23:19.085953 systemd[1]: Reached target local-fs.target - Local File Systems. May 13 23:23:19.086696 systemd[1]: Reached target sysinit.target - System Initialization. May 13 23:23:19.087954 systemd[1]: Reached target basic.target - Basic System. May 13 23:23:19.104346 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... May 13 23:23:19.114608 systemd-fsck[796]: ROOT: clean, 14/553520 files, 52654/553472 blocks May 13 23:23:19.118868 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. May 13 23:23:19.121889 systemd[1]: Mounting sysroot.mount - /sysroot... May 13 23:23:19.167146 kernel: EXT4-fs (vda9): mounted filesystem 2a058080-4242-485a-9945-403b4258c5f5 r/w with ordered data mode. Quota mode: none. May 13 23:23:19.167605 systemd[1]: Mounted sysroot.mount - /sysroot. May 13 23:23:19.168692 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. May 13 23:23:19.192249 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... May 13 23:23:19.194152 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... May 13 23:23:19.195314 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met. May 13 23:23:19.195363 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). May 13 23:23:19.195389 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. May 13 23:23:19.201866 kernel: BTRFS: device label OEM devid 1 transid 15 /dev/vda6 scanned by mount (804) May 13 23:23:19.201230 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. May 13 23:23:19.205326 kernel: BTRFS info (device vda6): first mount of filesystem cae47f07-14c5-46aa-b49d-052e48518cb3 May 13 23:23:19.205421 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm May 13 23:23:19.205450 kernel: BTRFS info (device vda6): using free space tree May 13 23:23:19.204497 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... May 13 23:23:19.207025 kernel: BTRFS info (device vda6): auto enabling async discard May 13 23:23:19.208862 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. May 13 23:23:19.249453 initrd-setup-root[828]: cut: /sysroot/etc/passwd: No such file or directory May 13 23:23:19.253876 initrd-setup-root[835]: cut: /sysroot/etc/group: No such file or directory May 13 23:23:19.258689 initrd-setup-root[842]: cut: /sysroot/etc/shadow: No such file or directory May 13 23:23:19.263189 initrd-setup-root[849]: cut: /sysroot/etc/gshadow: No such file or directory May 13 23:23:19.341749 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. 
May 13 23:23:19.352288 systemd[1]: Starting ignition-mount.service - Ignition (mount)... May 13 23:23:19.354913 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... May 13 23:23:19.360151 kernel: BTRFS info (device vda6): last unmount of filesystem cae47f07-14c5-46aa-b49d-052e48518cb3 May 13 23:23:19.381499 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. May 13 23:23:19.383205 ignition[917]: INFO : Ignition 2.20.0 May 13 23:23:19.383205 ignition[917]: INFO : Stage: mount May 13 23:23:19.383205 ignition[917]: INFO : no configs at "/usr/lib/ignition/base.d" May 13 23:23:19.383205 ignition[917]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" May 13 23:23:19.387814 ignition[917]: INFO : mount: mount passed May 13 23:23:19.387814 ignition[917]: INFO : Ignition finished successfully May 13 23:23:19.385373 systemd[1]: Finished ignition-mount.service - Ignition (mount). May 13 23:23:19.396302 systemd[1]: Starting ignition-files.service - Ignition (files)... May 13 23:23:20.002771 systemd[1]: sysroot-oem.mount: Deactivated successfully. May 13 23:23:20.011306 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... May 13 23:23:20.017154 kernel: BTRFS: device label OEM devid 1 transid 16 /dev/vda6 scanned by mount (931) May 13 23:23:20.019436 kernel: BTRFS info (device vda6): first mount of filesystem cae47f07-14c5-46aa-b49d-052e48518cb3 May 13 23:23:20.019521 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm May 13 23:23:20.019533 kernel: BTRFS info (device vda6): using free space tree May 13 23:23:20.022161 kernel: BTRFS info (device vda6): auto enabling async discard May 13 23:23:20.022730 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. May 13 23:23:20.039198 ignition[948]: INFO : Ignition 2.20.0 May 13 23:23:20.039198 ignition[948]: INFO : Stage: files May 13 23:23:20.040690 ignition[948]: INFO : no configs at "/usr/lib/ignition/base.d" May 13 23:23:20.040690 ignition[948]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" May 13 23:23:20.040690 ignition[948]: DEBUG : files: compiled without relabeling support, skipping May 13 23:23:20.043444 ignition[948]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" May 13 23:23:20.043444 ignition[948]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" May 13 23:23:20.045363 ignition[948]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" May 13 23:23:20.045363 ignition[948]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" May 13 23:23:20.045363 ignition[948]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" May 13 23:23:20.044785 unknown[948]: wrote ssh authorized keys file for user: core May 13 23:23:20.049402 ignition[948]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/home/core/install.sh" May 13 23:23:20.049402 ignition[948]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/home/core/install.sh" May 13 23:23:20.049402 ignition[948]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/etc/flatcar/update.conf" May 13 23:23:20.049402 ignition[948]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/etc/flatcar/update.conf" May 13 23:23:20.049402 ignition[948]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing link 
"/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.30.1-arm64.raw" May 13 23:23:20.049402 ignition[948]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.30.1-arm64.raw" May 13 23:23:20.049402 ignition[948]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.30.1-arm64.raw" May 13 23:23:20.049402 ignition[948]: INFO : files: createFilesystemsFiles: createFiles: op(6): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.30.1-arm64.raw: attempt #1 May 13 23:23:20.118296 systemd-networkd[768]: eth0: Gained IPv6LL May 13 23:23:20.390891 ignition[948]: INFO : files: createFilesystemsFiles: createFiles: op(6): GET result: OK May 13 23:23:20.717167 ignition[948]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.30.1-arm64.raw" May 13 23:23:20.717167 ignition[948]: INFO : files: op(7): [started] processing unit "coreos-metadata.service" May 13 23:23:20.720333 ignition[948]: INFO : files: op(7): op(8): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" May 13 23:23:20.720333 ignition[948]: INFO : files: op(7): op(8): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" May 13 23:23:20.720333 ignition[948]: INFO : files: op(7): [finished] processing unit "coreos-metadata.service" May 13 23:23:20.720333 ignition[948]: INFO : files: op(9): [started] setting preset to disabled for "coreos-metadata.service" May 13 23:23:20.730877 ignition[948]: INFO : files: op(9): op(a): [started] removing enablement symlink(s) for "coreos-metadata.service" May 13 23:23:20.734242 ignition[948]: INFO : files: op(9): op(a): [finished] removing enablement symlink(s) for "coreos-metadata.service" May 13 23:23:20.735526 ignition[948]: INFO : files: op(9): [finished] setting preset to disabled for "coreos-metadata.service" May 13 23:23:20.735526 ignition[948]: INFO : files: createResultFile: createFiles: op(b): [started] writing file "/sysroot/etc/.ignition-result.json" May 13 23:23:20.735526 ignition[948]: INFO : files: createResultFile: createFiles: op(b): [finished] writing file "/sysroot/etc/.ignition-result.json" May 13 23:23:20.735526 ignition[948]: INFO : files: files passed May 13 23:23:20.735526 ignition[948]: INFO : Ignition finished successfully May 13 23:23:20.737043 systemd[1]: Finished ignition-files.service - Ignition (files). May 13 23:23:20.745323 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... May 13 23:23:20.747595 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... May 13 23:23:20.749797 systemd[1]: ignition-quench.service: Deactivated successfully. May 13 23:23:20.750700 systemd[1]: Finished ignition-quench.service - Ignition (record completion). 
May 13 23:23:20.754305 initrd-setup-root-after-ignition[976]: grep: /sysroot/oem/oem-release: No such file or directory May 13 23:23:20.758536 initrd-setup-root-after-ignition[978]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory May 13 23:23:20.758536 initrd-setup-root-after-ignition[978]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory May 13 23:23:20.761369 initrd-setup-root-after-ignition[982]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory May 13 23:23:20.763383 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. May 13 23:23:20.764863 systemd[1]: Reached target ignition-complete.target - Ignition Complete. May 13 23:23:20.776339 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... May 13 23:23:20.795247 systemd[1]: initrd-parse-etc.service: Deactivated successfully. May 13 23:23:20.796209 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. May 13 23:23:20.797529 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. May 13 23:23:20.799175 systemd[1]: Reached target initrd.target - Initrd Default Target. May 13 23:23:20.800966 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. May 13 23:23:20.801873 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... May 13 23:23:20.817280 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. May 13 23:23:20.828332 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... May 13 23:23:20.836723 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. May 13 23:23:20.837866 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. May 13 23:23:20.839724 systemd[1]: Stopped target timers.target - Timer Units. May 13 23:23:20.841105 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. May 13 23:23:20.841247 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. May 13 23:23:20.843233 systemd[1]: Stopped target initrd.target - Initrd Default Target. May 13 23:23:20.844821 systemd[1]: Stopped target basic.target - Basic System. May 13 23:23:20.846155 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. May 13 23:23:20.847759 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. May 13 23:23:20.849292 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. May 13 23:23:20.850884 systemd[1]: Stopped target remote-fs.target - Remote File Systems. May 13 23:23:20.852339 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. May 13 23:23:20.853963 systemd[1]: Stopped target sysinit.target - System Initialization. May 13 23:23:20.855625 systemd[1]: Stopped target local-fs.target - Local File Systems. May 13 23:23:20.856995 systemd[1]: Stopped target swap.target - Swaps. May 13 23:23:20.858371 systemd[1]: dracut-pre-mount.service: Deactivated successfully. May 13 23:23:20.858506 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. May 13 23:23:20.860447 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. May 13 23:23:20.862231 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). May 13 23:23:20.863933 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. 
May 13 23:23:20.867226 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. May 13 23:23:20.868301 systemd[1]: dracut-initqueue.service: Deactivated successfully. May 13 23:23:20.868426 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. May 13 23:23:20.870859 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. May 13 23:23:20.870978 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). May 13 23:23:20.872727 systemd[1]: Stopped target paths.target - Path Units. May 13 23:23:20.874258 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. May 13 23:23:20.875764 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. May 13 23:23:20.877603 systemd[1]: Stopped target slices.target - Slice Units. May 13 23:23:20.879435 systemd[1]: Stopped target sockets.target - Socket Units. May 13 23:23:20.880843 systemd[1]: iscsid.socket: Deactivated successfully. May 13 23:23:20.880927 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. May 13 23:23:20.882237 systemd[1]: iscsiuio.socket: Deactivated successfully. May 13 23:23:20.882309 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. May 13 23:23:20.883897 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. May 13 23:23:20.884007 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. May 13 23:23:20.885492 systemd[1]: ignition-files.service: Deactivated successfully. May 13 23:23:20.885588 systemd[1]: Stopped ignition-files.service - Ignition (files). May 13 23:23:20.897322 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... May 13 23:23:20.898817 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... May 13 23:23:20.899561 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. May 13 23:23:20.899687 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. May 13 23:23:20.901269 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. May 13 23:23:20.901373 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. May 13 23:23:20.907362 ignition[1002]: INFO : Ignition 2.20.0 May 13 23:23:20.907362 ignition[1002]: INFO : Stage: umount May 13 23:23:20.908680 ignition[1002]: INFO : no configs at "/usr/lib/ignition/base.d" May 13 23:23:20.908680 ignition[1002]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" May 13 23:23:20.908680 ignition[1002]: INFO : umount: umount passed May 13 23:23:20.908680 ignition[1002]: INFO : Ignition finished successfully May 13 23:23:20.907990 systemd[1]: initrd-cleanup.service: Deactivated successfully. May 13 23:23:20.908335 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. May 13 23:23:20.911296 systemd[1]: ignition-mount.service: Deactivated successfully. May 13 23:23:20.911383 systemd[1]: Stopped ignition-mount.service - Ignition (mount). May 13 23:23:20.913908 systemd[1]: sysroot-boot.mount: Deactivated successfully. May 13 23:23:20.914304 systemd[1]: Stopped target network.target - Network. May 13 23:23:20.915407 systemd[1]: ignition-disks.service: Deactivated successfully. May 13 23:23:20.915459 systemd[1]: Stopped ignition-disks.service - Ignition (disks). May 13 23:23:20.917280 systemd[1]: ignition-kargs.service: Deactivated successfully. May 13 23:23:20.917322 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). 
May 13 23:23:20.918904 systemd[1]: ignition-setup.service: Deactivated successfully. May 13 23:23:20.918946 systemd[1]: Stopped ignition-setup.service - Ignition (setup). May 13 23:23:20.920566 systemd[1]: ignition-setup-pre.service: Deactivated successfully. May 13 23:23:20.920619 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. May 13 23:23:20.922224 systemd[1]: Stopping systemd-networkd.service - Network Configuration... May 13 23:23:20.925205 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... May 13 23:23:20.926903 systemd[1]: sysroot-boot.service: Deactivated successfully. May 13 23:23:20.926994 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. May 13 23:23:20.928186 systemd[1]: systemd-resolved.service: Deactivated successfully. May 13 23:23:20.928288 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. May 13 23:23:20.931245 systemd[1]: run-credentials-systemd\x2dresolved.service.mount: Deactivated successfully. May 13 23:23:20.932463 systemd[1]: initrd-setup-root.service: Deactivated successfully. May 13 23:23:20.932528 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. May 13 23:23:20.934796 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. May 13 23:23:20.934847 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. May 13 23:23:20.938073 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup.service.mount: Deactivated successfully. May 13 23:23:20.938392 systemd[1]: systemd-networkd.service: Deactivated successfully. May 13 23:23:20.938518 systemd[1]: Stopped systemd-networkd.service - Network Configuration. May 13 23:23:20.941041 systemd[1]: systemd-networkd.socket: Deactivated successfully. May 13 23:23:20.941124 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. May 13 23:23:20.949267 systemd[1]: Stopping network-cleanup.service - Network Cleanup... May 13 23:23:20.950146 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. May 13 23:23:20.950201 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. May 13 23:23:20.951964 systemd[1]: systemd-sysctl.service: Deactivated successfully. May 13 23:23:20.952004 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. May 13 23:23:20.954648 systemd[1]: systemd-modules-load.service: Deactivated successfully. May 13 23:23:20.954690 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. May 13 23:23:20.956263 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... May 13 23:23:20.966466 systemd[1]: network-cleanup.service: Deactivated successfully. May 13 23:23:20.966591 systemd[1]: Stopped network-cleanup.service - Network Cleanup. May 13 23:23:20.971719 systemd[1]: systemd-udevd.service: Deactivated successfully. May 13 23:23:20.971880 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. May 13 23:23:20.973926 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. May 13 23:23:20.973964 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. May 13 23:23:20.975021 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. May 13 23:23:20.975052 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. May 13 23:23:20.977000 systemd[1]: dracut-pre-udev.service: Deactivated successfully. 
May 13 23:23:20.977048 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. May 13 23:23:20.979615 systemd[1]: dracut-cmdline.service: Deactivated successfully. May 13 23:23:20.979672 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. May 13 23:23:20.981961 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. May 13 23:23:20.982002 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. May 13 23:23:20.998290 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... May 13 23:23:20.999172 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. May 13 23:23:20.999230 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. May 13 23:23:21.001989 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully. May 13 23:23:21.002030 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. May 13 23:23:21.004178 systemd[1]: kmod-static-nodes.service: Deactivated successfully. May 13 23:23:21.004223 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. May 13 23:23:21.006068 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. May 13 23:23:21.006117 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. May 13 23:23:21.009596 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully. May 13 23:23:21.009657 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup\x2ddev.service.mount: Deactivated successfully. May 13 23:23:21.009696 systemd[1]: run-credentials-systemd\x2dnetworkd.service.mount: Deactivated successfully. May 13 23:23:21.009727 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully. May 13 23:23:21.010011 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. May 13 23:23:21.010094 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. May 13 23:23:21.012365 systemd[1]: Reached target initrd-switch-root.target - Switch Root. May 13 23:23:21.024271 systemd[1]: Starting initrd-switch-root.service - Switch Root... May 13 23:23:21.030246 systemd[1]: Switching root. May 13 23:23:21.066287 systemd-journald[239]: Journal stopped May 13 23:23:21.782209 systemd-journald[239]: Received SIGTERM from PID 1 (systemd). May 13 23:23:21.782273 kernel: SELinux: policy capability network_peer_controls=1 May 13 23:23:21.782293 kernel: SELinux: policy capability open_perms=1 May 13 23:23:21.782303 kernel: SELinux: policy capability extended_socket_class=1 May 13 23:23:21.782313 kernel: SELinux: policy capability always_check_network=0 May 13 23:23:21.782322 kernel: SELinux: policy capability cgroup_seclabel=1 May 13 23:23:21.782336 kernel: SELinux: policy capability nnp_nosuid_transition=1 May 13 23:23:21.782346 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 May 13 23:23:21.782355 kernel: SELinux: policy capability ioctl_skip_cloexec=0 May 13 23:23:21.782365 kernel: audit: type=1403 audit(1747178601.183:2): auid=4294967295 ses=4294967295 lsm=selinux res=1 May 13 23:23:21.782375 systemd[1]: Successfully loaded SELinux policy in 32.093ms. May 13 23:23:21.782391 systemd[1]: Relabeled /dev/, /dev/shm/, /run/ in 9.843ms. 
May 13 23:23:21.782402 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE) May 13 23:23:21.782415 systemd[1]: Detected virtualization kvm. May 13 23:23:21.782428 systemd[1]: Detected architecture arm64. May 13 23:23:21.782439 systemd[1]: Detected first boot. May 13 23:23:21.782451 systemd[1]: Initializing machine ID from VM UUID. May 13 23:23:21.782462 zram_generator::config[1048]: No configuration found. May 13 23:23:21.782474 kernel: NET: Registered PF_VSOCK protocol family May 13 23:23:21.782484 systemd[1]: Populated /etc with preset unit settings. May 13 23:23:21.782495 systemd[1]: run-credentials-systemd\x2djournald.service.mount: Deactivated successfully. May 13 23:23:21.782506 systemd[1]: initrd-switch-root.service: Deactivated successfully. May 13 23:23:21.782517 systemd[1]: Stopped initrd-switch-root.service - Switch Root. May 13 23:23:21.782528 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. May 13 23:23:21.782580 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. May 13 23:23:21.782592 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. May 13 23:23:21.782603 systemd[1]: Created slice system-getty.slice - Slice /system/getty. May 13 23:23:21.782613 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. May 13 23:23:21.782623 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. May 13 23:23:21.782634 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. May 13 23:23:21.782655 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. May 13 23:23:21.782666 systemd[1]: Created slice user.slice - User and Session Slice. May 13 23:23:21.782677 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. May 13 23:23:21.782690 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. May 13 23:23:21.782700 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. May 13 23:23:21.782712 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. May 13 23:23:21.782723 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. May 13 23:23:21.782734 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... May 13 23:23:21.782748 systemd[1]: Expecting device dev-ttyAMA0.device - /dev/ttyAMA0... May 13 23:23:21.782759 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). May 13 23:23:21.782769 systemd[1]: Stopped target initrd-switch-root.target - Switch Root. May 13 23:23:21.782782 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems. May 13 23:23:21.782792 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System. May 13 23:23:21.782803 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. May 13 23:23:21.782814 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. 
May 13 23:23:21.782825 systemd[1]: Reached target remote-fs.target - Remote File Systems. May 13 23:23:21.782835 systemd[1]: Reached target slices.target - Slice Units. May 13 23:23:21.782845 systemd[1]: Reached target swap.target - Swaps. May 13 23:23:21.782856 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. May 13 23:23:21.782866 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. May 13 23:23:21.782878 systemd[1]: Listening on systemd-creds.socket - Credential Encryption/Decryption. May 13 23:23:21.782889 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. May 13 23:23:21.782899 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. May 13 23:23:21.782909 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. May 13 23:23:21.782920 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. May 13 23:23:21.782930 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... May 13 23:23:21.782942 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... May 13 23:23:21.782952 systemd[1]: Mounting media.mount - External Media Directory... May 13 23:23:21.782962 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... May 13 23:23:21.782975 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... May 13 23:23:21.782985 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... May 13 23:23:21.782997 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). May 13 23:23:21.783007 systemd[1]: Reached target machines.target - Containers. May 13 23:23:21.783018 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... May 13 23:23:21.783029 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. May 13 23:23:21.783040 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... May 13 23:23:21.783051 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... May 13 23:23:21.783063 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... May 13 23:23:21.783074 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... May 13 23:23:21.783085 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... May 13 23:23:21.783096 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... May 13 23:23:21.783113 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... May 13 23:23:21.783124 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). May 13 23:23:21.783170 systemd[1]: systemd-fsck-root.service: Deactivated successfully. May 13 23:23:21.783185 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device. May 13 23:23:21.783197 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. May 13 23:23:21.783210 systemd[1]: Stopped systemd-fsck-usr.service. May 13 23:23:21.783222 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). 
May 13 23:23:21.783233 kernel: fuse: init (API version 7.39) May 13 23:23:21.783243 systemd[1]: Starting systemd-journald.service - Journal Service... May 13 23:23:21.783254 kernel: loop: module loaded May 13 23:23:21.783264 kernel: ACPI: bus type drm_connector registered May 13 23:23:21.783276 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... May 13 23:23:21.783289 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... May 13 23:23:21.783300 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... May 13 23:23:21.783311 systemd[1]: Starting systemd-udev-load-credentials.service - Load udev Rules from Credentials... May 13 23:23:21.783321 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... May 13 23:23:21.783335 systemd[1]: verity-setup.service: Deactivated successfully. May 13 23:23:21.783346 systemd[1]: Stopped verity-setup.service. May 13 23:23:21.783357 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. May 13 23:23:21.783367 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. May 13 23:23:21.783378 systemd[1]: Mounted media.mount - External Media Directory. May 13 23:23:21.783413 systemd-journald[1123]: Collecting audit messages is disabled. May 13 23:23:21.783435 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. May 13 23:23:21.783446 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. May 13 23:23:21.783459 systemd-journald[1123]: Journal started May 13 23:23:21.783481 systemd-journald[1123]: Runtime Journal (/run/log/journal/b555ececbafa4412b2fa3ba124fb5685) is 5.9M, max 47.3M, 41.4M free. May 13 23:23:21.580797 systemd[1]: Queued start job for default target multi-user.target. May 13 23:23:21.594394 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6. May 13 23:23:21.594806 systemd[1]: systemd-journald.service: Deactivated successfully. May 13 23:23:21.786708 systemd[1]: Started systemd-journald.service - Journal Service. May 13 23:23:21.787371 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. May 13 23:23:21.788527 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. May 13 23:23:21.791176 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. May 13 23:23:21.792362 systemd[1]: modprobe@configfs.service: Deactivated successfully. May 13 23:23:21.792527 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. May 13 23:23:21.793822 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. May 13 23:23:21.793994 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. May 13 23:23:21.795283 systemd[1]: modprobe@drm.service: Deactivated successfully. May 13 23:23:21.795443 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. May 13 23:23:21.796558 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. May 13 23:23:21.796741 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. May 13 23:23:21.798181 systemd[1]: modprobe@fuse.service: Deactivated successfully. May 13 23:23:21.798341 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. May 13 23:23:21.799459 systemd[1]: modprobe@loop.service: Deactivated successfully. May 13 23:23:21.799617 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. 
May 13 23:23:21.800904 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. May 13 23:23:21.802445 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. May 13 23:23:21.803699 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. May 13 23:23:21.805021 systemd[1]: Finished systemd-udev-load-credentials.service - Load udev Rules from Credentials. May 13 23:23:21.817614 systemd[1]: Reached target network-pre.target - Preparation for Network. May 13 23:23:21.828259 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... May 13 23:23:21.830330 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... May 13 23:23:21.831236 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). May 13 23:23:21.831275 systemd[1]: Reached target local-fs.target - Local File Systems. May 13 23:23:21.833057 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management. May 13 23:23:21.835342 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... May 13 23:23:21.837632 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... May 13 23:23:21.838589 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. May 13 23:23:21.839999 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... May 13 23:23:21.841904 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... May 13 23:23:21.843066 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). May 13 23:23:21.846328 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... May 13 23:23:21.847408 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. May 13 23:23:21.850388 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... May 13 23:23:21.853080 systemd-journald[1123]: Time spent on flushing to /var/log/journal/b555ececbafa4412b2fa3ba124fb5685 is 14.043ms for 854 entries. May 13 23:23:21.853080 systemd-journald[1123]: System Journal (/var/log/journal/b555ececbafa4412b2fa3ba124fb5685) is 8M, max 195.6M, 187.6M free. May 13 23:23:21.875002 systemd-journald[1123]: Received client request to flush runtime journal. May 13 23:23:21.857450 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... May 13 23:23:21.863112 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... May 13 23:23:21.866857 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. May 13 23:23:21.873120 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. May 13 23:23:21.874122 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. May 13 23:23:21.875406 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. May 13 23:23:21.877526 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. 
May 13 23:23:21.880718 kernel: loop0: detected capacity change from 0 to 113512 May 13 23:23:21.880108 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. May 13 23:23:21.881830 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. May 13 23:23:21.885497 systemd-tmpfiles[1167]: ACLs are not supported, ignoring. May 13 23:23:21.885516 systemd-tmpfiles[1167]: ACLs are not supported, ignoring. May 13 23:23:21.894231 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. May 13 23:23:21.896235 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher May 13 23:23:21.901851 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. May 13 23:23:21.910326 systemd[1]: Starting systemd-machine-id-commit.service - Save Transient machine-id to Disk... May 13 23:23:21.913027 systemd[1]: Starting systemd-sysusers.service - Create System Users... May 13 23:23:21.916366 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization... May 13 23:23:21.929160 kernel: loop1: detected capacity change from 0 to 123192 May 13 23:23:21.932217 udevadm[1185]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation.service, lvm2-activation-early.service not to pull it in. May 13 23:23:21.936154 systemd[1]: Finished systemd-machine-id-commit.service - Save Transient machine-id to Disk. May 13 23:23:21.945057 systemd[1]: Finished systemd-sysusers.service - Create System Users. May 13 23:23:21.953329 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... May 13 23:23:21.966056 systemd-tmpfiles[1190]: ACLs are not supported, ignoring. May 13 23:23:21.966404 systemd-tmpfiles[1190]: ACLs are not supported, ignoring. May 13 23:23:21.970696 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. May 13 23:23:21.971255 kernel: loop2: detected capacity change from 0 to 194096 May 13 23:23:22.014186 kernel: loop3: detected capacity change from 0 to 113512 May 13 23:23:22.019248 kernel: loop4: detected capacity change from 0 to 123192 May 13 23:23:22.025174 kernel: loop5: detected capacity change from 0 to 194096 May 13 23:23:22.030413 (sd-merge)[1194]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes'. May 13 23:23:22.030845 (sd-merge)[1194]: Merged extensions into '/usr'. May 13 23:23:22.036488 systemd[1]: Reload requested from client PID 1165 ('systemd-sysext') (unit systemd-sysext.service)... May 13 23:23:22.036502 systemd[1]: Reloading... May 13 23:23:22.098732 zram_generator::config[1222]: No configuration found. May 13 23:23:22.127023 ldconfig[1160]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. May 13 23:23:22.190091 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. May 13 23:23:22.240253 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. May 13 23:23:22.240722 systemd[1]: Reloading finished in 203 ms. May 13 23:23:22.257954 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. May 13 23:23:22.259526 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. May 13 23:23:22.271545 systemd[1]: Starting ensure-sysext.service... 
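For orientation, the (sd-merge) lines above are systemd-sysext at work: it found the containerd-flatcar, docker-flatcar and kubernetes extension images (the kubernetes one being the .raw file Ignition placed earlier), merged them into /usr, and then requested the manager reload that is logged right after. As a rough sketch only, assuming the systemd-sysext command-line tool is available as it is on Flatcar, the merged state could be inspected from a script like this:

import subprocess

# Illustration only: list the extension images systemd-sysext can see
# (it scans directories such as /etc/extensions and /var/lib/extensions)
# and show which hierarchies are currently merged.
subprocess.run(["systemd-sysext", "list"], check=True)
subprocess.run(["systemd-sysext", "status"], check=True)

# After adding or removing a *.raw image, "refresh" re-merges in one step
# (roughly unmerge followed by merge); it normally requires root:
# subprocess.run(["systemd-sysext", "refresh"], check=True)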
May 13 23:23:22.273664 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... May 13 23:23:22.285179 systemd[1]: Reload requested from client PID 1256 ('systemctl') (unit ensure-sysext.service)... May 13 23:23:22.285195 systemd[1]: Reloading... May 13 23:23:22.291499 systemd-tmpfiles[1257]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. May 13 23:23:22.291721 systemd-tmpfiles[1257]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. May 13 23:23:22.292368 systemd-tmpfiles[1257]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. May 13 23:23:22.292576 systemd-tmpfiles[1257]: ACLs are not supported, ignoring. May 13 23:23:22.292629 systemd-tmpfiles[1257]: ACLs are not supported, ignoring. May 13 23:23:22.295179 systemd-tmpfiles[1257]: Detected autofs mount point /boot during canonicalization of boot. May 13 23:23:22.295190 systemd-tmpfiles[1257]: Skipping /boot May 13 23:23:22.304586 systemd-tmpfiles[1257]: Detected autofs mount point /boot during canonicalization of boot. May 13 23:23:22.304606 systemd-tmpfiles[1257]: Skipping /boot May 13 23:23:22.337177 zram_generator::config[1286]: No configuration found. May 13 23:23:22.420594 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. May 13 23:23:22.471121 systemd[1]: Reloading finished in 185 ms. May 13 23:23:22.479791 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. May 13 23:23:22.494190 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. May 13 23:23:22.501964 systemd[1]: Starting audit-rules.service - Load Audit Rules... May 13 23:23:22.504417 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... May 13 23:23:22.506590 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... May 13 23:23:22.512481 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... May 13 23:23:22.515789 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... May 13 23:23:22.520395 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... May 13 23:23:22.524420 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. May 13 23:23:22.528779 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... May 13 23:23:22.537855 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... May 13 23:23:22.540271 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... May 13 23:23:22.541422 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. May 13 23:23:22.541548 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). May 13 23:23:22.542478 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. May 13 23:23:22.543209 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. 
May 13 23:23:22.547553 systemd[1]: modprobe@loop.service: Deactivated successfully. May 13 23:23:22.548121 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. May 13 23:23:22.550097 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. May 13 23:23:22.550314 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. May 13 23:23:22.551961 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. May 13 23:23:22.559623 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). May 13 23:23:22.559838 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. May 13 23:23:22.566976 systemd[1]: Starting systemd-update-done.service - Update is Completed... May 13 23:23:22.569543 systemd-udevd[1329]: Using default interface naming scheme 'v255'. May 13 23:23:22.573414 systemd[1]: Starting systemd-userdbd.service - User Database Manager... May 13 23:23:22.575634 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. May 13 23:23:22.577298 systemd[1]: Finished systemd-update-done.service - Update is Completed. May 13 23:23:22.581957 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. May 13 23:23:22.593418 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... May 13 23:23:22.597093 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... May 13 23:23:22.599941 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... May 13 23:23:22.600964 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. May 13 23:23:22.601092 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). May 13 23:23:22.603239 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. May 13 23:23:22.605563 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. May 13 23:23:22.607631 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. May 13 23:23:22.607812 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. May 13 23:23:22.609382 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. May 13 23:23:22.610321 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. May 13 23:23:22.618845 systemd[1]: Started systemd-userdbd.service - User Database Manager. May 13 23:23:22.630573 systemd[1]: Finished ensure-sysext.service. May 13 23:23:22.633218 augenrules[1388]: No rules May 13 23:23:22.634945 systemd[1]: audit-rules.service: Deactivated successfully. May 13 23:23:22.636305 systemd[1]: Finished audit-rules.service - Load Audit Rules. May 13 23:23:22.650024 systemd[1]: modprobe@loop.service: Deactivated successfully. May 13 23:23:22.650281 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. May 13 23:23:22.653366 systemd[1]: Condition check resulted in dev-ttyAMA0.device - /dev/ttyAMA0 being skipped. 
May 13 23:23:22.653958 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. May 13 23:23:22.659030 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... May 13 23:23:22.663321 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... May 13 23:23:22.666343 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... May 13 23:23:22.667308 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. May 13 23:23:22.667361 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). May 13 23:23:22.669363 systemd[1]: Starting systemd-networkd.service - Network Configuration... May 13 23:23:22.673281 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization... May 13 23:23:22.674105 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). May 13 23:23:22.674689 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. May 13 23:23:22.674859 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. May 13 23:23:22.676391 systemd[1]: modprobe@drm.service: Deactivated successfully. May 13 23:23:22.676551 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. May 13 23:23:22.678017 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. May 13 23:23:22.678189 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. May 13 23:23:22.682214 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). May 13 23:23:22.682262 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. May 13 23:23:22.708064 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 40 scanned by (udev-worker) (1378) May 13 23:23:22.707319 systemd-resolved[1326]: Positive Trust Anchors: May 13 23:23:22.707340 systemd-resolved[1326]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d May 13 23:23:22.707371 systemd-resolved[1326]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test May 13 23:23:22.713072 systemd-resolved[1326]: Defaulting to hostname 'linux'. May 13 23:23:22.719904 systemd[1]: Started systemd-resolved.service - Network Name Resolution. May 13 23:23:22.728820 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. May 13 23:23:22.743409 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. 
May 13 23:23:22.755424 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... May 13 23:23:22.773777 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization. May 13 23:23:22.774750 systemd-networkd[1400]: lo: Link UP May 13 23:23:22.774974 systemd[1]: Reached target time-set.target - System Time Set. May 13 23:23:22.775082 systemd-networkd[1400]: lo: Gained carrier May 13 23:23:22.776208 systemd-networkd[1400]: Enumeration completed May 13 23:23:22.776558 systemd[1]: Started systemd-networkd.service - Network Configuration. May 13 23:23:22.777620 systemd[1]: Reached target network.target - Network. May 13 23:23:22.783843 systemd-networkd[1400]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. May 13 23:23:22.783930 systemd-networkd[1400]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. May 13 23:23:22.784499 systemd-networkd[1400]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. May 13 23:23:22.784600 systemd-networkd[1400]: eth0: Link UP May 13 23:23:22.784656 systemd-networkd[1400]: eth0: Gained carrier May 13 23:23:22.784705 systemd-networkd[1400]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. May 13 23:23:22.790477 systemd[1]: Starting systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd... May 13 23:23:22.793785 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... May 13 23:23:22.795694 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. May 13 23:23:22.806248 systemd-networkd[1400]: eth0: DHCPv4 address 10.0.0.24/16, gateway 10.0.0.1 acquired from 10.0.0.1 May 13 23:23:22.807036 systemd-timesyncd[1401]: Network configuration changed, trying to establish connection. May 13 23:23:22.808475 systemd-timesyncd[1401]: Contacted time server 10.0.0.1:123 (10.0.0.1). May 13 23:23:22.808916 systemd-timesyncd[1401]: Initial clock synchronization to Tue 2025-05-13 23:23:22.996230 UTC. May 13 23:23:22.813260 systemd[1]: Finished systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd. May 13 23:23:22.834316 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... May 13 23:23:22.835542 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization. May 13 23:23:22.838841 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes... May 13 23:23:22.866592 lvm[1425]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. May 13 23:23:22.882339 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. May 13 23:23:22.901201 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes. May 13 23:23:22.902397 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. May 13 23:23:22.903275 systemd[1]: Reached target sysinit.target - System Initialization. May 13 23:23:22.904111 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. May 13 23:23:22.904982 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. 
May 13 23:23:22.906188 systemd[1]: Started logrotate.timer - Daily rotation of log files. May 13 23:23:22.907068 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. May 13 23:23:22.908127 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. May 13 23:23:22.909028 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). May 13 23:23:22.909066 systemd[1]: Reached target paths.target - Path Units. May 13 23:23:22.909871 systemd[1]: Reached target timers.target - Timer Units. May 13 23:23:22.911717 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. May 13 23:23:22.914011 systemd[1]: Starting docker.socket - Docker Socket for the API... May 13 23:23:22.917708 systemd[1]: Listening on sshd-unix-local.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_UNIX Local). May 13 23:23:22.918828 systemd[1]: Listening on sshd-vsock.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_VSOCK). May 13 23:23:22.919975 systemd[1]: Reached target ssh-access.target - SSH Access Available. May 13 23:23:22.922998 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. May 13 23:23:22.924506 systemd[1]: Listening on systemd-hostnamed.socket - Hostname Service Socket. May 13 23:23:22.926491 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes... May 13 23:23:22.927903 systemd[1]: Listening on docker.socket - Docker Socket for the API. May 13 23:23:22.928811 systemd[1]: Reached target sockets.target - Socket Units. May 13 23:23:22.929558 systemd[1]: Reached target basic.target - Basic System. May 13 23:23:22.930352 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. May 13 23:23:22.930384 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. May 13 23:23:22.931238 systemd[1]: Starting containerd.service - containerd container runtime... May 13 23:23:22.932947 systemd[1]: Starting dbus.service - D-Bus System Message Bus... May 13 23:23:22.935266 lvm[1433]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. May 13 23:23:22.936285 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... May 13 23:23:22.939318 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... May 13 23:23:22.941519 jq[1436]: false May 13 23:23:22.941600 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). May 13 23:23:22.942602 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... May 13 23:23:22.946373 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... May 13 23:23:22.951293 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... 
May 13 23:23:22.954171 extend-filesystems[1437]: Found loop3 May 13 23:23:22.954171 extend-filesystems[1437]: Found loop4 May 13 23:23:22.954171 extend-filesystems[1437]: Found loop5 May 13 23:23:22.954171 extend-filesystems[1437]: Found vda May 13 23:23:22.954171 extend-filesystems[1437]: Found vda1 May 13 23:23:22.954171 extend-filesystems[1437]: Found vda2 May 13 23:23:22.954171 extend-filesystems[1437]: Found vda3 May 13 23:23:22.954171 extend-filesystems[1437]: Found usr May 13 23:23:22.954171 extend-filesystems[1437]: Found vda4 May 13 23:23:22.954171 extend-filesystems[1437]: Found vda6 May 13 23:23:22.956383 systemd[1]: Starting systemd-logind.service - User Login Management... May 13 23:23:22.970446 extend-filesystems[1437]: Found vda7 May 13 23:23:22.970446 extend-filesystems[1437]: Found vda9 May 13 23:23:22.970446 extend-filesystems[1437]: Checking size of /dev/vda9 May 13 23:23:22.970446 extend-filesystems[1437]: Resized partition /dev/vda9 May 13 23:23:22.959716 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). May 13 23:23:22.977067 extend-filesystems[1457]: resize2fs 1.47.1 (20-May-2024) May 13 23:23:22.971883 dbus-daemon[1435]: [system] SELinux support is enabled May 13 23:23:22.961266 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. May 13 23:23:22.961999 systemd[1]: Starting update-engine.service - Update Engine... May 13 23:23:22.967264 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... May 13 23:23:22.971225 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes. May 13 23:23:22.974344 systemd[1]: Started dbus.service - D-Bus System Message Bus. May 13 23:23:22.980628 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. May 13 23:23:22.980830 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. May 13 23:23:22.982193 kernel: EXT4-fs (vda9): resizing filesystem from 553472 to 1864699 blocks May 13 23:23:22.982220 systemd[1]: motdgen.service: Deactivated successfully. May 13 23:23:22.982488 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. May 13 23:23:22.983657 jq[1456]: true May 13 23:23:22.983710 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. May 13 23:23:22.983881 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. 
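The EXT4-fs message just above and the extend-filesystems summary a few lines below describe the on-line growth of the root filesystem on /dev/vda9 from 553472 to 1864699 blocks; with the 4k block size reported there, that is roughly 2.1 GiB growing to about 7.1 GiB. A trivial check of that arithmetic (illustrative only):

# Convert the block counts logged for /dev/vda9 into bytes and GiB (4096-byte blocks).
BLOCK_SIZE = 4096
for label, blocks in (("before", 553_472), ("after", 1_864_699)):
    size = blocks * BLOCK_SIZE
    print(f"{label}: {blocks} blocks = {size} bytes ≈ {size / 2**30:.2f} GiB")
# before: 553472 blocks = 2267021312 bytes ≈ 2.11 GiB
# after: 1864699 blocks = 7637807104 bytes ≈ 7.11 GiB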
May 13 23:23:22.994184 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 40 scanned by (udev-worker) (1359) May 13 23:23:23.006176 update_engine[1453]: I20250513 23:23:23.005069 1453 main.cc:92] Flatcar Update Engine starting May 13 23:23:23.008281 jq[1459]: true May 13 23:23:23.015385 kernel: EXT4-fs (vda9): resized filesystem to 1864699 May 13 23:23:23.031584 update_engine[1453]: I20250513 23:23:23.015002 1453 update_check_scheduler.cc:74] Next update check in 6m20s May 13 23:23:23.015802 (ntainerd)[1467]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR May 13 23:23:23.047709 extend-filesystems[1457]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required May 13 23:23:23.047709 extend-filesystems[1457]: old_desc_blocks = 1, new_desc_blocks = 1 May 13 23:23:23.047709 extend-filesystems[1457]: The filesystem on /dev/vda9 is now 1864699 (4k) blocks long. May 13 23:23:23.023319 systemd[1]: Started update-engine.service - Update Engine. May 13 23:23:23.056303 extend-filesystems[1437]: Resized filesystem in /dev/vda9 May 13 23:23:23.024789 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). May 13 23:23:23.024814 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. May 13 23:23:23.058167 bash[1484]: Updated "/home/core/.ssh/authorized_keys" May 13 23:23:23.026219 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). May 13 23:23:23.026237 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. May 13 23:23:23.047457 systemd[1]: Started locksmithd.service - Cluster reboot manager. May 13 23:23:23.049048 systemd[1]: extend-filesystems.service: Deactivated successfully. May 13 23:23:23.049274 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. May 13 23:23:23.052512 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. May 13 23:23:23.058292 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met. May 13 23:23:23.063450 systemd-logind[1446]: Watching system buttons on /dev/input/event0 (Power Button) May 13 23:23:23.067571 systemd-logind[1446]: New seat seat0. May 13 23:23:23.072240 systemd[1]: Started systemd-logind.service - User Login Management. May 13 23:23:23.101517 locksmithd[1485]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" May 13 23:23:23.236748 containerd[1467]: time="2025-05-13T23:23:23.236592563Z" level=info msg="starting containerd" revision=9b2ad7760328148397346d10c7b2004271249db4 version=v1.7.23 May 13 23:23:23.260917 containerd[1467]: time="2025-05-13T23:23:23.260862237Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 May 13 23:23:23.262496 containerd[1467]: time="2025-05-13T23:23:23.262441841Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." 
error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.89-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 May 13 23:23:23.262496 containerd[1467]: time="2025-05-13T23:23:23.262477336Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 May 13 23:23:23.262496 containerd[1467]: time="2025-05-13T23:23:23.262493916Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 May 13 23:23:23.262696 containerd[1467]: time="2025-05-13T23:23:23.262658697Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1 May 13 23:23:23.262724 containerd[1467]: time="2025-05-13T23:23:23.262697466Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1 May 13 23:23:23.262773 containerd[1467]: time="2025-05-13T23:23:23.262757279Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1 May 13 23:23:23.262794 containerd[1467]: time="2025-05-13T23:23:23.262774514Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 May 13 23:23:23.263956 containerd[1467]: time="2025-05-13T23:23:23.263924051Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 May 13 23:23:23.264029 containerd[1467]: time="2025-05-13T23:23:23.263971786Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 May 13 23:23:23.264029 containerd[1467]: time="2025-05-13T23:23:23.263987957Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1 May 13 23:23:23.264029 containerd[1467]: time="2025-05-13T23:23:23.263997291Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 May 13 23:23:23.264318 containerd[1467]: time="2025-05-13T23:23:23.264088054Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 May 13 23:23:23.264318 containerd[1467]: time="2025-05-13T23:23:23.264311214Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 May 13 23:23:23.264460 containerd[1467]: time="2025-05-13T23:23:23.264438577Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 May 13 23:23:23.264460 containerd[1467]: time="2025-05-13T23:23:23.264459292Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 May 13 23:23:23.264560 containerd[1467]: time="2025-05-13T23:23:23.264543013Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." 
type=io.containerd.metadata.v1 May 13 23:23:23.264608 containerd[1467]: time="2025-05-13T23:23:23.264595497Z" level=info msg="metadata content store policy set" policy=shared May 13 23:23:23.267733 containerd[1467]: time="2025-05-13T23:23:23.267701567Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 May 13 23:23:23.267808 containerd[1467]: time="2025-05-13T23:23:23.267751349Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 May 13 23:23:23.267808 containerd[1467]: time="2025-05-13T23:23:23.267767111Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1 May 13 23:23:23.267808 containerd[1467]: time="2025-05-13T23:23:23.267789341Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1 May 13 23:23:23.267808 containerd[1467]: time="2025-05-13T23:23:23.267807477Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 May 13 23:23:23.267977 containerd[1467]: time="2025-05-13T23:23:23.267940243Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 May 13 23:23:23.269088 containerd[1467]: time="2025-05-13T23:23:23.268298504Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 May 13 23:23:23.269088 containerd[1467]: time="2025-05-13T23:23:23.268457553Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2 May 13 23:23:23.269088 containerd[1467]: time="2025-05-13T23:23:23.268475771Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1 May 13 23:23:23.269088 containerd[1467]: time="2025-05-13T23:23:23.268491778Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1 May 13 23:23:23.269088 containerd[1467]: time="2025-05-13T23:23:23.268505493Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 May 13 23:23:23.269088 containerd[1467]: time="2025-05-13T23:23:23.268517857Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 May 13 23:23:23.269088 containerd[1467]: time="2025-05-13T23:23:23.268530179Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 May 13 23:23:23.269088 containerd[1467]: time="2025-05-13T23:23:23.268543894Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 May 13 23:23:23.269088 containerd[1467]: time="2025-05-13T23:23:23.268558182Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 May 13 23:23:23.269088 containerd[1467]: time="2025-05-13T23:23:23.268570791Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 May 13 23:23:23.269088 containerd[1467]: time="2025-05-13T23:23:23.268583646Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 May 13 23:23:23.269088 containerd[1467]: time="2025-05-13T23:23:23.268595641Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." 
type=io.containerd.service.v1 May 13 23:23:23.269088 containerd[1467]: time="2025-05-13T23:23:23.268619591Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 May 13 23:23:23.269088 containerd[1467]: time="2025-05-13T23:23:23.268634002Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 May 13 23:23:23.269376 containerd[1467]: time="2025-05-13T23:23:23.268646693Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 May 13 23:23:23.269376 containerd[1467]: time="2025-05-13T23:23:23.268659016Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 May 13 23:23:23.269376 containerd[1467]: time="2025-05-13T23:23:23.268672239Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 May 13 23:23:23.269376 containerd[1467]: time="2025-05-13T23:23:23.268688083Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 May 13 23:23:23.269376 containerd[1467]: time="2025-05-13T23:23:23.268699464Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 May 13 23:23:23.269376 containerd[1467]: time="2025-05-13T23:23:23.268713219Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 May 13 23:23:23.269376 containerd[1467]: time="2025-05-13T23:23:23.268725829Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1 May 13 23:23:23.269376 containerd[1467]: time="2025-05-13T23:23:23.268739339Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1 May 13 23:23:23.269376 containerd[1467]: time="2025-05-13T23:23:23.268751948Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 May 13 23:23:23.269376 containerd[1467]: time="2025-05-13T23:23:23.268764230Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1 May 13 23:23:23.269376 containerd[1467]: time="2025-05-13T23:23:23.268776593Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 May 13 23:23:23.269376 containerd[1467]: time="2025-05-13T23:23:23.268790513Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1 May 13 23:23:23.269376 containerd[1467]: time="2025-05-13T23:23:23.268829610Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1 May 13 23:23:23.269376 containerd[1467]: time="2025-05-13T23:23:23.268844839Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 May 13 23:23:23.269376 containerd[1467]: time="2025-05-13T23:23:23.268857162Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 May 13 23:23:23.269706 containerd[1467]: time="2025-05-13T23:23:23.269684219Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 May 13 23:23:23.269868 containerd[1467]: time="2025-05-13T23:23:23.269847813Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." 
error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1 May 13 23:23:23.269929 containerd[1467]: time="2025-05-13T23:23:23.269916263Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 May 13 23:23:23.269984 containerd[1467]: time="2025-05-13T23:23:23.269970467Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1 May 13 23:23:23.270044 containerd[1467]: time="2025-05-13T23:23:23.270030607Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 May 13 23:23:23.270109 containerd[1467]: time="2025-05-13T23:23:23.270095619Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1 May 13 23:23:23.270174 containerd[1467]: time="2025-05-13T23:23:23.270162145Z" level=info msg="NRI interface is disabled by configuration." May 13 23:23:23.270234 containerd[1467]: time="2025-05-13T23:23:23.270221753Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1 May 13 23:23:23.270682 containerd[1467]: time="2025-05-13T23:23:23.270629714Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false 
IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" May 13 23:23:23.270878 containerd[1467]: time="2025-05-13T23:23:23.270843417Z" level=info msg="Connect containerd service" May 13 23:23:23.270982 containerd[1467]: time="2025-05-13T23:23:23.270966931Z" level=info msg="using legacy CRI server" May 13 23:23:23.271155 containerd[1467]: time="2025-05-13T23:23:23.271121559Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" May 13 23:23:23.271440 containerd[1467]: time="2025-05-13T23:23:23.271421440Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" May 13 23:23:23.272295 containerd[1467]: time="2025-05-13T23:23:23.272261025Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" May 13 23:23:23.272623 containerd[1467]: time="2025-05-13T23:23:23.272591815Z" level=info msg="Start subscribing containerd event" May 13 23:23:23.272713 containerd[1467]: time="2025-05-13T23:23:23.272699444Z" level=info msg="Start recovering state" May 13 23:23:23.272841 containerd[1467]: time="2025-05-13T23:23:23.272826315Z" level=info msg="Start event monitor" May 13 23:23:23.273410 containerd[1467]: time="2025-05-13T23:23:23.273390787Z" level=info msg="Start snapshots syncer" May 13 23:23:23.273481 containerd[1467]: time="2025-05-13T23:23:23.273469145Z" level=info msg="Start cni network conf syncer for default" May 13 23:23:23.273529 containerd[1467]: time="2025-05-13T23:23:23.273517454Z" level=info msg="Start streaming server" May 13 23:23:23.273737 containerd[1467]: time="2025-05-13T23:23:23.273364791Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc May 13 23:23:23.276779 containerd[1467]: time="2025-05-13T23:23:23.273832154Z" level=info msg=serving... address=/run/containerd/containerd.sock May 13 23:23:23.276779 containerd[1467]: time="2025-05-13T23:23:23.273895406Z" level=info msg="containerd successfully booted in 0.040047s" May 13 23:23:23.275285 systemd[1]: Started containerd.service - containerd container runtime. May 13 23:23:23.960965 sshd_keygen[1455]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 May 13 23:23:23.979445 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. May 13 23:23:23.987521 systemd[1]: Starting issuegen.service - Generate /run/issue... May 13 23:23:23.992747 systemd[1]: issuegen.service: Deactivated successfully. May 13 23:23:23.992970 systemd[1]: Finished issuegen.service - Generate /run/issue. May 13 23:23:23.995679 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... May 13 23:23:24.010212 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. May 13 23:23:24.013071 systemd[1]: Started getty@tty1.service - Getty on tty1. May 13 23:23:24.015288 systemd[1]: Started serial-getty@ttyAMA0.service - Serial Getty on ttyAMA0. May 13 23:23:24.016488 systemd[1]: Reached target getty.target - Login Prompts. May 13 23:23:24.407333 systemd-networkd[1400]: eth0: Gained IPv6LL May 13 23:23:24.409987 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. 
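For orientation, the "Start cri plugin with config" entry at 23:23:23 above is containerd's dump of its effective CRI settings: overlayfs snapshotter, runc via io.containerd.runc.v2 with SystemdCgroup:true, sandbox image registry.k8s.io/pause:3.8, and CNI binaries/configs under /opt/cni/bin and /etc/cni/net.d. As a sketch only, those same values are usually expressed in a containerd 1.7 config file along the following lines; the path /etc/containerd/config.toml and the exact nesting are assumptions, not recovered from this host:

    # Sketch: mirrors the values containerd logged above; not this host's actual file.
    version = 2
    [plugins."io.containerd.grpc.v1.cri"]
      sandbox_image = "registry.k8s.io/pause:3.8"
      [plugins."io.containerd.grpc.v1.cri".containerd]
        snapshotter = "overlayfs"
        default_runtime_name = "runc"
        [plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc]
          runtime_type = "io.containerd.runc.v2"
          [plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc.options]
            SystemdCgroup = true
      [plugins."io.containerd.grpc.v1.cri".cni]
        bin_dir = "/opt/cni/bin"
        conf_dir = "/etc/cni/net.d"

The "failed to load cni during init" error in the same block is consistent with /etc/cni/net.d still being empty at this stage; it is populated later, once a CNI provider installs its configuration.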
May 13 23:23:24.411897 systemd[1]: Reached target network-online.target - Network is Online. May 13 23:23:24.424474 systemd[1]: Starting coreos-metadata.service - QEMU metadata agent... May 13 23:23:24.426907 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... May 13 23:23:24.429115 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... May 13 23:23:24.445186 systemd[1]: coreos-metadata.service: Deactivated successfully. May 13 23:23:24.445450 systemd[1]: Finished coreos-metadata.service - QEMU metadata agent. May 13 23:23:24.447392 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. May 13 23:23:24.455252 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. May 13 23:23:24.913088 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. May 13 23:23:24.914647 systemd[1]: Reached target multi-user.target - Multi-User System. May 13 23:23:24.916843 (kubelet)[1542]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS May 13 23:23:24.919272 systemd[1]: Startup finished in 602ms (kernel) + 4.493s (initrd) + 3.768s (userspace) = 8.864s. May 13 23:23:25.390625 kubelet[1542]: E0513 23:23:25.390500 1542 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" May 13 23:23:25.393270 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE May 13 23:23:25.393419 systemd[1]: kubelet.service: Failed with result 'exit-code'. May 13 23:23:25.395259 systemd[1]: kubelet.service: Consumed 820ms CPU time, 245.4M memory peak. May 13 23:23:29.612811 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. May 13 23:23:29.614306 systemd[1]: Started sshd@0-10.0.0.24:22-10.0.0.1:40640.service - OpenSSH per-connection server daemon (10.0.0.1:40640). May 13 23:23:29.680043 sshd[1557]: Accepted publickey for core from 10.0.0.1 port 40640 ssh2: RSA SHA256:nklxXyWg08rxtyckaZxQtXQofegmoqb8BlqxAIMDaTw May 13 23:23:29.681978 sshd-session[1557]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 13 23:23:29.689549 systemd[1]: Created slice user-500.slice - User Slice of UID 500. May 13 23:23:29.699412 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... May 13 23:23:29.705039 systemd-logind[1446]: New session 1 of user core. May 13 23:23:29.708514 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. May 13 23:23:29.710906 systemd[1]: Starting user@500.service - User Manager for UID 500... May 13 23:23:29.717132 (systemd)[1561]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) May 13 23:23:29.719243 systemd-logind[1446]: New session c1 of user core. May 13 23:23:29.818345 systemd[1561]: Queued start job for default target default.target. May 13 23:23:29.830060 systemd[1561]: Created slice app.slice - User Application Slice. May 13 23:23:29.830091 systemd[1561]: Reached target paths.target - Paths. May 13 23:23:29.830137 systemd[1561]: Reached target timers.target - Timers. May 13 23:23:29.831386 systemd[1561]: Starting dbus.socket - D-Bus User Message Bus Socket... 
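The kubelet failure at 23:23:25 above is the expected first-boot state: the unit starts the kubelet against a KubeletConfiguration at /var/lib/kubelet/config.yaml, and that file does not exist until the node is bootstrapped (kubeadm, for example, typically writes it during join). Purely as a sketch of what such a file contains, consistent with settings this kubelet later logs (systemd cgroup driver, static pod path /etc/kubernetes/manifests, containerd socket), it might look like the following; the concrete values are illustrative assumptions, not recovered from this host:

    # Sketch of a minimal /var/lib/kubelet/config.yaml; values are illustrative.
    apiVersion: kubelet.config.k8s.io/v1beta1
    kind: KubeletConfiguration
    cgroupDriver: systemd
    staticPodPath: /etc/kubernetes/manifests
    containerRuntimeEndpoint: unix:///run/containerd/containerd.sock

Setting containerRuntimeEndpoint here rather than on the command line also addresses the flag-deprecation warnings the kubelet prints later in this log.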
May 13 23:23:29.840031 systemd[1561]: Listening on dbus.socket - D-Bus User Message Bus Socket. May 13 23:23:29.840097 systemd[1561]: Reached target sockets.target - Sockets. May 13 23:23:29.840135 systemd[1561]: Reached target basic.target - Basic System. May 13 23:23:29.840205 systemd[1561]: Reached target default.target - Main User Target. May 13 23:23:29.840243 systemd[1561]: Startup finished in 115ms. May 13 23:23:29.840380 systemd[1]: Started user@500.service - User Manager for UID 500. May 13 23:23:29.841589 systemd[1]: Started session-1.scope - Session 1 of User core. May 13 23:23:29.901685 systemd[1]: Started sshd@1-10.0.0.24:22-10.0.0.1:40648.service - OpenSSH per-connection server daemon (10.0.0.1:40648). May 13 23:23:29.943711 sshd[1572]: Accepted publickey for core from 10.0.0.1 port 40648 ssh2: RSA SHA256:nklxXyWg08rxtyckaZxQtXQofegmoqb8BlqxAIMDaTw May 13 23:23:29.944847 sshd-session[1572]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 13 23:23:29.948835 systemd-logind[1446]: New session 2 of user core. May 13 23:23:29.956343 systemd[1]: Started session-2.scope - Session 2 of User core. May 13 23:23:30.009586 sshd[1574]: Connection closed by 10.0.0.1 port 40648 May 13 23:23:30.009948 sshd-session[1572]: pam_unix(sshd:session): session closed for user core May 13 23:23:30.024337 systemd[1]: sshd@1-10.0.0.24:22-10.0.0.1:40648.service: Deactivated successfully. May 13 23:23:30.025832 systemd[1]: session-2.scope: Deactivated successfully. May 13 23:23:30.027409 systemd-logind[1446]: Session 2 logged out. Waiting for processes to exit. May 13 23:23:30.028285 systemd[1]: Started sshd@2-10.0.0.24:22-10.0.0.1:40652.service - OpenSSH per-connection server daemon (10.0.0.1:40652). May 13 23:23:30.028969 systemd-logind[1446]: Removed session 2. May 13 23:23:30.071362 sshd[1579]: Accepted publickey for core from 10.0.0.1 port 40652 ssh2: RSA SHA256:nklxXyWg08rxtyckaZxQtXQofegmoqb8BlqxAIMDaTw May 13 23:23:30.072545 sshd-session[1579]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 13 23:23:30.076665 systemd-logind[1446]: New session 3 of user core. May 13 23:23:30.086286 systemd[1]: Started session-3.scope - Session 3 of User core. May 13 23:23:30.134657 sshd[1582]: Connection closed by 10.0.0.1 port 40652 May 13 23:23:30.134920 sshd-session[1579]: pam_unix(sshd:session): session closed for user core May 13 23:23:30.156271 systemd[1]: sshd@2-10.0.0.24:22-10.0.0.1:40652.service: Deactivated successfully. May 13 23:23:30.157656 systemd[1]: session-3.scope: Deactivated successfully. May 13 23:23:30.159446 systemd-logind[1446]: Session 3 logged out. Waiting for processes to exit. May 13 23:23:30.170570 systemd[1]: Started sshd@3-10.0.0.24:22-10.0.0.1:40654.service - OpenSSH per-connection server daemon (10.0.0.1:40654). May 13 23:23:30.171923 systemd-logind[1446]: Removed session 3. May 13 23:23:30.210906 sshd[1587]: Accepted publickey for core from 10.0.0.1 port 40654 ssh2: RSA SHA256:nklxXyWg08rxtyckaZxQtXQofegmoqb8BlqxAIMDaTw May 13 23:23:30.212204 sshd-session[1587]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 13 23:23:30.216692 systemd-logind[1446]: New session 4 of user core. May 13 23:23:30.226330 systemd[1]: Started session-4.scope - Session 4 of User core. 
May 13 23:23:30.278529 sshd[1590]: Connection closed by 10.0.0.1 port 40654 May 13 23:23:30.278969 sshd-session[1587]: pam_unix(sshd:session): session closed for user core May 13 23:23:30.292281 systemd[1]: sshd@3-10.0.0.24:22-10.0.0.1:40654.service: Deactivated successfully. May 13 23:23:30.293761 systemd[1]: session-4.scope: Deactivated successfully. May 13 23:23:30.294980 systemd-logind[1446]: Session 4 logged out. Waiting for processes to exit. May 13 23:23:30.303417 systemd[1]: Started sshd@4-10.0.0.24:22-10.0.0.1:40656.service - OpenSSH per-connection server daemon (10.0.0.1:40656). May 13 23:23:30.304351 systemd-logind[1446]: Removed session 4. May 13 23:23:30.343080 sshd[1595]: Accepted publickey for core from 10.0.0.1 port 40656 ssh2: RSA SHA256:nklxXyWg08rxtyckaZxQtXQofegmoqb8BlqxAIMDaTw May 13 23:23:30.344529 sshd-session[1595]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 13 23:23:30.348895 systemd-logind[1446]: New session 5 of user core. May 13 23:23:30.359360 systemd[1]: Started session-5.scope - Session 5 of User core. May 13 23:23:30.417276 sudo[1599]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 May 13 23:23:30.418313 sudo[1599]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) May 13 23:23:30.431992 sudo[1599]: pam_unix(sudo:session): session closed for user root May 13 23:23:30.434112 sshd[1598]: Connection closed by 10.0.0.1 port 40656 May 13 23:23:30.433976 sshd-session[1595]: pam_unix(sshd:session): session closed for user core May 13 23:23:30.444330 systemd[1]: sshd@4-10.0.0.24:22-10.0.0.1:40656.service: Deactivated successfully. May 13 23:23:30.445835 systemd[1]: session-5.scope: Deactivated successfully. May 13 23:23:30.447346 systemd-logind[1446]: Session 5 logged out. Waiting for processes to exit. May 13 23:23:30.449188 systemd[1]: Started sshd@5-10.0.0.24:22-10.0.0.1:40666.service - OpenSSH per-connection server daemon (10.0.0.1:40666). May 13 23:23:30.450023 systemd-logind[1446]: Removed session 5. May 13 23:23:30.492230 sshd[1604]: Accepted publickey for core from 10.0.0.1 port 40666 ssh2: RSA SHA256:nklxXyWg08rxtyckaZxQtXQofegmoqb8BlqxAIMDaTw May 13 23:23:30.493547 sshd-session[1604]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 13 23:23:30.499594 systemd-logind[1446]: New session 6 of user core. May 13 23:23:30.505311 systemd[1]: Started session-6.scope - Session 6 of User core. May 13 23:23:30.558295 sudo[1609]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules May 13 23:23:30.559311 sudo[1609]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) May 13 23:23:30.564323 sudo[1609]: pam_unix(sudo:session): session closed for user root May 13 23:23:30.569205 sudo[1608]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules May 13 23:23:30.569472 sudo[1608]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) May 13 23:23:30.593876 systemd[1]: Starting audit-rules.service - Load Audit Rules... May 13 23:23:30.617385 augenrules[1631]: No rules May 13 23:23:30.618588 systemd[1]: audit-rules.service: Deactivated successfully. May 13 23:23:30.618803 systemd[1]: Finished audit-rules.service - Load Audit Rules. 
May 13 23:23:30.620119 sudo[1608]: pam_unix(sudo:session): session closed for user root May 13 23:23:30.621328 sshd[1607]: Connection closed by 10.0.0.1 port 40666 May 13 23:23:30.621727 sshd-session[1604]: pam_unix(sshd:session): session closed for user core May 13 23:23:30.633549 systemd[1]: sshd@5-10.0.0.24:22-10.0.0.1:40666.service: Deactivated successfully. May 13 23:23:30.634895 systemd[1]: session-6.scope: Deactivated successfully. May 13 23:23:30.635499 systemd-logind[1446]: Session 6 logged out. Waiting for processes to exit. May 13 23:23:30.645526 systemd[1]: Started sshd@6-10.0.0.24:22-10.0.0.1:40680.service - OpenSSH per-connection server daemon (10.0.0.1:40680). May 13 23:23:30.646603 systemd-logind[1446]: Removed session 6. May 13 23:23:30.683767 sshd[1639]: Accepted publickey for core from 10.0.0.1 port 40680 ssh2: RSA SHA256:nklxXyWg08rxtyckaZxQtXQofegmoqb8BlqxAIMDaTw May 13 23:23:30.685072 sshd-session[1639]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 13 23:23:30.689574 systemd-logind[1446]: New session 7 of user core. May 13 23:23:30.699308 systemd[1]: Started session-7.scope - Session 7 of User core. May 13 23:23:30.750932 sudo[1643]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh May 13 23:23:30.751230 sudo[1643]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) May 13 23:23:30.772509 systemd[1]: Starting coreos-metadata.service - QEMU metadata agent... May 13 23:23:30.788445 systemd[1]: coreos-metadata.service: Deactivated successfully. May 13 23:23:30.789252 systemd[1]: Finished coreos-metadata.service - QEMU metadata agent. May 13 23:23:31.291878 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. May 13 23:23:31.292208 systemd[1]: kubelet.service: Consumed 820ms CPU time, 245.4M memory peak. May 13 23:23:31.300399 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... May 13 23:23:31.315071 systemd[1]: Reload requested from client PID 1692 ('systemctl') (unit session-7.scope)... May 13 23:23:31.315087 systemd[1]: Reloading... May 13 23:23:31.381162 zram_generator::config[1733]: No configuration found. May 13 23:23:31.559313 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. May 13 23:23:31.631167 systemd[1]: Reloading finished in 315 ms. May 13 23:23:31.664933 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. May 13 23:23:31.667691 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... May 13 23:23:31.668245 systemd[1]: kubelet.service: Deactivated successfully. May 13 23:23:31.668448 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. May 13 23:23:31.668483 systemd[1]: kubelet.service: Consumed 77ms CPU time, 82.4M memory peak. May 13 23:23:31.669799 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... May 13 23:23:31.763416 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. May 13 23:23:31.766210 (kubelet)[1782]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS May 13 23:23:31.803542 kubelet[1782]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. 
See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. May 13 23:23:31.803542 kubelet[1782]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. May 13 23:23:31.803542 kubelet[1782]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. May 13 23:23:31.804449 kubelet[1782]: I0513 23:23:31.804396 1782 server.go:205] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" May 13 23:23:32.530527 kubelet[1782]: I0513 23:23:32.530490 1782 server.go:484] "Kubelet version" kubeletVersion="v1.30.1" May 13 23:23:32.531294 kubelet[1782]: I0513 23:23:32.530704 1782 server.go:486] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" May 13 23:23:32.531294 kubelet[1782]: I0513 23:23:32.530926 1782 server.go:927] "Client rotation is on, will bootstrap in background" May 13 23:23:32.567181 kubelet[1782]: I0513 23:23:32.567074 1782 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" May 13 23:23:32.579206 kubelet[1782]: I0513 23:23:32.579169 1782 server.go:742] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" May 13 23:23:32.579895 kubelet[1782]: I0513 23:23:32.579618 1782 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] May 13 23:23:32.579895 kubelet[1782]: I0513 23:23:32.579656 1782 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"10.0.0.24","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null} May 13 23:23:32.579895 kubelet[1782]: I0513 23:23:32.579883 1782 topology_manager.go:138] "Creating topology manager with none policy" May 13 23:23:32.579895 kubelet[1782]: I0513 23:23:32.579894 1782 
container_manager_linux.go:301] "Creating device plugin manager" May 13 23:23:32.580231 kubelet[1782]: I0513 23:23:32.580197 1782 state_mem.go:36] "Initialized new in-memory state store" May 13 23:23:32.581651 kubelet[1782]: I0513 23:23:32.581621 1782 kubelet.go:400] "Attempting to sync node with API server" May 13 23:23:32.581696 kubelet[1782]: I0513 23:23:32.581686 1782 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests" May 13 23:23:32.581896 kubelet[1782]: I0513 23:23:32.581870 1782 kubelet.go:312] "Adding apiserver pod source" May 13 23:23:32.581896 kubelet[1782]: I0513 23:23:32.581891 1782 apiserver.go:42] "Waiting for node sync before watching apiserver pods" May 13 23:23:32.582644 kubelet[1782]: E0513 23:23:32.582102 1782 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 13 23:23:32.582644 kubelet[1782]: E0513 23:23:32.582104 1782 file.go:98] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 13 23:23:32.585096 kubelet[1782]: I0513 23:23:32.585055 1782 kuberuntime_manager.go:261] "Container runtime initialized" containerRuntime="containerd" version="v1.7.23" apiVersion="v1" May 13 23:23:32.585475 kubelet[1782]: I0513 23:23:32.585461 1782 kubelet.go:815] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" May 13 23:23:32.585573 kubelet[1782]: W0513 23:23:32.585561 1782 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. May 13 23:23:32.586442 kubelet[1782]: I0513 23:23:32.586356 1782 server.go:1264] "Started kubelet" May 13 23:23:32.587001 kubelet[1782]: I0513 23:23:32.586932 1782 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 May 13 23:23:32.588731 kubelet[1782]: I0513 23:23:32.587256 1782 server.go:227] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" May 13 23:23:32.588731 kubelet[1782]: I0513 23:23:32.587300 1782 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 May 13 23:23:32.588731 kubelet[1782]: I0513 23:23:32.587473 1782 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" May 13 23:23:32.588731 kubelet[1782]: I0513 23:23:32.588429 1782 server.go:455] "Adding debug handlers to kubelet server" May 13 23:23:32.589383 kubelet[1782]: I0513 23:23:32.589368 1782 volume_manager.go:291] "Starting Kubelet Volume Manager" May 13 23:23:32.589565 kubelet[1782]: I0513 23:23:32.589549 1782 desired_state_of_world_populator.go:149] "Desired state populator starts to run" May 13 23:23:32.590218 kubelet[1782]: I0513 23:23:32.589772 1782 reconciler.go:26] "Reconciler: start to sync state" May 13 23:23:32.590971 kubelet[1782]: I0513 23:23:32.590941 1782 factory.go:221] Registration of the systemd container factory successfully May 13 23:23:32.591084 kubelet[1782]: I0513 23:23:32.591059 1782 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory May 13 23:23:32.596666 kubelet[1782]: I0513 23:23:32.594685 1782 factory.go:221] Registration of the containerd container factory successfully May 13 23:23:32.596666 kubelet[1782]: E0513 23:23:32.592308 1782 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API 
group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{10.0.0.24.183f39b0f25e3a4f default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:10.0.0.24,UID:10.0.0.24,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:10.0.0.24,},FirstTimestamp:2025-05-13 23:23:32.586330703 +0000 UTC m=+0.817091135,LastTimestamp:2025-05-13 23:23:32.586330703 +0000 UTC m=+0.817091135,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:10.0.0.24,}" May 13 23:23:32.599754 kubelet[1782]: E0513 23:23:32.599721 1782 kubelet.go:1467] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" May 13 23:23:32.603129 kubelet[1782]: E0513 23:23:32.603100 1782 controller.go:145] "Failed to ensure lease exists, will retry" err="leases.coordination.k8s.io \"10.0.0.24\" is forbidden: User \"system:anonymous\" cannot get resource \"leases\" in API group \"coordination.k8s.io\" in the namespace \"kube-node-lease\"" interval="200ms" May 13 23:23:32.603382 kubelet[1782]: W0513 23:23:32.603359 1782 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:anonymous" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope May 13 23:23:32.603475 kubelet[1782]: E0513 23:23:32.603463 1782 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:anonymous" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope May 13 23:23:32.603609 kubelet[1782]: W0513 23:23:32.603595 1782 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes "10.0.0.24" is forbidden: User "system:anonymous" cannot list resource "nodes" in API group "" at the cluster scope May 13 23:23:32.603680 kubelet[1782]: E0513 23:23:32.603666 1782 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes "10.0.0.24" is forbidden: User "system:anonymous" cannot list resource "nodes" in API group "" at the cluster scope May 13 23:23:32.603752 kubelet[1782]: W0513 23:23:32.603686 1782 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:anonymous" cannot list resource "services" in API group "" at the cluster scope May 13 23:23:32.603811 kubelet[1782]: E0513 23:23:32.603802 1782 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:anonymous" cannot list resource "services" in API group "" at the cluster scope May 13 23:23:32.605300 kubelet[1782]: E0513 23:23:32.605207 1782 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{10.0.0.24.183f39b0f32a591d default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:10.0.0.24,UID:10.0.0.24,APIVersion:,ResourceVersion:,FieldPath:,},Reason:InvalidDiskCapacity,Message:invalid capacity 0 on image 
filesystem,Source:EventSource{Component:kubelet,Host:10.0.0.24,},FirstTimestamp:2025-05-13 23:23:32.599707933 +0000 UTC m=+0.830468365,LastTimestamp:2025-05-13 23:23:32.599707933 +0000 UTC m=+0.830468365,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:10.0.0.24,}" May 13 23:23:32.605539 kubelet[1782]: I0513 23:23:32.605526 1782 cpu_manager.go:214] "Starting CPU manager" policy="none" May 13 23:23:32.605632 kubelet[1782]: I0513 23:23:32.605621 1782 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" May 13 23:23:32.605711 kubelet[1782]: I0513 23:23:32.605702 1782 state_mem.go:36] "Initialized new in-memory state store" May 13 23:23:32.671297 kubelet[1782]: I0513 23:23:32.671260 1782 policy_none.go:49] "None policy: Start" May 13 23:23:32.672018 kubelet[1782]: I0513 23:23:32.671996 1782 memory_manager.go:170] "Starting memorymanager" policy="None" May 13 23:23:32.672074 kubelet[1782]: I0513 23:23:32.672028 1782 state_mem.go:35] "Initializing new in-memory state store" May 13 23:23:32.680046 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. May 13 23:23:32.690706 kubelet[1782]: I0513 23:23:32.690678 1782 kubelet_node_status.go:73] "Attempting to register node" node="10.0.0.24" May 13 23:23:32.695059 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. May 13 23:23:32.695315 kubelet[1782]: I0513 23:23:32.695278 1782 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" May 13 23:23:32.696862 kubelet[1782]: I0513 23:23:32.696842 1782 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" May 13 23:23:32.697050 kubelet[1782]: I0513 23:23:32.697037 1782 status_manager.go:217] "Starting to sync pod status with apiserver" May 13 23:23:32.697116 kubelet[1782]: I0513 23:23:32.697106 1782 kubelet.go:2337] "Starting kubelet main sync loop" May 13 23:23:32.697253 kubelet[1782]: E0513 23:23:32.697235 1782 kubelet.go:2361] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" May 13 23:23:32.697331 kubelet[1782]: I0513 23:23:32.697253 1782 kubelet_node_status.go:76] "Successfully registered node" node="10.0.0.24" May 13 23:23:32.699316 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. 
May 13 23:23:32.709213 kubelet[1782]: I0513 23:23:32.709174 1782 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" May 13 23:23:32.709435 kubelet[1782]: I0513 23:23:32.709387 1782 container_log_manager.go:186] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" May 13 23:23:32.709515 kubelet[1782]: I0513 23:23:32.709496 1782 plugin_manager.go:118] "Starting Kubelet Plugin Manager" May 13 23:23:32.710548 kubelet[1782]: E0513 23:23:32.710507 1782 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"10.0.0.24\" not found" May 13 23:23:32.710983 kubelet[1782]: E0513 23:23:32.710964 1782 eviction_manager.go:282] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"10.0.0.24\" not found" May 13 23:23:32.783760 sudo[1643]: pam_unix(sudo:session): session closed for user root May 13 23:23:32.785547 sshd[1642]: Connection closed by 10.0.0.1 port 40680 May 13 23:23:32.785877 sshd-session[1639]: pam_unix(sshd:session): session closed for user core May 13 23:23:32.788906 systemd[1]: sshd@6-10.0.0.24:22-10.0.0.1:40680.service: Deactivated successfully. May 13 23:23:32.791514 systemd[1]: session-7.scope: Deactivated successfully. May 13 23:23:32.791708 systemd[1]: session-7.scope: Consumed 459ms CPU time, 108M memory peak. May 13 23:23:32.792705 systemd-logind[1446]: Session 7 logged out. Waiting for processes to exit. May 13 23:23:32.793459 systemd-logind[1446]: Removed session 7. May 13 23:23:32.810908 kubelet[1782]: E0513 23:23:32.810869 1782 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"10.0.0.24\" not found" May 13 23:23:32.911805 kubelet[1782]: E0513 23:23:32.911761 1782 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"10.0.0.24\" not found" May 13 23:23:33.011964 kubelet[1782]: E0513 23:23:33.011901 1782 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"10.0.0.24\" not found" May 13 23:23:33.112260 kubelet[1782]: E0513 23:23:33.112160 1782 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"10.0.0.24\" not found" May 13 23:23:33.212827 kubelet[1782]: E0513 23:23:33.212798 1782 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"10.0.0.24\" not found" May 13 23:23:33.313656 kubelet[1782]: E0513 23:23:33.313625 1782 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"10.0.0.24\" not found" May 13 23:23:33.416050 kubelet[1782]: E0513 23:23:33.415921 1782 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"10.0.0.24\" not found" May 13 23:23:33.516070 kubelet[1782]: E0513 23:23:33.516009 1782 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"10.0.0.24\" not found" May 13 23:23:33.533275 kubelet[1782]: I0513 23:23:33.533233 1782 transport.go:147] "Certificate rotation detected, shutting down client connections to start using new credentials" May 13 23:23:33.533537 kubelet[1782]: W0513 23:23:33.533497 1782 reflector.go:470] k8s.io/client-go/informers/factory.go:160: watch of *v1.CSIDriver ended with: very short watch: k8s.io/client-go/informers/factory.go:160: Unexpected watch close - watch lasted less than a second and no items received May 13 23:23:33.533599 kubelet[1782]: W0513 23:23:33.533508 1782 reflector.go:470] k8s.io/client-go/informers/factory.go:160: watch of 
*v1.RuntimeClass ended with: very short watch: k8s.io/client-go/informers/factory.go:160: Unexpected watch close - watch lasted less than a second and no items received May 13 23:23:33.582870 kubelet[1782]: E0513 23:23:33.582824 1782 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 13 23:23:33.616487 kubelet[1782]: E0513 23:23:33.616450 1782 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"10.0.0.24\" not found" May 13 23:23:33.717597 kubelet[1782]: E0513 23:23:33.717484 1782 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"10.0.0.24\" not found" May 13 23:23:33.818487 kubelet[1782]: E0513 23:23:33.818443 1782 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"10.0.0.24\" not found" May 13 23:23:33.919327 kubelet[1782]: E0513 23:23:33.919295 1782 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"10.0.0.24\" not found" May 13 23:23:34.020355 kubelet[1782]: E0513 23:23:34.020227 1782 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"10.0.0.24\" not found" May 13 23:23:34.120716 kubelet[1782]: E0513 23:23:34.120668 1782 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"10.0.0.24\" not found" May 13 23:23:34.222068 kubelet[1782]: I0513 23:23:34.222038 1782 kuberuntime_manager.go:1523] "Updating runtime config through cri with podcidr" CIDR="192.168.1.0/24" May 13 23:23:34.222534 containerd[1467]: time="2025-05-13T23:23:34.222350496Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." May 13 23:23:34.222813 kubelet[1782]: I0513 23:23:34.222547 1782 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.1.0/24" May 13 23:23:34.583730 kubelet[1782]: E0513 23:23:34.583681 1782 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 13 23:23:34.583730 kubelet[1782]: I0513 23:23:34.583703 1782 apiserver.go:52] "Watching apiserver" May 13 23:23:34.592756 kubelet[1782]: I0513 23:23:34.592684 1782 topology_manager.go:215] "Topology Admit Handler" podUID="c2395312-14f6-4158-9264-a018465c6e9a" podNamespace="calico-system" podName="calico-node-g9kqs" May 13 23:23:34.592884 kubelet[1782]: I0513 23:23:34.592784 1782 topology_manager.go:215] "Topology Admit Handler" podUID="cdfb60ad-a39c-4082-86ea-908d79fa0d73" podNamespace="calico-system" podName="csi-node-driver-tvqd8" May 13 23:23:34.592884 kubelet[1782]: I0513 23:23:34.592857 1782 topology_manager.go:215] "Topology Admit Handler" podUID="facf3b50-edcb-457e-8bf2-a0ea5220a243" podNamespace="kube-system" podName="kube-proxy-br7ps" May 13 23:23:34.593437 kubelet[1782]: E0513 23:23:34.593379 1782 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-tvqd8" podUID="cdfb60ad-a39c-4082-86ea-908d79fa0d73" May 13 23:23:34.600614 systemd[1]: Created slice kubepods-besteffort-podc2395312_14f6_4158_9264_a018465c6e9a.slice - libcontainer container kubepods-besteffort-podc2395312_14f6_4158_9264_a018465c6e9a.slice. 
May 13 23:23:34.616559 systemd[1]: Created slice kubepods-besteffort-podfacf3b50_edcb_457e_8bf2_a0ea5220a243.slice - libcontainer container kubepods-besteffort-podfacf3b50_edcb_457e_8bf2_a0ea5220a243.slice. May 13 23:23:34.690970 kubelet[1782]: I0513 23:23:34.690907 1782 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world" May 13 23:23:34.700951 kubelet[1782]: I0513 23:23:34.700915 1782 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/c2395312-14f6-4158-9264-a018465c6e9a-var-run-calico\") pod \"calico-node-g9kqs\" (UID: \"c2395312-14f6-4158-9264-a018465c6e9a\") " pod="calico-system/calico-node-g9kqs" May 13 23:23:34.700951 kubelet[1782]: I0513 23:23:34.700951 1782 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/cdfb60ad-a39c-4082-86ea-908d79fa0d73-socket-dir\") pod \"csi-node-driver-tvqd8\" (UID: \"cdfb60ad-a39c-4082-86ea-908d79fa0d73\") " pod="calico-system/csi-node-driver-tvqd8" May 13 23:23:34.701055 kubelet[1782]: I0513 23:23:34.700977 1782 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6rd7g\" (UniqueName: \"kubernetes.io/projected/cdfb60ad-a39c-4082-86ea-908d79fa0d73-kube-api-access-6rd7g\") pod \"csi-node-driver-tvqd8\" (UID: \"cdfb60ad-a39c-4082-86ea-908d79fa0d73\") " pod="calico-system/csi-node-driver-tvqd8" May 13 23:23:34.701055 kubelet[1782]: I0513 23:23:34.700999 1782 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/facf3b50-edcb-457e-8bf2-a0ea5220a243-xtables-lock\") pod \"kube-proxy-br7ps\" (UID: \"facf3b50-edcb-457e-8bf2-a0ea5220a243\") " pod="kube-system/kube-proxy-br7ps" May 13 23:23:34.701055 kubelet[1782]: I0513 23:23:34.701015 1782 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/facf3b50-edcb-457e-8bf2-a0ea5220a243-lib-modules\") pod \"kube-proxy-br7ps\" (UID: \"facf3b50-edcb-457e-8bf2-a0ea5220a243\") " pod="kube-system/kube-proxy-br7ps" May 13 23:23:34.701055 kubelet[1782]: I0513 23:23:34.701033 1782 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/cdfb60ad-a39c-4082-86ea-908d79fa0d73-kubelet-dir\") pod \"csi-node-driver-tvqd8\" (UID: \"cdfb60ad-a39c-4082-86ea-908d79fa0d73\") " pod="calico-system/csi-node-driver-tvqd8" May 13 23:23:34.701055 kubelet[1782]: I0513 23:23:34.701048 1782 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/c2395312-14f6-4158-9264-a018465c6e9a-lib-modules\") pod \"calico-node-g9kqs\" (UID: \"c2395312-14f6-4158-9264-a018465c6e9a\") " pod="calico-system/calico-node-g9kqs" May 13 23:23:34.701179 kubelet[1782]: I0513 23:23:34.701063 1782 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/c2395312-14f6-4158-9264-a018465c6e9a-policysync\") pod \"calico-node-g9kqs\" (UID: \"c2395312-14f6-4158-9264-a018465c6e9a\") " pod="calico-system/calico-node-g9kqs" May 13 23:23:34.701179 kubelet[1782]: I0513 23:23:34.701077 1782 reconciler_common.go:247] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/c2395312-14f6-4158-9264-a018465c6e9a-cni-net-dir\") pod \"calico-node-g9kqs\" (UID: \"c2395312-14f6-4158-9264-a018465c6e9a\") " pod="calico-system/calico-node-g9kqs" May 13 23:23:34.701179 kubelet[1782]: I0513 23:23:34.701092 1782 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/c2395312-14f6-4158-9264-a018465c6e9a-flexvol-driver-host\") pod \"calico-node-g9kqs\" (UID: \"c2395312-14f6-4158-9264-a018465c6e9a\") " pod="calico-system/calico-node-g9kqs" May 13 23:23:34.701179 kubelet[1782]: I0513 23:23:34.701112 1782 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pw5n2\" (UniqueName: \"kubernetes.io/projected/c2395312-14f6-4158-9264-a018465c6e9a-kube-api-access-pw5n2\") pod \"calico-node-g9kqs\" (UID: \"c2395312-14f6-4158-9264-a018465c6e9a\") " pod="calico-system/calico-node-g9kqs" May 13 23:23:34.701179 kubelet[1782]: I0513 23:23:34.701128 1782 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/c2395312-14f6-4158-9264-a018465c6e9a-cni-log-dir\") pod \"calico-node-g9kqs\" (UID: \"c2395312-14f6-4158-9264-a018465c6e9a\") " pod="calico-system/calico-node-g9kqs" May 13 23:23:34.701285 kubelet[1782]: I0513 23:23:34.701160 1782 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q7sfr\" (UniqueName: \"kubernetes.io/projected/facf3b50-edcb-457e-8bf2-a0ea5220a243-kube-api-access-q7sfr\") pod \"kube-proxy-br7ps\" (UID: \"facf3b50-edcb-457e-8bf2-a0ea5220a243\") " pod="kube-system/kube-proxy-br7ps" May 13 23:23:34.701285 kubelet[1782]: I0513 23:23:34.701177 1782 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/c2395312-14f6-4158-9264-a018465c6e9a-xtables-lock\") pod \"calico-node-g9kqs\" (UID: \"c2395312-14f6-4158-9264-a018465c6e9a\") " pod="calico-system/calico-node-g9kqs" May 13 23:23:34.701285 kubelet[1782]: I0513 23:23:34.701192 1782 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c2395312-14f6-4158-9264-a018465c6e9a-tigera-ca-bundle\") pod \"calico-node-g9kqs\" (UID: \"c2395312-14f6-4158-9264-a018465c6e9a\") " pod="calico-system/calico-node-g9kqs" May 13 23:23:34.701285 kubelet[1782]: I0513 23:23:34.701207 1782 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/c2395312-14f6-4158-9264-a018465c6e9a-node-certs\") pod \"calico-node-g9kqs\" (UID: \"c2395312-14f6-4158-9264-a018465c6e9a\") " pod="calico-system/calico-node-g9kqs" May 13 23:23:34.701285 kubelet[1782]: I0513 23:23:34.701224 1782 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/c2395312-14f6-4158-9264-a018465c6e9a-var-lib-calico\") pod \"calico-node-g9kqs\" (UID: \"c2395312-14f6-4158-9264-a018465c6e9a\") " pod="calico-system/calico-node-g9kqs" May 13 23:23:34.701380 kubelet[1782]: I0513 23:23:34.701240 1782 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started 
for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/c2395312-14f6-4158-9264-a018465c6e9a-cni-bin-dir\") pod \"calico-node-g9kqs\" (UID: \"c2395312-14f6-4158-9264-a018465c6e9a\") " pod="calico-system/calico-node-g9kqs" May 13 23:23:34.701380 kubelet[1782]: I0513 23:23:34.701256 1782 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/cdfb60ad-a39c-4082-86ea-908d79fa0d73-varrun\") pod \"csi-node-driver-tvqd8\" (UID: \"cdfb60ad-a39c-4082-86ea-908d79fa0d73\") " pod="calico-system/csi-node-driver-tvqd8" May 13 23:23:34.701380 kubelet[1782]: I0513 23:23:34.701272 1782 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/cdfb60ad-a39c-4082-86ea-908d79fa0d73-registration-dir\") pod \"csi-node-driver-tvqd8\" (UID: \"cdfb60ad-a39c-4082-86ea-908d79fa0d73\") " pod="calico-system/csi-node-driver-tvqd8" May 13 23:23:34.701380 kubelet[1782]: I0513 23:23:34.701287 1782 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/facf3b50-edcb-457e-8bf2-a0ea5220a243-kube-proxy\") pod \"kube-proxy-br7ps\" (UID: \"facf3b50-edcb-457e-8bf2-a0ea5220a243\") " pod="kube-system/kube-proxy-br7ps" May 13 23:23:34.803583 kubelet[1782]: E0513 23:23:34.803500 1782 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 23:23:34.803583 kubelet[1782]: W0513 23:23:34.803527 1782 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 23:23:34.803583 kubelet[1782]: E0513 23:23:34.803551 1782 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 13 23:23:34.803827 kubelet[1782]: E0513 23:23:34.803800 1782 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 23:23:34.803827 kubelet[1782]: W0513 23:23:34.803814 1782 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 23:23:34.803936 kubelet[1782]: E0513 23:23:34.803830 1782 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 13 23:23:34.804030 kubelet[1782]: E0513 23:23:34.803999 1782 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 23:23:34.804030 kubelet[1782]: W0513 23:23:34.804010 1782 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 23:23:34.804030 kubelet[1782]: E0513 23:23:34.804022 1782 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 13 23:23:34.804215 kubelet[1782]: E0513 23:23:34.804203 1782 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 23:23:34.804215 kubelet[1782]: W0513 23:23:34.804213 1782 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 23:23:34.804264 kubelet[1782]: E0513 23:23:34.804226 1782 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 13 23:23:34.804386 kubelet[1782]: E0513 23:23:34.804376 1782 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 23:23:34.804386 kubelet[1782]: W0513 23:23:34.804386 1782 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 23:23:34.804454 kubelet[1782]: E0513 23:23:34.804398 1782 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 13 23:23:34.804546 kubelet[1782]: E0513 23:23:34.804534 1782 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 23:23:34.804582 kubelet[1782]: W0513 23:23:34.804546 1782 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 23:23:34.804582 kubelet[1782]: E0513 23:23:34.804559 1782 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 13 23:23:34.804702 kubelet[1782]: E0513 23:23:34.804686 1782 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 23:23:34.804702 kubelet[1782]: W0513 23:23:34.804696 1782 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 23:23:34.804774 kubelet[1782]: E0513 23:23:34.804723 1782 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 13 23:23:34.804854 kubelet[1782]: E0513 23:23:34.804842 1782 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 23:23:34.804854 kubelet[1782]: W0513 23:23:34.804853 1782 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 23:23:34.804904 kubelet[1782]: E0513 23:23:34.804866 1782 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 13 23:23:34.805072 kubelet[1782]: E0513 23:23:34.805059 1782 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 23:23:34.805072 kubelet[1782]: W0513 23:23:34.805070 1782 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 23:23:34.805151 kubelet[1782]: E0513 23:23:34.805078 1782 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 13 23:23:34.807276 kubelet[1782]: E0513 23:23:34.807198 1782 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 23:23:34.807276 kubelet[1782]: W0513 23:23:34.807215 1782 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 23:23:34.807276 kubelet[1782]: E0513 23:23:34.807230 1782 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 13 23:23:34.817761 kubelet[1782]: E0513 23:23:34.817736 1782 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 23:23:34.817761 kubelet[1782]: W0513 23:23:34.817754 1782 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 23:23:34.817761 kubelet[1782]: E0513 23:23:34.817772 1782 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 13 23:23:34.819246 kubelet[1782]: E0513 23:23:34.818218 1782 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 23:23:34.819246 kubelet[1782]: W0513 23:23:34.818231 1782 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 23:23:34.819246 kubelet[1782]: E0513 23:23:34.818243 1782 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 13 23:23:34.820022 kubelet[1782]: E0513 23:23:34.819954 1782 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 23:23:34.820022 kubelet[1782]: W0513 23:23:34.819968 1782 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 23:23:34.820022 kubelet[1782]: E0513 23:23:34.819992 1782 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 13 23:23:34.916085 containerd[1467]: time="2025-05-13T23:23:34.915961718Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-g9kqs,Uid:c2395312-14f6-4158-9264-a018465c6e9a,Namespace:calico-system,Attempt:0,}" May 13 23:23:34.919420 containerd[1467]: time="2025-05-13T23:23:34.919390634Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-br7ps,Uid:facf3b50-edcb-457e-8bf2-a0ea5220a243,Namespace:kube-system,Attempt:0,}" May 13 23:23:35.561201 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount94137616.mount: Deactivated successfully. May 13 23:23:35.566611 containerd[1467]: time="2025-05-13T23:23:35.566565794Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" May 13 23:23:35.568234 containerd[1467]: time="2025-05-13T23:23:35.568188473Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" May 13 23:23:35.568604 containerd[1467]: time="2025-05-13T23:23:35.568571878Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" May 13 23:23:35.569180 containerd[1467]: time="2025-05-13T23:23:35.569149799Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=269175" May 13 23:23:35.569898 containerd[1467]: time="2025-05-13T23:23:35.569859379Z" level=info msg="ImageCreate event name:\"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" May 13 23:23:35.572799 containerd[1467]: time="2025-05-13T23:23:35.572765099Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" May 13 23:23:35.573746 containerd[1467]: time="2025-05-13T23:23:35.573701628Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 654.132036ms" May 13 23:23:35.575931 containerd[1467]: time="2025-05-13T23:23:35.575873091Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 659.826001ms" May 13 23:23:35.584203 kubelet[1782]: E0513 23:23:35.584155 1782 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 13 23:23:35.689150 containerd[1467]: time="2025-05-13T23:23:35.689040343Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 13 23:23:35.689150 containerd[1467]: time="2025-05-13T23:23:35.689112563Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 13 23:23:35.689709 containerd[1467]: time="2025-05-13T23:23:35.689124379Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 13 23:23:35.689709 containerd[1467]: time="2025-05-13T23:23:35.688966556Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 13 23:23:35.689709 containerd[1467]: time="2025-05-13T23:23:35.689673685Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 13 23:23:35.689709 containerd[1467]: time="2025-05-13T23:23:35.689686264Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 13 23:23:35.689709 containerd[1467]: time="2025-05-13T23:23:35.689670470Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 13 23:23:35.689920 containerd[1467]: time="2025-05-13T23:23:35.689803375Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 13 23:23:35.779363 systemd[1]: Started cri-containerd-759516d6d351de3c056bb408e5ad344f8cfc0adafd2ebb0f341136284d56b00f.scope - libcontainer container 759516d6d351de3c056bb408e5ad344f8cfc0adafd2ebb0f341136284d56b00f. May 13 23:23:35.782432 systemd[1]: Started cri-containerd-4cc423f908725a27add7b2d9f4cebf3fabbe3d60c57679df07ff52ceece786bf.scope - libcontainer container 4cc423f908725a27add7b2d9f4cebf3fabbe3d60c57679df07ff52ceece786bf. May 13 23:23:35.799758 containerd[1467]: time="2025-05-13T23:23:35.799703125Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-br7ps,Uid:facf3b50-edcb-457e-8bf2-a0ea5220a243,Namespace:kube-system,Attempt:0,} returns sandbox id \"759516d6d351de3c056bb408e5ad344f8cfc0adafd2ebb0f341136284d56b00f\"" May 13 23:23:35.801811 containerd[1467]: time="2025-05-13T23:23:35.801736015Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.30.12\"" May 13 23:23:35.804035 containerd[1467]: time="2025-05-13T23:23:35.803973107Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-g9kqs,Uid:c2395312-14f6-4158-9264-a018465c6e9a,Namespace:calico-system,Attempt:0,} returns sandbox id \"4cc423f908725a27add7b2d9f4cebf3fabbe3d60c57679df07ff52ceece786bf\"" May 13 23:23:36.585256 kubelet[1782]: E0513 23:23:36.585119 1782 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 13 23:23:36.697537 kubelet[1782]: E0513 23:23:36.697459 1782 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-tvqd8" podUID="cdfb60ad-a39c-4082-86ea-908d79fa0d73" May 13 23:23:36.945986 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3154335707.mount: Deactivated successfully. 
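The repeated kubelet errors above come from FlexVolume plugin probing: kubelet runs each driver it finds under /opt/libexec/kubernetes/kubelet-plugins/volume/exec/<vendor~driver>/ with the argument `init` and tries to unmarshal its stdout as JSON. Here the nodeagent~uds/uds binary is not on disk yet, so the call returns empty output and the unmarshal fails with "unexpected end of JSON input". Below is a minimal, illustrative sketch of the `init` handshake a driver is expected to answer; it is not Calico's real uds driver, which does considerably more.

```go
// flexvol-init-stub.go: a minimal sketch of the FlexVolume "init" handshake.
// Illustrative only; Calico's actual nodeagent~uds driver implements more calls.
package main

import (
	"encoding/json"
	"fmt"
	"os"
)

// driverStatus mirrors the JSON object kubelet's driver-call expects on stdout.
type driverStatus struct {
	Status       string          `json:"status"`
	Message      string          `json:"message,omitempty"`
	Capabilities map[string]bool `json:"capabilities,omitempty"`
}

func main() {
	if len(os.Args) > 1 && os.Args[1] == "init" {
		// Report success and advertise that no separate attach step is needed.
		out, _ := json.Marshal(driverStatus{
			Status:       "Success",
			Capabilities: map[string]bool{"attach": false},
		})
		fmt.Println(string(out))
		return
	}
	// Calls this stub does not implement are reported as not supported.
	out, _ := json.Marshal(driverStatus{Status: "Not supported"})
	fmt.Println(string(out))
}
```

Printing any valid JSON status (rather than nothing) is what stops the "Failed to unmarshal output for command: init" loop seen in this log.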
May 13 23:23:37.144804 containerd[1467]: time="2025-05-13T23:23:37.144745605Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.30.12\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 13 23:23:37.145949 containerd[1467]: time="2025-05-13T23:23:37.145905029Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.30.12: active requests=0, bytes read=25775707" May 13 23:23:37.146705 containerd[1467]: time="2025-05-13T23:23:37.146664971Z" level=info msg="ImageCreate event name:\"sha256:b4250a9efcae16f8d20358e204a159844e2b7e854edad08aee8791774acbdaed\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 13 23:23:37.150602 containerd[1467]: time="2025-05-13T23:23:37.150563398Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:ea8c7d5392acf6b0c11ebba78301e1a6c2dc6abcd7544102ed578e49d1c82f15\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 13 23:23:37.151205 containerd[1467]: time="2025-05-13T23:23:37.151168743Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.30.12\" with image id \"sha256:b4250a9efcae16f8d20358e204a159844e2b7e854edad08aee8791774acbdaed\", repo tag \"registry.k8s.io/kube-proxy:v1.30.12\", repo digest \"registry.k8s.io/kube-proxy@sha256:ea8c7d5392acf6b0c11ebba78301e1a6c2dc6abcd7544102ed578e49d1c82f15\", size \"25774724\" in 1.349349586s" May 13 23:23:37.151205 containerd[1467]: time="2025-05-13T23:23:37.151202745Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.30.12\" returns image reference \"sha256:b4250a9efcae16f8d20358e204a159844e2b7e854edad08aee8791774acbdaed\"" May 13 23:23:37.153376 containerd[1467]: time="2025-05-13T23:23:37.153298789Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.3\"" May 13 23:23:37.154016 containerd[1467]: time="2025-05-13T23:23:37.153977157Z" level=info msg="CreateContainer within sandbox \"759516d6d351de3c056bb408e5ad344f8cfc0adafd2ebb0f341136284d56b00f\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" May 13 23:23:37.168802 containerd[1467]: time="2025-05-13T23:23:37.168754198Z" level=info msg="CreateContainer within sandbox \"759516d6d351de3c056bb408e5ad344f8cfc0adafd2ebb0f341136284d56b00f\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"a0c0fd0d29796311978f028d13a89d77264aa5b346ea9ba5a53081d6fab9b4f0\"" May 13 23:23:37.169441 containerd[1467]: time="2025-05-13T23:23:37.169392220Z" level=info msg="StartContainer for \"a0c0fd0d29796311978f028d13a89d77264aa5b346ea9ba5a53081d6fab9b4f0\"" May 13 23:23:37.197333 systemd[1]: Started cri-containerd-a0c0fd0d29796311978f028d13a89d77264aa5b346ea9ba5a53081d6fab9b4f0.scope - libcontainer container a0c0fd0d29796311978f028d13a89d77264aa5b346ea9ba5a53081d6fab9b4f0. 
May 13 23:23:37.222788 containerd[1467]: time="2025-05-13T23:23:37.221783550Z" level=info msg="StartContainer for \"a0c0fd0d29796311978f028d13a89d77264aa5b346ea9ba5a53081d6fab9b4f0\" returns successfully" May 13 23:23:37.585388 kubelet[1782]: E0513 23:23:37.585267 1782 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 13 23:23:37.712179 kubelet[1782]: E0513 23:23:37.712023 1782 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 23:23:37.712179 kubelet[1782]: W0513 23:23:37.712095 1782 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 23:23:37.712179 kubelet[1782]: E0513 23:23:37.712117 1782 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 13 23:23:37.712343 kubelet[1782]: E0513 23:23:37.712317 1782 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 23:23:37.712343 kubelet[1782]: W0513 23:23:37.712326 1782 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 23:23:37.712343 kubelet[1782]: E0513 23:23:37.712334 1782 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 13 23:23:37.712555 kubelet[1782]: E0513 23:23:37.712520 1782 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 23:23:37.712555 kubelet[1782]: W0513 23:23:37.712545 1782 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 23:23:37.712608 kubelet[1782]: E0513 23:23:37.712584 1782 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 13 23:23:37.712788 kubelet[1782]: E0513 23:23:37.712766 1782 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 23:23:37.712788 kubelet[1782]: W0513 23:23:37.712780 1782 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 23:23:37.712836 kubelet[1782]: E0513 23:23:37.712789 1782 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 13 23:23:37.712986 kubelet[1782]: E0513 23:23:37.712970 1782 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 23:23:37.713009 kubelet[1782]: W0513 23:23:37.712987 1782 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 23:23:37.713009 kubelet[1782]: E0513 23:23:37.712997 1782 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 13 23:23:37.713230 kubelet[1782]: E0513 23:23:37.713214 1782 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 23:23:37.713230 kubelet[1782]: W0513 23:23:37.713227 1782 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 23:23:37.713280 kubelet[1782]: E0513 23:23:37.713236 1782 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 13 23:23:37.713390 kubelet[1782]: E0513 23:23:37.713372 1782 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 23:23:37.713390 kubelet[1782]: W0513 23:23:37.713388 1782 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 23:23:37.713430 kubelet[1782]: E0513 23:23:37.713396 1782 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 13 23:23:37.713551 kubelet[1782]: E0513 23:23:37.713539 1782 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 23:23:37.713551 kubelet[1782]: W0513 23:23:37.713550 1782 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 23:23:37.713595 kubelet[1782]: E0513 23:23:37.713557 1782 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 13 23:23:37.713701 kubelet[1782]: E0513 23:23:37.713689 1782 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 23:23:37.713722 kubelet[1782]: W0513 23:23:37.713703 1782 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 23:23:37.713722 kubelet[1782]: E0513 23:23:37.713711 1782 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 13 23:23:37.713839 kubelet[1782]: E0513 23:23:37.713829 1782 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 23:23:37.713863 kubelet[1782]: W0513 23:23:37.713843 1782 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 23:23:37.713863 kubelet[1782]: E0513 23:23:37.713851 1782 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 13 23:23:37.713976 kubelet[1782]: E0513 23:23:37.713967 1782 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 23:23:37.714005 kubelet[1782]: W0513 23:23:37.713980 1782 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 23:23:37.714005 kubelet[1782]: E0513 23:23:37.713996 1782 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 13 23:23:37.714174 kubelet[1782]: E0513 23:23:37.714162 1782 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 23:23:37.714174 kubelet[1782]: W0513 23:23:37.714173 1782 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 23:23:37.714233 kubelet[1782]: E0513 23:23:37.714180 1782 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 13 23:23:37.714337 kubelet[1782]: E0513 23:23:37.714320 1782 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 23:23:37.714337 kubelet[1782]: W0513 23:23:37.714335 1782 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 23:23:37.714380 kubelet[1782]: E0513 23:23:37.714345 1782 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 13 23:23:37.714477 kubelet[1782]: E0513 23:23:37.714465 1782 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 23:23:37.714507 kubelet[1782]: W0513 23:23:37.714480 1782 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 23:23:37.714507 kubelet[1782]: E0513 23:23:37.714488 1782 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 13 23:23:37.714627 kubelet[1782]: E0513 23:23:37.714611 1782 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 23:23:37.714627 kubelet[1782]: W0513 23:23:37.714625 1782 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 23:23:37.714668 kubelet[1782]: E0513 23:23:37.714632 1782 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 13 23:23:37.714766 kubelet[1782]: E0513 23:23:37.714756 1782 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 23:23:37.714787 kubelet[1782]: W0513 23:23:37.714770 1782 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 23:23:37.714787 kubelet[1782]: E0513 23:23:37.714778 1782 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 13 23:23:37.714945 kubelet[1782]: E0513 23:23:37.714932 1782 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 23:23:37.714945 kubelet[1782]: W0513 23:23:37.714942 1782 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 23:23:37.715000 kubelet[1782]: E0513 23:23:37.714949 1782 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 13 23:23:37.715098 kubelet[1782]: E0513 23:23:37.715085 1782 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 23:23:37.715123 kubelet[1782]: W0513 23:23:37.715099 1782 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 23:23:37.715123 kubelet[1782]: E0513 23:23:37.715106 1782 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 13 23:23:37.715251 kubelet[1782]: E0513 23:23:37.715241 1782 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 23:23:37.715272 kubelet[1782]: W0513 23:23:37.715254 1782 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 23:23:37.715272 kubelet[1782]: E0513 23:23:37.715261 1782 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 13 23:23:37.715389 kubelet[1782]: E0513 23:23:37.715378 1782 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 23:23:37.715412 kubelet[1782]: W0513 23:23:37.715392 1782 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 23:23:37.715412 kubelet[1782]: E0513 23:23:37.715400 1782 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 13 23:23:37.719057 kubelet[1782]: I0513 23:23:37.718994 1782 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-br7ps" podStartSLOduration=4.368154289 podStartE2EDuration="5.718981205s" podCreationTimestamp="2025-05-13 23:23:32 +0000 UTC" firstStartedPulling="2025-05-13 23:23:35.801301208 +0000 UTC m=+4.032061599" lastFinishedPulling="2025-05-13 23:23:37.152128084 +0000 UTC m=+5.382888515" observedRunningTime="2025-05-13 23:23:37.718176381 +0000 UTC m=+5.948936812" watchObservedRunningTime="2025-05-13 23:23:37.718981205 +0000 UTC m=+5.949741636" May 13 23:23:37.720221 kubelet[1782]: E0513 23:23:37.720201 1782 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 23:23:37.720221 kubelet[1782]: W0513 23:23:37.720221 1782 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 23:23:37.720302 kubelet[1782]: E0513 23:23:37.720235 1782 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 13 23:23:37.720430 kubelet[1782]: E0513 23:23:37.720418 1782 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 23:23:37.720430 kubelet[1782]: W0513 23:23:37.720429 1782 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 23:23:37.720480 kubelet[1782]: E0513 23:23:37.720445 1782 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 13 23:23:37.720702 kubelet[1782]: E0513 23:23:37.720684 1782 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 23:23:37.720735 kubelet[1782]: W0513 23:23:37.720703 1782 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 23:23:37.720735 kubelet[1782]: E0513 23:23:37.720722 1782 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 13 23:23:37.720882 kubelet[1782]: E0513 23:23:37.720871 1782 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 23:23:37.720882 kubelet[1782]: W0513 23:23:37.720882 1782 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 23:23:37.720932 kubelet[1782]: E0513 23:23:37.720895 1782 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 13 23:23:37.721128 kubelet[1782]: E0513 23:23:37.721116 1782 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 23:23:37.721165 kubelet[1782]: W0513 23:23:37.721128 1782 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 23:23:37.721165 kubelet[1782]: E0513 23:23:37.721156 1782 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 13 23:23:37.721338 kubelet[1782]: E0513 23:23:37.721326 1782 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 23:23:37.721338 kubelet[1782]: W0513 23:23:37.721337 1782 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 23:23:37.721396 kubelet[1782]: E0513 23:23:37.721350 1782 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 13 23:23:37.721619 kubelet[1782]: E0513 23:23:37.721602 1782 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 23:23:37.721619 kubelet[1782]: W0513 23:23:37.721617 1782 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 23:23:37.721687 kubelet[1782]: E0513 23:23:37.721634 1782 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 13 23:23:37.721801 kubelet[1782]: E0513 23:23:37.721790 1782 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 23:23:37.721829 kubelet[1782]: W0513 23:23:37.721802 1782 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 23:23:37.721829 kubelet[1782]: E0513 23:23:37.721815 1782 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 13 23:23:37.721990 kubelet[1782]: E0513 23:23:37.721980 1782 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 23:23:37.721990 kubelet[1782]: W0513 23:23:37.721989 1782 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 23:23:37.722057 kubelet[1782]: E0513 23:23:37.722001 1782 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 13 23:23:37.722210 kubelet[1782]: E0513 23:23:37.722199 1782 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 23:23:37.722210 kubelet[1782]: W0513 23:23:37.722211 1782 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 23:23:37.722272 kubelet[1782]: E0513 23:23:37.722225 1782 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 13 23:23:37.722488 kubelet[1782]: E0513 23:23:37.722468 1782 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 23:23:37.722569 kubelet[1782]: W0513 23:23:37.722556 1782 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 23:23:37.722637 kubelet[1782]: E0513 23:23:37.722625 1782 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 13 23:23:37.722842 kubelet[1782]: E0513 23:23:37.722828 1782 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 23:23:37.722842 kubelet[1782]: W0513 23:23:37.722842 1782 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 23:23:37.722891 kubelet[1782]: E0513 23:23:37.722851 1782 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 13 23:23:38.203919 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4207937219.mount: Deactivated successfully. 
May 13 23:23:38.258930 containerd[1467]: time="2025-05-13T23:23:38.258868138Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 13 23:23:38.259298 containerd[1467]: time="2025-05-13T23:23:38.259252673Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.3: active requests=0, bytes read=6492223" May 13 23:23:38.260149 containerd[1467]: time="2025-05-13T23:23:38.260100671Z" level=info msg="ImageCreate event name:\"sha256:dd8e710a588cc6f5834c4d84f7e12458efae593d3dfe527ca9e757c89239ecb8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 13 23:23:38.262028 containerd[1467]: time="2025-05-13T23:23:38.261991002Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:eeaa2bb4f9b1aa61adde43ce6dea95eee89291f96963548e108d9a2dfbc5edd1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 13 23:23:38.262994 containerd[1467]: time="2025-05-13T23:23:38.262949631Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.3\" with image id \"sha256:dd8e710a588cc6f5834c4d84f7e12458efae593d3dfe527ca9e757c89239ecb8\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.3\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:eeaa2bb4f9b1aa61adde43ce6dea95eee89291f96963548e108d9a2dfbc5edd1\", size \"6492045\" in 1.109614234s" May 13 23:23:38.263062 containerd[1467]: time="2025-05-13T23:23:38.262990720Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.3\" returns image reference \"sha256:dd8e710a588cc6f5834c4d84f7e12458efae593d3dfe527ca9e757c89239ecb8\"" May 13 23:23:38.265159 containerd[1467]: time="2025-05-13T23:23:38.265119605Z" level=info msg="CreateContainer within sandbox \"4cc423f908725a27add7b2d9f4cebf3fabbe3d60c57679df07ff52ceece786bf\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}" May 13 23:23:38.276303 containerd[1467]: time="2025-05-13T23:23:38.276251087Z" level=info msg="CreateContainer within sandbox \"4cc423f908725a27add7b2d9f4cebf3fabbe3d60c57679df07ff52ceece786bf\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"89386a477964a0015cd575c8bafdf9bf7f75983b482f9151e2cdc53b542fd432\"" May 13 23:23:38.276810 containerd[1467]: time="2025-05-13T23:23:38.276760255Z" level=info msg="StartContainer for \"89386a477964a0015cd575c8bafdf9bf7f75983b482f9151e2cdc53b542fd432\"" May 13 23:23:38.301313 systemd[1]: Started cri-containerd-89386a477964a0015cd575c8bafdf9bf7f75983b482f9151e2cdc53b542fd432.scope - libcontainer container 89386a477964a0015cd575c8bafdf9bf7f75983b482f9151e2cdc53b542fd432. May 13 23:23:38.323253 containerd[1467]: time="2025-05-13T23:23:38.323206329Z" level=info msg="StartContainer for \"89386a477964a0015cd575c8bafdf9bf7f75983b482f9151e2cdc53b542fd432\" returns successfully" May 13 23:23:38.344158 systemd[1]: cri-containerd-89386a477964a0015cd575c8bafdf9bf7f75983b482f9151e2cdc53b542fd432.scope: Deactivated successfully. 
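The flexvol-driver container started above (from the pod2daemon-flexvol image) is an init container that runs once and exits, which is why its scope is deactivated right after the StartContainer succeeds; in a stock Calico install its job is to copy the uds FlexVolume binary into the kubelet plugin directory that the probes earlier in this log were failing on. A small diagnostic sketch (not part of Calico or kubelet) that repeats kubelet's probe against that path, taken from the errors above:

```go
// checkflexvol.go: diagnostic sketch that re-runs what kubelet's prober does:
// execute the driver with "init" and make sure the output parses as JSON.
package main

import (
	"encoding/json"
	"fmt"
	"os"
	"os/exec"
)

func main() {
	// Path copied from the kubelet driver-call errors in this log.
	driver := "/opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds"

	if _, err := os.Stat(driver); err != nil {
		fmt.Fprintf(os.Stderr, "driver not installed yet: %v\n", err)
		os.Exit(1)
	}

	out, err := exec.Command(driver, "init").Output()
	if err != nil {
		fmt.Fprintf(os.Stderr, "init call failed: %v\n", err)
		os.Exit(1)
	}

	var status map[string]interface{}
	if err := json.Unmarshal(out, &status); err != nil {
		// This is exactly the "unexpected end of JSON input" case from the log.
		fmt.Fprintf(os.Stderr, "init returned non-JSON output %q: %v\n", out, err)
		os.Exit(1)
	}
	fmt.Println("driver init status:", status["status"])
}
```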
May 13 23:23:38.513112 containerd[1467]: time="2025-05-13T23:23:38.512713180Z" level=info msg="shim disconnected" id=89386a477964a0015cd575c8bafdf9bf7f75983b482f9151e2cdc53b542fd432 namespace=k8s.io May 13 23:23:38.513112 containerd[1467]: time="2025-05-13T23:23:38.512772408Z" level=warning msg="cleaning up after shim disconnected" id=89386a477964a0015cd575c8bafdf9bf7f75983b482f9151e2cdc53b542fd432 namespace=k8s.io May 13 23:23:38.513112 containerd[1467]: time="2025-05-13T23:23:38.512783683Z" level=info msg="cleaning up dead shim" namespace=k8s.io May 13 23:23:38.585431 kubelet[1782]: E0513 23:23:38.585375 1782 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 13 23:23:38.697904 kubelet[1782]: E0513 23:23:38.697467 1782 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-tvqd8" podUID="cdfb60ad-a39c-4082-86ea-908d79fa0d73" May 13 23:23:38.716852 containerd[1467]: time="2025-05-13T23:23:38.716632639Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.29.3\"" May 13 23:23:39.585841 kubelet[1782]: E0513 23:23:39.585781 1782 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 13 23:23:40.586391 kubelet[1782]: E0513 23:23:40.586340 1782 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 13 23:23:40.697541 kubelet[1782]: E0513 23:23:40.697502 1782 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-tvqd8" podUID="cdfb60ad-a39c-4082-86ea-908d79fa0d73" May 13 23:23:40.704156 containerd[1467]: time="2025-05-13T23:23:40.704055995Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.29.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 13 23:23:40.705290 containerd[1467]: time="2025-05-13T23:23:40.705219250Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.29.3: active requests=0, bytes read=91256270" May 13 23:23:40.706073 containerd[1467]: time="2025-05-13T23:23:40.706044487Z" level=info msg="ImageCreate event name:\"sha256:add6372545fb406bb017769f222d84c50549ce13e3b19f1fbaee3d8a4aaef627\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 13 23:23:40.707932 containerd[1467]: time="2025-05-13T23:23:40.707865174Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:4505ec8f976470994b6a94295a4dabac0cb98375db050e959a22603e00ada90b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 13 23:23:40.708965 containerd[1467]: time="2025-05-13T23:23:40.708929029Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.29.3\" with image id \"sha256:add6372545fb406bb017769f222d84c50549ce13e3b19f1fbaee3d8a4aaef627\", repo tag \"ghcr.io/flatcar/calico/cni:v3.29.3\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:4505ec8f976470994b6a94295a4dabac0cb98375db050e959a22603e00ada90b\", size \"92625452\" in 1.992261726s" May 13 23:23:40.709013 containerd[1467]: time="2025-05-13T23:23:40.708963713Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.29.3\" returns image reference 
\"sha256:add6372545fb406bb017769f222d84c50549ce13e3b19f1fbaee3d8a4aaef627\"" May 13 23:23:40.710937 containerd[1467]: time="2025-05-13T23:23:40.710894826Z" level=info msg="CreateContainer within sandbox \"4cc423f908725a27add7b2d9f4cebf3fabbe3d60c57679df07ff52ceece786bf\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" May 13 23:23:40.721153 containerd[1467]: time="2025-05-13T23:23:40.721044671Z" level=info msg="CreateContainer within sandbox \"4cc423f908725a27add7b2d9f4cebf3fabbe3d60c57679df07ff52ceece786bf\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"5ce469f5d01686c8b78e573f47133007e89c2eec4ae8b6c809121bbb69300839\"" May 13 23:23:40.721479 containerd[1467]: time="2025-05-13T23:23:40.721454623Z" level=info msg="StartContainer for \"5ce469f5d01686c8b78e573f47133007e89c2eec4ae8b6c809121bbb69300839\"" May 13 23:23:40.749297 systemd[1]: Started cri-containerd-5ce469f5d01686c8b78e573f47133007e89c2eec4ae8b6c809121bbb69300839.scope - libcontainer container 5ce469f5d01686c8b78e573f47133007e89c2eec4ae8b6c809121bbb69300839. May 13 23:23:40.784708 containerd[1467]: time="2025-05-13T23:23:40.784655144Z" level=info msg="StartContainer for \"5ce469f5d01686c8b78e573f47133007e89c2eec4ae8b6c809121bbb69300839\" returns successfully" May 13 23:23:41.268501 containerd[1467]: time="2025-05-13T23:23:41.268444330Z" level=error msg="failed to reload cni configuration after receiving fs change event(WRITE \"/etc/cni/net.d/calico-kubeconfig\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" May 13 23:23:41.270198 systemd[1]: cri-containerd-5ce469f5d01686c8b78e573f47133007e89c2eec4ae8b6c809121bbb69300839.scope: Deactivated successfully. May 13 23:23:41.270555 systemd[1]: cri-containerd-5ce469f5d01686c8b78e573f47133007e89c2eec4ae8b6c809121bbb69300839.scope: Consumed 497ms CPU time, 169.7M memory peak, 150.3M written to disk. May 13 23:23:41.367604 kubelet[1782]: I0513 23:23:41.367571 1782 kubelet_node_status.go:497] "Fast updating node status as it just became ready" May 13 23:23:41.512332 containerd[1467]: time="2025-05-13T23:23:41.512114228Z" level=info msg="shim disconnected" id=5ce469f5d01686c8b78e573f47133007e89c2eec4ae8b6c809121bbb69300839 namespace=k8s.io May 13 23:23:41.512332 containerd[1467]: time="2025-05-13T23:23:41.512202134Z" level=warning msg="cleaning up after shim disconnected" id=5ce469f5d01686c8b78e573f47133007e89c2eec4ae8b6c809121bbb69300839 namespace=k8s.io May 13 23:23:41.512332 containerd[1467]: time="2025-05-13T23:23:41.512210712Z" level=info msg="cleaning up dead shim" namespace=k8s.io May 13 23:23:41.586614 kubelet[1782]: E0513 23:23:41.586485 1782 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 13 23:23:41.718200 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-5ce469f5d01686c8b78e573f47133007e89c2eec4ae8b6c809121bbb69300839-rootfs.mount: Deactivated successfully. 
May 13 23:23:41.722015 containerd[1467]: time="2025-05-13T23:23:41.721964845Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.29.3\"" May 13 23:23:42.586887 kubelet[1782]: E0513 23:23:42.586842 1782 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 13 23:23:42.708070 systemd[1]: Created slice kubepods-besteffort-podcdfb60ad_a39c_4082_86ea_908d79fa0d73.slice - libcontainer container kubepods-besteffort-podcdfb60ad_a39c_4082_86ea_908d79fa0d73.slice. May 13 23:23:42.710443 containerd[1467]: time="2025-05-13T23:23:42.710393099Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-tvqd8,Uid:cdfb60ad-a39c-4082-86ea-908d79fa0d73,Namespace:calico-system,Attempt:0,}" May 13 23:23:42.833587 containerd[1467]: time="2025-05-13T23:23:42.833532285Z" level=error msg="Failed to destroy network for sandbox \"4c10b6d72a0769276ff9ac298f2654eae843bd49e96a080146e852bf73b6dd54\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 13 23:23:42.835114 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-4c10b6d72a0769276ff9ac298f2654eae843bd49e96a080146e852bf73b6dd54-shm.mount: Deactivated successfully. May 13 23:23:42.835846 containerd[1467]: time="2025-05-13T23:23:42.835633500Z" level=error msg="encountered an error cleaning up failed sandbox \"4c10b6d72a0769276ff9ac298f2654eae843bd49e96a080146e852bf73b6dd54\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 13 23:23:42.835846 containerd[1467]: time="2025-05-13T23:23:42.835712006Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-tvqd8,Uid:cdfb60ad-a39c-4082-86ea-908d79fa0d73,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"4c10b6d72a0769276ff9ac298f2654eae843bd49e96a080146e852bf73b6dd54\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 13 23:23:42.836193 kubelet[1782]: E0513 23:23:42.835989 1782 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"4c10b6d72a0769276ff9ac298f2654eae843bd49e96a080146e852bf73b6dd54\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 13 23:23:42.836193 kubelet[1782]: E0513 23:23:42.836073 1782 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"4c10b6d72a0769276ff9ac298f2654eae843bd49e96a080146e852bf73b6dd54\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-tvqd8" May 13 23:23:42.836193 kubelet[1782]: E0513 23:23:42.836093 1782 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"4c10b6d72a0769276ff9ac298f2654eae843bd49e96a080146e852bf73b6dd54\": 
plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-tvqd8" May 13 23:23:42.836374 kubelet[1782]: E0513 23:23:42.836148 1782 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-tvqd8_calico-system(cdfb60ad-a39c-4082-86ea-908d79fa0d73)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-tvqd8_calico-system(cdfb60ad-a39c-4082-86ea-908d79fa0d73)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"4c10b6d72a0769276ff9ac298f2654eae843bd49e96a080146e852bf73b6dd54\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-tvqd8" podUID="cdfb60ad-a39c-4082-86ea-908d79fa0d73" May 13 23:23:43.587651 kubelet[1782]: E0513 23:23:43.587557 1782 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 13 23:23:43.726042 kubelet[1782]: I0513 23:23:43.725994 1782 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="4c10b6d72a0769276ff9ac298f2654eae843bd49e96a080146e852bf73b6dd54" May 13 23:23:43.726619 containerd[1467]: time="2025-05-13T23:23:43.726581705Z" level=info msg="StopPodSandbox for \"4c10b6d72a0769276ff9ac298f2654eae843bd49e96a080146e852bf73b6dd54\"" May 13 23:23:43.727040 containerd[1467]: time="2025-05-13T23:23:43.726866648Z" level=info msg="Ensure that sandbox 4c10b6d72a0769276ff9ac298f2654eae843bd49e96a080146e852bf73b6dd54 in task-service has been cleanup successfully" May 13 23:23:43.728382 systemd[1]: run-netns-cni\x2dc89dc8c9\x2de6b3\x2dfb2f\x2d6640\x2d507d4143d95a.mount: Deactivated successfully. 
May 13 23:23:43.728824 containerd[1467]: time="2025-05-13T23:23:43.728701064Z" level=info msg="TearDown network for sandbox \"4c10b6d72a0769276ff9ac298f2654eae843bd49e96a080146e852bf73b6dd54\" successfully" May 13 23:23:43.728824 containerd[1467]: time="2025-05-13T23:23:43.728730632Z" level=info msg="StopPodSandbox for \"4c10b6d72a0769276ff9ac298f2654eae843bd49e96a080146e852bf73b6dd54\" returns successfully" May 13 23:23:43.729953 containerd[1467]: time="2025-05-13T23:23:43.729891155Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-tvqd8,Uid:cdfb60ad-a39c-4082-86ea-908d79fa0d73,Namespace:calico-system,Attempt:1,}" May 13 23:23:43.804839 containerd[1467]: time="2025-05-13T23:23:43.804782599Z" level=error msg="Failed to destroy network for sandbox \"629d1876799725c7a9e0a476dd650aba910d134d4756a0ca18315276fb115b2e\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 13 23:23:43.805452 containerd[1467]: time="2025-05-13T23:23:43.805252562Z" level=error msg="encountered an error cleaning up failed sandbox \"629d1876799725c7a9e0a476dd650aba910d134d4756a0ca18315276fb115b2e\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 13 23:23:43.805452 containerd[1467]: time="2025-05-13T23:23:43.805332492Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-tvqd8,Uid:cdfb60ad-a39c-4082-86ea-908d79fa0d73,Namespace:calico-system,Attempt:1,} failed, error" error="failed to setup network for sandbox \"629d1876799725c7a9e0a476dd650aba910d134d4756a0ca18315276fb115b2e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 13 23:23:43.805819 kubelet[1782]: E0513 23:23:43.805738 1782 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"629d1876799725c7a9e0a476dd650aba910d134d4756a0ca18315276fb115b2e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 13 23:23:43.805819 kubelet[1782]: E0513 23:23:43.805814 1782 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"629d1876799725c7a9e0a476dd650aba910d134d4756a0ca18315276fb115b2e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-tvqd8" May 13 23:23:43.806596 kubelet[1782]: E0513 23:23:43.805928 1782 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"629d1876799725c7a9e0a476dd650aba910d134d4756a0ca18315276fb115b2e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-tvqd8" May 13 23:23:43.806596 kubelet[1782]: E0513 23:23:43.805984 1782 pod_workers.go:1298] "Error 
syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-tvqd8_calico-system(cdfb60ad-a39c-4082-86ea-908d79fa0d73)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-tvqd8_calico-system(cdfb60ad-a39c-4082-86ea-908d79fa0d73)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"629d1876799725c7a9e0a476dd650aba910d134d4756a0ca18315276fb115b2e\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-tvqd8" podUID="cdfb60ad-a39c-4082-86ea-908d79fa0d73" May 13 23:23:43.806237 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-629d1876799725c7a9e0a476dd650aba910d134d4756a0ca18315276fb115b2e-shm.mount: Deactivated successfully. May 13 23:23:43.971721 kubelet[1782]: I0513 23:23:43.971577 1782 topology_manager.go:215] "Topology Admit Handler" podUID="4edf6823-69a2-46ad-8ef1-8d03b33b7843" podNamespace="default" podName="nginx-deployment-85f456d6dd-2wgmf" May 13 23:23:43.978018 systemd[1]: Created slice kubepods-besteffort-pod4edf6823_69a2_46ad_8ef1_8d03b33b7843.slice - libcontainer container kubepods-besteffort-pod4edf6823_69a2_46ad_8ef1_8d03b33b7843.slice. May 13 23:23:44.059731 kubelet[1782]: I0513 23:23:44.059674 1782 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zd99f\" (UniqueName: \"kubernetes.io/projected/4edf6823-69a2-46ad-8ef1-8d03b33b7843-kube-api-access-zd99f\") pod \"nginx-deployment-85f456d6dd-2wgmf\" (UID: \"4edf6823-69a2-46ad-8ef1-8d03b33b7843\") " pod="default/nginx-deployment-85f456d6dd-2wgmf" May 13 23:23:44.281675 containerd[1467]: time="2025-05-13T23:23:44.281548915Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-85f456d6dd-2wgmf,Uid:4edf6823-69a2-46ad-8ef1-8d03b33b7843,Namespace:default,Attempt:0,}" May 13 23:23:44.466250 containerd[1467]: time="2025-05-13T23:23:44.466194054Z" level=error msg="Failed to destroy network for sandbox \"830168203b1d2a665edbd3deff8794b5dbfbb4475645f13309d0b3b2c2a610dd\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 13 23:23:44.466565 containerd[1467]: time="2025-05-13T23:23:44.466518675Z" level=error msg="encountered an error cleaning up failed sandbox \"830168203b1d2a665edbd3deff8794b5dbfbb4475645f13309d0b3b2c2a610dd\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 13 23:23:44.466604 containerd[1467]: time="2025-05-13T23:23:44.466583887Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-85f456d6dd-2wgmf,Uid:4edf6823-69a2-46ad-8ef1-8d03b33b7843,Namespace:default,Attempt:0,} failed, error" error="failed to setup network for sandbox \"830168203b1d2a665edbd3deff8794b5dbfbb4475645f13309d0b3b2c2a610dd\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 13 23:23:44.466843 kubelet[1782]: E0513 23:23:44.466804 1782 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed 
to setup network for sandbox \"830168203b1d2a665edbd3deff8794b5dbfbb4475645f13309d0b3b2c2a610dd\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 13 23:23:44.466938 kubelet[1782]: E0513 23:23:44.466861 1782 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"830168203b1d2a665edbd3deff8794b5dbfbb4475645f13309d0b3b2c2a610dd\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="default/nginx-deployment-85f456d6dd-2wgmf" May 13 23:23:44.466938 kubelet[1782]: E0513 23:23:44.466880 1782 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"830168203b1d2a665edbd3deff8794b5dbfbb4475645f13309d0b3b2c2a610dd\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="default/nginx-deployment-85f456d6dd-2wgmf" May 13 23:23:44.466938 kubelet[1782]: E0513 23:23:44.466923 1782 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"nginx-deployment-85f456d6dd-2wgmf_default(4edf6823-69a2-46ad-8ef1-8d03b33b7843)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"nginx-deployment-85f456d6dd-2wgmf_default(4edf6823-69a2-46ad-8ef1-8d03b33b7843)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"830168203b1d2a665edbd3deff8794b5dbfbb4475645f13309d0b3b2c2a610dd\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="default/nginx-deployment-85f456d6dd-2wgmf" podUID="4edf6823-69a2-46ad-8ef1-8d03b33b7843" May 13 23:23:44.588710 kubelet[1782]: E0513 23:23:44.588574 1782 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 13 23:23:44.602444 containerd[1467]: time="2025-05-13T23:23:44.602404609Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.29.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 13 23:23:44.603159 containerd[1467]: time="2025-05-13T23:23:44.602962722Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.29.3: active requests=0, bytes read=138981893" May 13 23:23:44.603933 containerd[1467]: time="2025-05-13T23:23:44.603875899Z" level=info msg="ImageCreate event name:\"sha256:cdcce3ec4624a24c28cdc07b0ee29ddf6703628edee7452a3f8a8b4816bfd057\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 13 23:23:44.605807 containerd[1467]: time="2025-05-13T23:23:44.605779282Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:750e267b4f8217e0ca9e4107228370190d1a2499b72112ad04370ab9b4553916\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 13 23:23:44.606425 containerd[1467]: time="2025-05-13T23:23:44.606346728Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.29.3\" with image id \"sha256:cdcce3ec4624a24c28cdc07b0ee29ddf6703628edee7452a3f8a8b4816bfd057\", repo tag \"ghcr.io/flatcar/calico/node:v3.29.3\", repo digest 
\"ghcr.io/flatcar/calico/node@sha256:750e267b4f8217e0ca9e4107228370190d1a2499b72112ad04370ab9b4553916\", size \"138981755\" in 2.884333266s" May 13 23:23:44.606425 containerd[1467]: time="2025-05-13T23:23:44.606377091Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.29.3\" returns image reference \"sha256:cdcce3ec4624a24c28cdc07b0ee29ddf6703628edee7452a3f8a8b4816bfd057\"" May 13 23:23:44.612524 containerd[1467]: time="2025-05-13T23:23:44.612493337Z" level=info msg="CreateContainer within sandbox \"4cc423f908725a27add7b2d9f4cebf3fabbe3d60c57679df07ff52ceece786bf\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" May 13 23:23:44.625294 containerd[1467]: time="2025-05-13T23:23:44.625223775Z" level=info msg="CreateContainer within sandbox \"4cc423f908725a27add7b2d9f4cebf3fabbe3d60c57679df07ff52ceece786bf\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"f3b00898cb943f03f47f02d77b46f78cf8d0ac62ed442ba2d5acbe577d1388c2\"" May 13 23:23:44.625917 containerd[1467]: time="2025-05-13T23:23:44.625701614Z" level=info msg="StartContainer for \"f3b00898cb943f03f47f02d77b46f78cf8d0ac62ed442ba2d5acbe577d1388c2\"" May 13 23:23:44.651331 systemd[1]: Started cri-containerd-f3b00898cb943f03f47f02d77b46f78cf8d0ac62ed442ba2d5acbe577d1388c2.scope - libcontainer container f3b00898cb943f03f47f02d77b46f78cf8d0ac62ed442ba2d5acbe577d1388c2. May 13 23:23:44.677580 containerd[1467]: time="2025-05-13T23:23:44.677537587Z" level=info msg="StartContainer for \"f3b00898cb943f03f47f02d77b46f78cf8d0ac62ed442ba2d5acbe577d1388c2\" returns successfully" May 13 23:23:44.730617 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4216406224.mount: Deactivated successfully. May 13 23:23:44.736408 kubelet[1782]: I0513 23:23:44.736367 1782 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="629d1876799725c7a9e0a476dd650aba910d134d4756a0ca18315276fb115b2e" May 13 23:23:44.736906 containerd[1467]: time="2025-05-13T23:23:44.736867082Z" level=info msg="StopPodSandbox for \"629d1876799725c7a9e0a476dd650aba910d134d4756a0ca18315276fb115b2e\"" May 13 23:23:44.737154 containerd[1467]: time="2025-05-13T23:23:44.737043052Z" level=info msg="Ensure that sandbox 629d1876799725c7a9e0a476dd650aba910d134d4756a0ca18315276fb115b2e in task-service has been cleanup successfully" May 13 23:23:44.737455 containerd[1467]: time="2025-05-13T23:23:44.737293007Z" level=info msg="TearDown network for sandbox \"629d1876799725c7a9e0a476dd650aba910d134d4756a0ca18315276fb115b2e\" successfully" May 13 23:23:44.737455 containerd[1467]: time="2025-05-13T23:23:44.737309711Z" level=info msg="StopPodSandbox for \"629d1876799725c7a9e0a476dd650aba910d134d4756a0ca18315276fb115b2e\" returns successfully" May 13 23:23:44.738820 containerd[1467]: time="2025-05-13T23:23:44.738675491Z" level=info msg="StopPodSandbox for \"4c10b6d72a0769276ff9ac298f2654eae843bd49e96a080146e852bf73b6dd54\"" May 13 23:23:44.738884 containerd[1467]: time="2025-05-13T23:23:44.738828908Z" level=info msg="TearDown network for sandbox \"4c10b6d72a0769276ff9ac298f2654eae843bd49e96a080146e852bf73b6dd54\" successfully" May 13 23:23:44.738884 containerd[1467]: time="2025-05-13T23:23:44.738843089Z" level=info msg="StopPodSandbox for \"4c10b6d72a0769276ff9ac298f2654eae843bd49e96a080146e852bf73b6dd54\" returns successfully" May 13 23:23:44.739601 kubelet[1782]: I0513 23:23:44.739567 1782 pod_container_deletor.go:80] "Container not found in pod's containers" 
containerID="830168203b1d2a665edbd3deff8794b5dbfbb4475645f13309d0b3b2c2a610dd" May 13 23:23:44.739930 containerd[1467]: time="2025-05-13T23:23:44.739839023Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-tvqd8,Uid:cdfb60ad-a39c-4082-86ea-908d79fa0d73,Namespace:calico-system,Attempt:2,}" May 13 23:23:44.740265 systemd[1]: run-netns-cni\x2dc415536a\x2d8350\x2d67d3\x2deaea\x2d75ee298c9e96.mount: Deactivated successfully. May 13 23:23:44.740517 containerd[1467]: time="2025-05-13T23:23:44.740290344Z" level=info msg="StopPodSandbox for \"830168203b1d2a665edbd3deff8794b5dbfbb4475645f13309d0b3b2c2a610dd\"" May 13 23:23:44.740517 containerd[1467]: time="2025-05-13T23:23:44.740489467Z" level=info msg="Ensure that sandbox 830168203b1d2a665edbd3deff8794b5dbfbb4475645f13309d0b3b2c2a610dd in task-service has been cleanup successfully" May 13 23:23:44.741863 systemd[1]: run-netns-cni\x2d86eef684\x2d89d7\x2d1096\x2dbe17\x2d0d7af559a0c6.mount: Deactivated successfully. May 13 23:23:44.742016 containerd[1467]: time="2025-05-13T23:23:44.741871269Z" level=info msg="TearDown network for sandbox \"830168203b1d2a665edbd3deff8794b5dbfbb4475645f13309d0b3b2c2a610dd\" successfully" May 13 23:23:44.742408 containerd[1467]: time="2025-05-13T23:23:44.742387562Z" level=info msg="StopPodSandbox for \"830168203b1d2a665edbd3deff8794b5dbfbb4475645f13309d0b3b2c2a610dd\" returns successfully" May 13 23:23:44.743218 containerd[1467]: time="2025-05-13T23:23:44.743194428Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-85f456d6dd-2wgmf,Uid:4edf6823-69a2-46ad-8ef1-8d03b33b7843,Namespace:default,Attempt:1,}" May 13 23:23:44.829119 containerd[1467]: time="2025-05-13T23:23:44.829071824Z" level=error msg="Failed to destroy network for sandbox \"c514675cbcbfacb9d05d69df7aa0307603a5af4d979851ec31335ab916887a5f\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 13 23:23:44.829279 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information. May 13 23:23:44.829317 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld . All Rights Reserved. 
May 13 23:23:44.830283 containerd[1467]: time="2025-05-13T23:23:44.829743458Z" level=error msg="encountered an error cleaning up failed sandbox \"c514675cbcbfacb9d05d69df7aa0307603a5af4d979851ec31335ab916887a5f\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 13 23:23:44.830283 containerd[1467]: time="2025-05-13T23:23:44.829807990Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-85f456d6dd-2wgmf,Uid:4edf6823-69a2-46ad-8ef1-8d03b33b7843,Namespace:default,Attempt:1,} failed, error" error="failed to setup network for sandbox \"c514675cbcbfacb9d05d69df7aa0307603a5af4d979851ec31335ab916887a5f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 13 23:23:44.830283 containerd[1467]: time="2025-05-13T23:23:44.830078854Z" level=error msg="Failed to destroy network for sandbox \"09b89725105d5494f2b3881bfdc861c9e0bc9ec9a05354bca5dd0acda67da91a\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 13 23:23:44.830780 kubelet[1782]: E0513 23:23:44.830207 1782 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c514675cbcbfacb9d05d69df7aa0307603a5af4d979851ec31335ab916887a5f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 13 23:23:44.830843 containerd[1467]: time="2025-05-13T23:23:44.830708348Z" level=error msg="encountered an error cleaning up failed sandbox \"09b89725105d5494f2b3881bfdc861c9e0bc9ec9a05354bca5dd0acda67da91a\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 13 23:23:44.830843 containerd[1467]: time="2025-05-13T23:23:44.830753332Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-tvqd8,Uid:cdfb60ad-a39c-4082-86ea-908d79fa0d73,Namespace:calico-system,Attempt:2,} failed, error" error="failed to setup network for sandbox \"09b89725105d5494f2b3881bfdc861c9e0bc9ec9a05354bca5dd0acda67da91a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 13 23:23:44.831467 kubelet[1782]: E0513 23:23:44.830936 1782 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"09b89725105d5494f2b3881bfdc861c9e0bc9ec9a05354bca5dd0acda67da91a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 13 23:23:44.831467 kubelet[1782]: E0513 23:23:44.831009 1782 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"09b89725105d5494f2b3881bfdc861c9e0bc9ec9a05354bca5dd0acda67da91a\": plugin 
type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-tvqd8" May 13 23:23:44.831467 kubelet[1782]: E0513 23:23:44.831030 1782 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"09b89725105d5494f2b3881bfdc861c9e0bc9ec9a05354bca5dd0acda67da91a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-tvqd8" May 13 23:23:44.831575 kubelet[1782]: E0513 23:23:44.831090 1782 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-tvqd8_calico-system(cdfb60ad-a39c-4082-86ea-908d79fa0d73)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-tvqd8_calico-system(cdfb60ad-a39c-4082-86ea-908d79fa0d73)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"09b89725105d5494f2b3881bfdc861c9e0bc9ec9a05354bca5dd0acda67da91a\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-tvqd8" podUID="cdfb60ad-a39c-4082-86ea-908d79fa0d73" May 13 23:23:44.831575 kubelet[1782]: E0513 23:23:44.831334 1782 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c514675cbcbfacb9d05d69df7aa0307603a5af4d979851ec31335ab916887a5f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="default/nginx-deployment-85f456d6dd-2wgmf" May 13 23:23:44.831575 kubelet[1782]: E0513 23:23:44.831360 1782 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c514675cbcbfacb9d05d69df7aa0307603a5af4d979851ec31335ab916887a5f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="default/nginx-deployment-85f456d6dd-2wgmf" May 13 23:23:44.831662 kubelet[1782]: E0513 23:23:44.831422 1782 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"nginx-deployment-85f456d6dd-2wgmf_default(4edf6823-69a2-46ad-8ef1-8d03b33b7843)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"nginx-deployment-85f456d6dd-2wgmf_default(4edf6823-69a2-46ad-8ef1-8d03b33b7843)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"c514675cbcbfacb9d05d69df7aa0307603a5af4d979851ec31335ab916887a5f\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="default/nginx-deployment-85f456d6dd-2wgmf" podUID="4edf6823-69a2-46ad-8ef1-8d03b33b7843" May 13 23:23:45.589442 kubelet[1782]: E0513 23:23:45.589395 1782 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 13 23:23:45.742804 kubelet[1782]: I0513 23:23:45.742759 1782 
pod_container_deletor.go:80] "Container not found in pod's containers" containerID="09b89725105d5494f2b3881bfdc861c9e0bc9ec9a05354bca5dd0acda67da91a" May 13 23:23:45.743454 containerd[1467]: time="2025-05-13T23:23:45.743252038Z" level=info msg="StopPodSandbox for \"09b89725105d5494f2b3881bfdc861c9e0bc9ec9a05354bca5dd0acda67da91a\"" May 13 23:23:45.743454 containerd[1467]: time="2025-05-13T23:23:45.743427696Z" level=info msg="Ensure that sandbox 09b89725105d5494f2b3881bfdc861c9e0bc9ec9a05354bca5dd0acda67da91a in task-service has been cleanup successfully" May 13 23:23:45.744180 kubelet[1782]: I0513 23:23:45.744163 1782 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c514675cbcbfacb9d05d69df7aa0307603a5af4d979851ec31335ab916887a5f" May 13 23:23:45.744830 systemd[1]: run-netns-cni\x2db14fea39\x2d8834\x2d984f\x2d443c\x2d285232d2be70.mount: Deactivated successfully. May 13 23:23:45.745403 containerd[1467]: time="2025-05-13T23:23:45.745370711Z" level=info msg="StopPodSandbox for \"c514675cbcbfacb9d05d69df7aa0307603a5af4d979851ec31335ab916887a5f\"" May 13 23:23:45.746053 containerd[1467]: time="2025-05-13T23:23:45.745549934Z" level=info msg="Ensure that sandbox c514675cbcbfacb9d05d69df7aa0307603a5af4d979851ec31335ab916887a5f in task-service has been cleanup successfully" May 13 23:23:45.746053 containerd[1467]: time="2025-05-13T23:23:45.745640046Z" level=info msg="TearDown network for sandbox \"09b89725105d5494f2b3881bfdc861c9e0bc9ec9a05354bca5dd0acda67da91a\" successfully" May 13 23:23:45.746053 containerd[1467]: time="2025-05-13T23:23:45.745664756Z" level=info msg="StopPodSandbox for \"09b89725105d5494f2b3881bfdc861c9e0bc9ec9a05354bca5dd0acda67da91a\" returns successfully" May 13 23:23:45.746053 containerd[1467]: time="2025-05-13T23:23:45.745827439Z" level=info msg="TearDown network for sandbox \"c514675cbcbfacb9d05d69df7aa0307603a5af4d979851ec31335ab916887a5f\" successfully" May 13 23:23:45.746053 containerd[1467]: time="2025-05-13T23:23:45.745843579Z" level=info msg="StopPodSandbox for \"c514675cbcbfacb9d05d69df7aa0307603a5af4d979851ec31335ab916887a5f\" returns successfully" May 13 23:23:45.746510 containerd[1467]: time="2025-05-13T23:23:45.746381247Z" level=info msg="StopPodSandbox for \"629d1876799725c7a9e0a476dd650aba910d134d4756a0ca18315276fb115b2e\"" May 13 23:23:45.746510 containerd[1467]: time="2025-05-13T23:23:45.746473642Z" level=info msg="TearDown network for sandbox \"629d1876799725c7a9e0a476dd650aba910d134d4756a0ca18315276fb115b2e\" successfully" May 13 23:23:45.746510 containerd[1467]: time="2025-05-13T23:23:45.746484255Z" level=info msg="StopPodSandbox for \"629d1876799725c7a9e0a476dd650aba910d134d4756a0ca18315276fb115b2e\" returns successfully" May 13 23:23:45.746602 containerd[1467]: time="2025-05-13T23:23:45.746543689Z" level=info msg="StopPodSandbox for \"830168203b1d2a665edbd3deff8794b5dbfbb4475645f13309d0b3b2c2a610dd\"" May 13 23:23:45.746648 containerd[1467]: time="2025-05-13T23:23:45.746627793Z" level=info msg="TearDown network for sandbox \"830168203b1d2a665edbd3deff8794b5dbfbb4475645f13309d0b3b2c2a610dd\" successfully" May 13 23:23:45.746648 containerd[1467]: time="2025-05-13T23:23:45.746643493Z" level=info msg="StopPodSandbox for \"830168203b1d2a665edbd3deff8794b5dbfbb4475645f13309d0b3b2c2a610dd\" returns successfully" May 13 23:23:45.746927 containerd[1467]: time="2025-05-13T23:23:45.746897288Z" level=info msg="StopPodSandbox for \"4c10b6d72a0769276ff9ac298f2654eae843bd49e96a080146e852bf73b6dd54\"" May 13 23:23:45.746983 
containerd[1467]: time="2025-05-13T23:23:45.746966494Z" level=info msg="TearDown network for sandbox \"4c10b6d72a0769276ff9ac298f2654eae843bd49e96a080146e852bf73b6dd54\" successfully" May 13 23:23:45.746983 containerd[1467]: time="2025-05-13T23:23:45.746975425Z" level=info msg="StopPodSandbox for \"4c10b6d72a0769276ff9ac298f2654eae843bd49e96a080146e852bf73b6dd54\" returns successfully" May 13 23:23:45.747809 containerd[1467]: time="2025-05-13T23:23:45.747343403Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-tvqd8,Uid:cdfb60ad-a39c-4082-86ea-908d79fa0d73,Namespace:calico-system,Attempt:3,}" May 13 23:23:45.747809 containerd[1467]: time="2025-05-13T23:23:45.747648061Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-85f456d6dd-2wgmf,Uid:4edf6823-69a2-46ad-8ef1-8d03b33b7843,Namespace:default,Attempt:2,}" May 13 23:23:45.749013 systemd[1]: run-netns-cni\x2dc0db5aa8\x2d986e\x2da4c8\x2d37c7\x2deb5a1ff9667e.mount: Deactivated successfully. May 13 23:23:45.965202 systemd-networkd[1400]: cali2ae8ece8578: Link UP May 13 23:23:45.965992 systemd-networkd[1400]: cali2ae8ece8578: Gained carrier May 13 23:23:45.972086 kubelet[1782]: I0513 23:23:45.972032 1782 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-g9kqs" podStartSLOduration=5.172908623 podStartE2EDuration="13.97201443s" podCreationTimestamp="2025-05-13 23:23:32 +0000 UTC" firstStartedPulling="2025-05-13 23:23:35.808065051 +0000 UTC m=+4.038825482" lastFinishedPulling="2025-05-13 23:23:44.607170858 +0000 UTC m=+12.837931289" observedRunningTime="2025-05-13 23:23:44.749859613 +0000 UTC m=+12.980620084" watchObservedRunningTime="2025-05-13 23:23:45.97201443 +0000 UTC m=+14.202774861" May 13 23:23:45.973866 containerd[1467]: 2025-05-13 23:23:45.786 [INFO][2588] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist May 13 23:23:45.973866 containerd[1467]: 2025-05-13 23:23:45.806 [INFO][2588] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {10.0.0.24-k8s-nginx--deployment--85f456d6dd--2wgmf-eth0 nginx-deployment-85f456d6dd- default 4edf6823-69a2-46ad-8ef1-8d03b33b7843 991 0 2025-05-13 23:23:43 +0000 UTC map[app:nginx pod-template-hash:85f456d6dd projectcalico.org/namespace:default projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:default] map[] [] [] []} {k8s 10.0.0.24 nginx-deployment-85f456d6dd-2wgmf eth0 default [] [] [kns.default ksa.default.default] cali2ae8ece8578 [] []}} ContainerID="142275b6a460244a6e380f660babbf0f2d904c33d40b6a16da441f4088817f2f" Namespace="default" Pod="nginx-deployment-85f456d6dd-2wgmf" WorkloadEndpoint="10.0.0.24-k8s-nginx--deployment--85f456d6dd--2wgmf-" May 13 23:23:45.973866 containerd[1467]: 2025-05-13 23:23:45.806 [INFO][2588] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="142275b6a460244a6e380f660babbf0f2d904c33d40b6a16da441f4088817f2f" Namespace="default" Pod="nginx-deployment-85f456d6dd-2wgmf" WorkloadEndpoint="10.0.0.24-k8s-nginx--deployment--85f456d6dd--2wgmf-eth0" May 13 23:23:45.973866 containerd[1467]: 2025-05-13 23:23:45.914 [INFO][2610] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="142275b6a460244a6e380f660babbf0f2d904c33d40b6a16da441f4088817f2f" HandleID="k8s-pod-network.142275b6a460244a6e380f660babbf0f2d904c33d40b6a16da441f4088817f2f" Workload="10.0.0.24-k8s-nginx--deployment--85f456d6dd--2wgmf-eth0" May 13 23:23:45.973866 containerd[1467]: 2025-05-13 
23:23:45.930 [INFO][2610] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="142275b6a460244a6e380f660babbf0f2d904c33d40b6a16da441f4088817f2f" HandleID="k8s-pod-network.142275b6a460244a6e380f660babbf0f2d904c33d40b6a16da441f4088817f2f" Workload="10.0.0.24-k8s-nginx--deployment--85f456d6dd--2wgmf-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40002e4ae0), Attrs:map[string]string{"namespace":"default", "node":"10.0.0.24", "pod":"nginx-deployment-85f456d6dd-2wgmf", "timestamp":"2025-05-13 23:23:45.914943741 +0000 UTC"}, Hostname:"10.0.0.24", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} May 13 23:23:45.973866 containerd[1467]: 2025-05-13 23:23:45.930 [INFO][2610] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 13 23:23:45.973866 containerd[1467]: 2025-05-13 23:23:45.930 [INFO][2610] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 13 23:23:45.973866 containerd[1467]: 2025-05-13 23:23:45.930 [INFO][2610] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host '10.0.0.24' May 13 23:23:45.973866 containerd[1467]: 2025-05-13 23:23:45.932 [INFO][2610] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.142275b6a460244a6e380f660babbf0f2d904c33d40b6a16da441f4088817f2f" host="10.0.0.24" May 13 23:23:45.973866 containerd[1467]: 2025-05-13 23:23:45.936 [INFO][2610] ipam/ipam.go 372: Looking up existing affinities for host host="10.0.0.24" May 13 23:23:45.973866 containerd[1467]: 2025-05-13 23:23:45.940 [INFO][2610] ipam/ipam.go 489: Trying affinity for 192.168.92.128/26 host="10.0.0.24" May 13 23:23:45.973866 containerd[1467]: 2025-05-13 23:23:45.942 [INFO][2610] ipam/ipam.go 155: Attempting to load block cidr=192.168.92.128/26 host="10.0.0.24" May 13 23:23:45.973866 containerd[1467]: 2025-05-13 23:23:45.944 [INFO][2610] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.92.128/26 host="10.0.0.24" May 13 23:23:45.973866 containerd[1467]: 2025-05-13 23:23:45.944 [INFO][2610] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.92.128/26 handle="k8s-pod-network.142275b6a460244a6e380f660babbf0f2d904c33d40b6a16da441f4088817f2f" host="10.0.0.24" May 13 23:23:45.973866 containerd[1467]: 2025-05-13 23:23:45.946 [INFO][2610] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.142275b6a460244a6e380f660babbf0f2d904c33d40b6a16da441f4088817f2f May 13 23:23:45.973866 containerd[1467]: 2025-05-13 23:23:45.949 [INFO][2610] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.92.128/26 handle="k8s-pod-network.142275b6a460244a6e380f660babbf0f2d904c33d40b6a16da441f4088817f2f" host="10.0.0.24" May 13 23:23:45.973866 containerd[1467]: 2025-05-13 23:23:45.954 [INFO][2610] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.92.129/26] block=192.168.92.128/26 handle="k8s-pod-network.142275b6a460244a6e380f660babbf0f2d904c33d40b6a16da441f4088817f2f" host="10.0.0.24" May 13 23:23:45.973866 containerd[1467]: 2025-05-13 23:23:45.954 [INFO][2610] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.92.129/26] handle="k8s-pod-network.142275b6a460244a6e380f660babbf0f2d904c33d40b6a16da441f4088817f2f" host="10.0.0.24" May 13 23:23:45.973866 containerd[1467]: 2025-05-13 23:23:45.954 [INFO][2610] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
May 13 23:23:45.973866 containerd[1467]: 2025-05-13 23:23:45.954 [INFO][2610] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.92.129/26] IPv6=[] ContainerID="142275b6a460244a6e380f660babbf0f2d904c33d40b6a16da441f4088817f2f" HandleID="k8s-pod-network.142275b6a460244a6e380f660babbf0f2d904c33d40b6a16da441f4088817f2f" Workload="10.0.0.24-k8s-nginx--deployment--85f456d6dd--2wgmf-eth0" May 13 23:23:45.974838 containerd[1467]: 2025-05-13 23:23:45.956 [INFO][2588] cni-plugin/k8s.go 386: Populated endpoint ContainerID="142275b6a460244a6e380f660babbf0f2d904c33d40b6a16da441f4088817f2f" Namespace="default" Pod="nginx-deployment-85f456d6dd-2wgmf" WorkloadEndpoint="10.0.0.24-k8s-nginx--deployment--85f456d6dd--2wgmf-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"10.0.0.24-k8s-nginx--deployment--85f456d6dd--2wgmf-eth0", GenerateName:"nginx-deployment-85f456d6dd-", Namespace:"default", SelfLink:"", UID:"4edf6823-69a2-46ad-8ef1-8d03b33b7843", ResourceVersion:"991", Generation:0, CreationTimestamp:time.Date(2025, time.May, 13, 23, 23, 43, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app":"nginx", "pod-template-hash":"85f456d6dd", "projectcalico.org/namespace":"default", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"default"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"10.0.0.24", ContainerID:"", Pod:"nginx-deployment-85f456d6dd-2wgmf", Endpoint:"eth0", ServiceAccountName:"default", IPNetworks:[]string{"192.168.92.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.default", "ksa.default.default"}, InterfaceName:"cali2ae8ece8578", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} May 13 23:23:45.974838 containerd[1467]: 2025-05-13 23:23:45.956 [INFO][2588] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.92.129/32] ContainerID="142275b6a460244a6e380f660babbf0f2d904c33d40b6a16da441f4088817f2f" Namespace="default" Pod="nginx-deployment-85f456d6dd-2wgmf" WorkloadEndpoint="10.0.0.24-k8s-nginx--deployment--85f456d6dd--2wgmf-eth0" May 13 23:23:45.974838 containerd[1467]: 2025-05-13 23:23:45.957 [INFO][2588] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali2ae8ece8578 ContainerID="142275b6a460244a6e380f660babbf0f2d904c33d40b6a16da441f4088817f2f" Namespace="default" Pod="nginx-deployment-85f456d6dd-2wgmf" WorkloadEndpoint="10.0.0.24-k8s-nginx--deployment--85f456d6dd--2wgmf-eth0" May 13 23:23:45.974838 containerd[1467]: 2025-05-13 23:23:45.965 [INFO][2588] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="142275b6a460244a6e380f660babbf0f2d904c33d40b6a16da441f4088817f2f" Namespace="default" Pod="nginx-deployment-85f456d6dd-2wgmf" WorkloadEndpoint="10.0.0.24-k8s-nginx--deployment--85f456d6dd--2wgmf-eth0" May 13 23:23:45.974838 containerd[1467]: 2025-05-13 23:23:45.965 [INFO][2588] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="142275b6a460244a6e380f660babbf0f2d904c33d40b6a16da441f4088817f2f" Namespace="default" Pod="nginx-deployment-85f456d6dd-2wgmf" WorkloadEndpoint="10.0.0.24-k8s-nginx--deployment--85f456d6dd--2wgmf-eth0" 
endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"10.0.0.24-k8s-nginx--deployment--85f456d6dd--2wgmf-eth0", GenerateName:"nginx-deployment-85f456d6dd-", Namespace:"default", SelfLink:"", UID:"4edf6823-69a2-46ad-8ef1-8d03b33b7843", ResourceVersion:"991", Generation:0, CreationTimestamp:time.Date(2025, time.May, 13, 23, 23, 43, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app":"nginx", "pod-template-hash":"85f456d6dd", "projectcalico.org/namespace":"default", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"default"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"10.0.0.24", ContainerID:"142275b6a460244a6e380f660babbf0f2d904c33d40b6a16da441f4088817f2f", Pod:"nginx-deployment-85f456d6dd-2wgmf", Endpoint:"eth0", ServiceAccountName:"default", IPNetworks:[]string{"192.168.92.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.default", "ksa.default.default"}, InterfaceName:"cali2ae8ece8578", MAC:"32:b6:f4:80:1e:b0", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} May 13 23:23:45.974838 containerd[1467]: 2025-05-13 23:23:45.972 [INFO][2588] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="142275b6a460244a6e380f660babbf0f2d904c33d40b6a16da441f4088817f2f" Namespace="default" Pod="nginx-deployment-85f456d6dd-2wgmf" WorkloadEndpoint="10.0.0.24-k8s-nginx--deployment--85f456d6dd--2wgmf-eth0" May 13 23:23:45.988367 systemd-networkd[1400]: cali3a44841e132: Link UP May 13 23:23:45.988685 systemd-networkd[1400]: cali3a44841e132: Gained carrier May 13 23:23:45.991493 containerd[1467]: time="2025-05-13T23:23:45.991366801Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 13 23:23:45.991493 containerd[1467]: time="2025-05-13T23:23:45.991428919Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 13 23:23:45.991493 containerd[1467]: time="2025-05-13T23:23:45.991444298Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 13 23:23:45.991887 containerd[1467]: time="2025-05-13T23:23:45.991788926Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 13 23:23:45.999506 containerd[1467]: 2025-05-13 23:23:45.786 [INFO][2577] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist May 13 23:23:45.999506 containerd[1467]: 2025-05-13 23:23:45.806 [INFO][2577] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {10.0.0.24-k8s-csi--node--driver--tvqd8-eth0 csi-node-driver- calico-system cdfb60ad-a39c-4082-86ea-908d79fa0d73 737 0 2025-05-13 23:23:32 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:55b7b4b9d k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s 10.0.0.24 csi-node-driver-tvqd8 eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] cali3a44841e132 [] []}} ContainerID="301dd6f1da94158412a05c137a34c00c98e4b68b0c51a10d09852ba1cd23013d" Namespace="calico-system" Pod="csi-node-driver-tvqd8" WorkloadEndpoint="10.0.0.24-k8s-csi--node--driver--tvqd8-" May 13 23:23:45.999506 containerd[1467]: 2025-05-13 23:23:45.806 [INFO][2577] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="301dd6f1da94158412a05c137a34c00c98e4b68b0c51a10d09852ba1cd23013d" Namespace="calico-system" Pod="csi-node-driver-tvqd8" WorkloadEndpoint="10.0.0.24-k8s-csi--node--driver--tvqd8-eth0" May 13 23:23:45.999506 containerd[1467]: 2025-05-13 23:23:45.914 [INFO][2611] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="301dd6f1da94158412a05c137a34c00c98e4b68b0c51a10d09852ba1cd23013d" HandleID="k8s-pod-network.301dd6f1da94158412a05c137a34c00c98e4b68b0c51a10d09852ba1cd23013d" Workload="10.0.0.24-k8s-csi--node--driver--tvqd8-eth0" May 13 23:23:45.999506 containerd[1467]: 2025-05-13 23:23:45.930 [INFO][2611] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="301dd6f1da94158412a05c137a34c00c98e4b68b0c51a10d09852ba1cd23013d" HandleID="k8s-pod-network.301dd6f1da94158412a05c137a34c00c98e4b68b0c51a10d09852ba1cd23013d" Workload="10.0.0.24-k8s-csi--node--driver--tvqd8-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x400040d2a0), Attrs:map[string]string{"namespace":"calico-system", "node":"10.0.0.24", "pod":"csi-node-driver-tvqd8", "timestamp":"2025-05-13 23:23:45.91494334 +0000 UTC"}, Hostname:"10.0.0.24", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} May 13 23:23:45.999506 containerd[1467]: 2025-05-13 23:23:45.930 [INFO][2611] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 13 23:23:45.999506 containerd[1467]: 2025-05-13 23:23:45.954 [INFO][2611] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
May 13 23:23:45.999506 containerd[1467]: 2025-05-13 23:23:45.955 [INFO][2611] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host '10.0.0.24' May 13 23:23:45.999506 containerd[1467]: 2025-05-13 23:23:45.956 [INFO][2611] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.301dd6f1da94158412a05c137a34c00c98e4b68b0c51a10d09852ba1cd23013d" host="10.0.0.24" May 13 23:23:45.999506 containerd[1467]: 2025-05-13 23:23:45.961 [INFO][2611] ipam/ipam.go 372: Looking up existing affinities for host host="10.0.0.24" May 13 23:23:45.999506 containerd[1467]: 2025-05-13 23:23:45.968 [INFO][2611] ipam/ipam.go 489: Trying affinity for 192.168.92.128/26 host="10.0.0.24" May 13 23:23:45.999506 containerd[1467]: 2025-05-13 23:23:45.970 [INFO][2611] ipam/ipam.go 155: Attempting to load block cidr=192.168.92.128/26 host="10.0.0.24" May 13 23:23:45.999506 containerd[1467]: 2025-05-13 23:23:45.974 [INFO][2611] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.92.128/26 host="10.0.0.24" May 13 23:23:45.999506 containerd[1467]: 2025-05-13 23:23:45.974 [INFO][2611] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.92.128/26 handle="k8s-pod-network.301dd6f1da94158412a05c137a34c00c98e4b68b0c51a10d09852ba1cd23013d" host="10.0.0.24" May 13 23:23:45.999506 containerd[1467]: 2025-05-13 23:23:45.976 [INFO][2611] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.301dd6f1da94158412a05c137a34c00c98e4b68b0c51a10d09852ba1cd23013d May 13 23:23:45.999506 containerd[1467]: 2025-05-13 23:23:45.979 [INFO][2611] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.92.128/26 handle="k8s-pod-network.301dd6f1da94158412a05c137a34c00c98e4b68b0c51a10d09852ba1cd23013d" host="10.0.0.24" May 13 23:23:45.999506 containerd[1467]: 2025-05-13 23:23:45.984 [INFO][2611] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.92.130/26] block=192.168.92.128/26 handle="k8s-pod-network.301dd6f1da94158412a05c137a34c00c98e4b68b0c51a10d09852ba1cd23013d" host="10.0.0.24" May 13 23:23:45.999506 containerd[1467]: 2025-05-13 23:23:45.984 [INFO][2611] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.92.130/26] handle="k8s-pod-network.301dd6f1da94158412a05c137a34c00c98e4b68b0c51a10d09852ba1cd23013d" host="10.0.0.24" May 13 23:23:45.999506 containerd[1467]: 2025-05-13 23:23:45.984 [INFO][2611] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
May 13 23:23:45.999506 containerd[1467]: 2025-05-13 23:23:45.984 [INFO][2611] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.92.130/26] IPv6=[] ContainerID="301dd6f1da94158412a05c137a34c00c98e4b68b0c51a10d09852ba1cd23013d" HandleID="k8s-pod-network.301dd6f1da94158412a05c137a34c00c98e4b68b0c51a10d09852ba1cd23013d" Workload="10.0.0.24-k8s-csi--node--driver--tvqd8-eth0" May 13 23:23:46.000024 containerd[1467]: 2025-05-13 23:23:45.986 [INFO][2577] cni-plugin/k8s.go 386: Populated endpoint ContainerID="301dd6f1da94158412a05c137a34c00c98e4b68b0c51a10d09852ba1cd23013d" Namespace="calico-system" Pod="csi-node-driver-tvqd8" WorkloadEndpoint="10.0.0.24-k8s-csi--node--driver--tvqd8-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"10.0.0.24-k8s-csi--node--driver--tvqd8-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"cdfb60ad-a39c-4082-86ea-908d79fa0d73", ResourceVersion:"737", Generation:0, CreationTimestamp:time.Date(2025, time.May, 13, 23, 23, 32, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"55b7b4b9d", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"10.0.0.24", ContainerID:"", Pod:"csi-node-driver-tvqd8", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.92.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali3a44841e132", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} May 13 23:23:46.000024 containerd[1467]: 2025-05-13 23:23:45.986 [INFO][2577] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.92.130/32] ContainerID="301dd6f1da94158412a05c137a34c00c98e4b68b0c51a10d09852ba1cd23013d" Namespace="calico-system" Pod="csi-node-driver-tvqd8" WorkloadEndpoint="10.0.0.24-k8s-csi--node--driver--tvqd8-eth0" May 13 23:23:46.000024 containerd[1467]: 2025-05-13 23:23:45.986 [INFO][2577] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali3a44841e132 ContainerID="301dd6f1da94158412a05c137a34c00c98e4b68b0c51a10d09852ba1cd23013d" Namespace="calico-system" Pod="csi-node-driver-tvqd8" WorkloadEndpoint="10.0.0.24-k8s-csi--node--driver--tvqd8-eth0" May 13 23:23:46.000024 containerd[1467]: 2025-05-13 23:23:45.988 [INFO][2577] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="301dd6f1da94158412a05c137a34c00c98e4b68b0c51a10d09852ba1cd23013d" Namespace="calico-system" Pod="csi-node-driver-tvqd8" WorkloadEndpoint="10.0.0.24-k8s-csi--node--driver--tvqd8-eth0" May 13 23:23:46.000024 containerd[1467]: 2025-05-13 23:23:45.988 [INFO][2577] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="301dd6f1da94158412a05c137a34c00c98e4b68b0c51a10d09852ba1cd23013d" Namespace="calico-system" Pod="csi-node-driver-tvqd8" WorkloadEndpoint="10.0.0.24-k8s-csi--node--driver--tvqd8-eth0" 
endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"10.0.0.24-k8s-csi--node--driver--tvqd8-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"cdfb60ad-a39c-4082-86ea-908d79fa0d73", ResourceVersion:"737", Generation:0, CreationTimestamp:time.Date(2025, time.May, 13, 23, 23, 32, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"55b7b4b9d", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"10.0.0.24", ContainerID:"301dd6f1da94158412a05c137a34c00c98e4b68b0c51a10d09852ba1cd23013d", Pod:"csi-node-driver-tvqd8", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.92.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali3a44841e132", MAC:"ba:1f:78:aa:02:4c", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} May 13 23:23:46.000024 containerd[1467]: 2025-05-13 23:23:45.997 [INFO][2577] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="301dd6f1da94158412a05c137a34c00c98e4b68b0c51a10d09852ba1cd23013d" Namespace="calico-system" Pod="csi-node-driver-tvqd8" WorkloadEndpoint="10.0.0.24-k8s-csi--node--driver--tvqd8-eth0" May 13 23:23:46.012696 systemd[1]: Started cri-containerd-142275b6a460244a6e380f660babbf0f2d904c33d40b6a16da441f4088817f2f.scope - libcontainer container 142275b6a460244a6e380f660babbf0f2d904c33d40b6a16da441f4088817f2f. May 13 23:23:46.017866 containerd[1467]: time="2025-05-13T23:23:46.017407526Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 13 23:23:46.017866 containerd[1467]: time="2025-05-13T23:23:46.017689593Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 13 23:23:46.017866 containerd[1467]: time="2025-05-13T23:23:46.017708773Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 13 23:23:46.018392 containerd[1467]: time="2025-05-13T23:23:46.018338699Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 13 23:23:46.023563 systemd-resolved[1326]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address May 13 23:23:46.037347 systemd[1]: Started cri-containerd-301dd6f1da94158412a05c137a34c00c98e4b68b0c51a10d09852ba1cd23013d.scope - libcontainer container 301dd6f1da94158412a05c137a34c00c98e4b68b0c51a10d09852ba1cd23013d. 
May 13 23:23:46.054036 containerd[1467]: time="2025-05-13T23:23:46.053977181Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-85f456d6dd-2wgmf,Uid:4edf6823-69a2-46ad-8ef1-8d03b33b7843,Namespace:default,Attempt:2,} returns sandbox id \"142275b6a460244a6e380f660babbf0f2d904c33d40b6a16da441f4088817f2f\"" May 13 23:23:46.056341 containerd[1467]: time="2025-05-13T23:23:46.056300908Z" level=info msg="PullImage \"ghcr.io/flatcar/nginx:latest\"" May 13 23:23:46.062499 systemd-resolved[1326]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address May 13 23:23:46.094653 containerd[1467]: time="2025-05-13T23:23:46.094596360Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-tvqd8,Uid:cdfb60ad-a39c-4082-86ea-908d79fa0d73,Namespace:calico-system,Attempt:3,} returns sandbox id \"301dd6f1da94158412a05c137a34c00c98e4b68b0c51a10d09852ba1cd23013d\"" May 13 23:23:46.200163 kernel: bpftool[2863]: memfd_create() called without MFD_EXEC or MFD_NOEXEC_SEAL set May 13 23:23:46.351814 systemd-networkd[1400]: vxlan.calico: Link UP May 13 23:23:46.351823 systemd-networkd[1400]: vxlan.calico: Gained carrier May 13 23:23:46.589781 kubelet[1782]: E0513 23:23:46.589730 1782 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 13 23:23:47.318251 systemd-networkd[1400]: cali3a44841e132: Gained IPv6LL May 13 23:23:47.590987 kubelet[1782]: E0513 23:23:47.590874 1782 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 13 23:23:47.702275 systemd-networkd[1400]: cali2ae8ece8578: Gained IPv6LL May 13 23:23:47.943167 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1084834969.mount: Deactivated successfully. 
May 13 23:23:48.215302 systemd-networkd[1400]: vxlan.calico: Gained IPv6LL May 13 23:23:48.591355 kubelet[1782]: E0513 23:23:48.591248 1782 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 13 23:23:48.828228 containerd[1467]: time="2025-05-13T23:23:48.828159502Z" level=info msg="stop pulling image ghcr.io/flatcar/nginx:latest: active requests=0, bytes read=69948859" May 13 23:23:48.832943 containerd[1467]: time="2025-05-13T23:23:48.832886519Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/nginx:latest\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 13 23:23:48.837973 containerd[1467]: time="2025-05-13T23:23:48.837869630Z" level=info msg="ImageCreate event name:\"sha256:e8b1cb61bd96acc3ff3c695318c9cc691213d532eee3731d038af92816fcb5f4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 13 23:23:48.842077 containerd[1467]: time="2025-05-13T23:23:48.841846142Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/nginx@sha256:beabce8f1782671ba500ddff99dd260fbf9c5ec85fb9c3162e35a3c40bafd023\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 13 23:23:48.842927 containerd[1467]: time="2025-05-13T23:23:48.842872477Z" level=info msg="Pulled image \"ghcr.io/flatcar/nginx:latest\" with image id \"sha256:e8b1cb61bd96acc3ff3c695318c9cc691213d532eee3731d038af92816fcb5f4\", repo tag \"ghcr.io/flatcar/nginx:latest\", repo digest \"ghcr.io/flatcar/nginx@sha256:beabce8f1782671ba500ddff99dd260fbf9c5ec85fb9c3162e35a3c40bafd023\", size \"69948737\" in 2.786532489s" May 13 23:23:48.842927 containerd[1467]: time="2025-05-13T23:23:48.842916794Z" level=info msg="PullImage \"ghcr.io/flatcar/nginx:latest\" returns image reference \"sha256:e8b1cb61bd96acc3ff3c695318c9cc691213d532eee3731d038af92816fcb5f4\"" May 13 23:23:48.844219 containerd[1467]: time="2025-05-13T23:23:48.844186572Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.29.3\"" May 13 23:23:48.845977 containerd[1467]: time="2025-05-13T23:23:48.845946317Z" level=info msg="CreateContainer within sandbox \"142275b6a460244a6e380f660babbf0f2d904c33d40b6a16da441f4088817f2f\" for container &ContainerMetadata{Name:nginx,Attempt:0,}" May 13 23:23:48.857230 containerd[1467]: time="2025-05-13T23:23:48.857129552Z" level=info msg="CreateContainer within sandbox \"142275b6a460244a6e380f660babbf0f2d904c33d40b6a16da441f4088817f2f\" for &ContainerMetadata{Name:nginx,Attempt:0,} returns container id \"84546920ab1394514c4433a23322118c7863129767692fd27b51267471d7e735\"" May 13 23:23:48.857695 containerd[1467]: time="2025-05-13T23:23:48.857575043Z" level=info msg="StartContainer for \"84546920ab1394514c4433a23322118c7863129767692fd27b51267471d7e735\"" May 13 23:23:48.933322 systemd[1]: Started cri-containerd-84546920ab1394514c4433a23322118c7863129767692fd27b51267471d7e735.scope - libcontainer container 84546920ab1394514c4433a23322118c7863129767692fd27b51267471d7e735. 
May 13 23:23:48.959568 containerd[1467]: time="2025-05-13T23:23:48.959487570Z" level=info msg="StartContainer for \"84546920ab1394514c4433a23322118c7863129767692fd27b51267471d7e735\" returns successfully" May 13 23:23:49.007984 kubelet[1782]: I0513 23:23:49.007839 1782 topology_manager.go:215] "Topology Admit Handler" podUID="a045a9de-ba4f-4ef0-ac41-626b703a9d85" podNamespace="calico-apiserver" podName="calico-apiserver-7bbbcdd48f-84dcw" May 13 23:23:49.014770 systemd[1]: Created slice kubepods-besteffort-poda045a9de_ba4f_4ef0_ac41_626b703a9d85.slice - libcontainer container kubepods-besteffort-poda045a9de_ba4f_4ef0_ac41_626b703a9d85.slice. May 13 23:23:49.189091 kubelet[1782]: I0513 23:23:49.188923 1782 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j62w6\" (UniqueName: \"kubernetes.io/projected/a045a9de-ba4f-4ef0-ac41-626b703a9d85-kube-api-access-j62w6\") pod \"calico-apiserver-7bbbcdd48f-84dcw\" (UID: \"a045a9de-ba4f-4ef0-ac41-626b703a9d85\") " pod="calico-apiserver/calico-apiserver-7bbbcdd48f-84dcw" May 13 23:23:49.189091 kubelet[1782]: I0513 23:23:49.188966 1782 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/a045a9de-ba4f-4ef0-ac41-626b703a9d85-calico-apiserver-certs\") pod \"calico-apiserver-7bbbcdd48f-84dcw\" (UID: \"a045a9de-ba4f-4ef0-ac41-626b703a9d85\") " pod="calico-apiserver/calico-apiserver-7bbbcdd48f-84dcw" May 13 23:23:49.592273 kubelet[1782]: E0513 23:23:49.592158 1782 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 13 23:23:49.618151 containerd[1467]: time="2025-05-13T23:23:49.618103253Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7bbbcdd48f-84dcw,Uid:a045a9de-ba4f-4ef0-ac41-626b703a9d85,Namespace:calico-apiserver,Attempt:0,}" May 13 23:23:49.736052 systemd-networkd[1400]: cali4155838035b: Link UP May 13 23:23:49.736279 systemd-networkd[1400]: cali4155838035b: Gained carrier May 13 23:23:49.747915 containerd[1467]: 2025-05-13 23:23:49.669 [INFO][3031] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {10.0.0.24-k8s-calico--apiserver--7bbbcdd48f--84dcw-eth0 calico-apiserver-7bbbcdd48f- calico-apiserver a045a9de-ba4f-4ef0-ac41-626b703a9d85 1097 0 2025-05-13 23:23:48 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:7bbbcdd48f projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s 10.0.0.24 calico-apiserver-7bbbcdd48f-84dcw eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali4155838035b [] []}} ContainerID="08447b9e1c26989bb24029e82ca1260957b4f02498e6932665d15814b7d7856e" Namespace="calico-apiserver" Pod="calico-apiserver-7bbbcdd48f-84dcw" WorkloadEndpoint="10.0.0.24-k8s-calico--apiserver--7bbbcdd48f--84dcw-" May 13 23:23:49.747915 containerd[1467]: 2025-05-13 23:23:49.670 [INFO][3031] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="08447b9e1c26989bb24029e82ca1260957b4f02498e6932665d15814b7d7856e" Namespace="calico-apiserver" Pod="calico-apiserver-7bbbcdd48f-84dcw" WorkloadEndpoint="10.0.0.24-k8s-calico--apiserver--7bbbcdd48f--84dcw-eth0" May 13 23:23:49.747915 containerd[1467]: 2025-05-13 23:23:49.696 [INFO][3045] 
ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="08447b9e1c26989bb24029e82ca1260957b4f02498e6932665d15814b7d7856e" HandleID="k8s-pod-network.08447b9e1c26989bb24029e82ca1260957b4f02498e6932665d15814b7d7856e" Workload="10.0.0.24-k8s-calico--apiserver--7bbbcdd48f--84dcw-eth0" May 13 23:23:49.747915 containerd[1467]: 2025-05-13 23:23:49.707 [INFO][3045] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="08447b9e1c26989bb24029e82ca1260957b4f02498e6932665d15814b7d7856e" HandleID="k8s-pod-network.08447b9e1c26989bb24029e82ca1260957b4f02498e6932665d15814b7d7856e" Workload="10.0.0.24-k8s-calico--apiserver--7bbbcdd48f--84dcw-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x400027b5e0), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"10.0.0.24", "pod":"calico-apiserver-7bbbcdd48f-84dcw", "timestamp":"2025-05-13 23:23:49.696897886 +0000 UTC"}, Hostname:"10.0.0.24", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} May 13 23:23:49.747915 containerd[1467]: 2025-05-13 23:23:49.708 [INFO][3045] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 13 23:23:49.747915 containerd[1467]: 2025-05-13 23:23:49.708 [INFO][3045] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 13 23:23:49.747915 containerd[1467]: 2025-05-13 23:23:49.708 [INFO][3045] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host '10.0.0.24' May 13 23:23:49.747915 containerd[1467]: 2025-05-13 23:23:49.709 [INFO][3045] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.08447b9e1c26989bb24029e82ca1260957b4f02498e6932665d15814b7d7856e" host="10.0.0.24" May 13 23:23:49.747915 containerd[1467]: 2025-05-13 23:23:49.713 [INFO][3045] ipam/ipam.go 372: Looking up existing affinities for host host="10.0.0.24" May 13 23:23:49.747915 containerd[1467]: 2025-05-13 23:23:49.717 [INFO][3045] ipam/ipam.go 489: Trying affinity for 192.168.92.128/26 host="10.0.0.24" May 13 23:23:49.747915 containerd[1467]: 2025-05-13 23:23:49.718 [INFO][3045] ipam/ipam.go 155: Attempting to load block cidr=192.168.92.128/26 host="10.0.0.24" May 13 23:23:49.747915 containerd[1467]: 2025-05-13 23:23:49.720 [INFO][3045] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.92.128/26 host="10.0.0.24" May 13 23:23:49.747915 containerd[1467]: 2025-05-13 23:23:49.720 [INFO][3045] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.92.128/26 handle="k8s-pod-network.08447b9e1c26989bb24029e82ca1260957b4f02498e6932665d15814b7d7856e" host="10.0.0.24" May 13 23:23:49.747915 containerd[1467]: 2025-05-13 23:23:49.722 [INFO][3045] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.08447b9e1c26989bb24029e82ca1260957b4f02498e6932665d15814b7d7856e May 13 23:23:49.747915 containerd[1467]: 2025-05-13 23:23:49.726 [INFO][3045] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.92.128/26 handle="k8s-pod-network.08447b9e1c26989bb24029e82ca1260957b4f02498e6932665d15814b7d7856e" host="10.0.0.24" May 13 23:23:49.747915 containerd[1467]: 2025-05-13 23:23:49.732 [INFO][3045] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.92.131/26] block=192.168.92.128/26 handle="k8s-pod-network.08447b9e1c26989bb24029e82ca1260957b4f02498e6932665d15814b7d7856e" host="10.0.0.24" May 13 23:23:49.747915 containerd[1467]: 2025-05-13 23:23:49.732 [INFO][3045] 
ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.92.131/26] handle="k8s-pod-network.08447b9e1c26989bb24029e82ca1260957b4f02498e6932665d15814b7d7856e" host="10.0.0.24" May 13 23:23:49.747915 containerd[1467]: 2025-05-13 23:23:49.732 [INFO][3045] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 13 23:23:49.747915 containerd[1467]: 2025-05-13 23:23:49.732 [INFO][3045] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.92.131/26] IPv6=[] ContainerID="08447b9e1c26989bb24029e82ca1260957b4f02498e6932665d15814b7d7856e" HandleID="k8s-pod-network.08447b9e1c26989bb24029e82ca1260957b4f02498e6932665d15814b7d7856e" Workload="10.0.0.24-k8s-calico--apiserver--7bbbcdd48f--84dcw-eth0" May 13 23:23:49.748936 containerd[1467]: 2025-05-13 23:23:49.734 [INFO][3031] cni-plugin/k8s.go 386: Populated endpoint ContainerID="08447b9e1c26989bb24029e82ca1260957b4f02498e6932665d15814b7d7856e" Namespace="calico-apiserver" Pod="calico-apiserver-7bbbcdd48f-84dcw" WorkloadEndpoint="10.0.0.24-k8s-calico--apiserver--7bbbcdd48f--84dcw-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"10.0.0.24-k8s-calico--apiserver--7bbbcdd48f--84dcw-eth0", GenerateName:"calico-apiserver-7bbbcdd48f-", Namespace:"calico-apiserver", SelfLink:"", UID:"a045a9de-ba4f-4ef0-ac41-626b703a9d85", ResourceVersion:"1097", Generation:0, CreationTimestamp:time.Date(2025, time.May, 13, 23, 23, 48, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"7bbbcdd48f", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"10.0.0.24", ContainerID:"", Pod:"calico-apiserver-7bbbcdd48f-84dcw", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.92.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali4155838035b", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} May 13 23:23:49.748936 containerd[1467]: 2025-05-13 23:23:49.734 [INFO][3031] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.92.131/32] ContainerID="08447b9e1c26989bb24029e82ca1260957b4f02498e6932665d15814b7d7856e" Namespace="calico-apiserver" Pod="calico-apiserver-7bbbcdd48f-84dcw" WorkloadEndpoint="10.0.0.24-k8s-calico--apiserver--7bbbcdd48f--84dcw-eth0" May 13 23:23:49.748936 containerd[1467]: 2025-05-13 23:23:49.734 [INFO][3031] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali4155838035b ContainerID="08447b9e1c26989bb24029e82ca1260957b4f02498e6932665d15814b7d7856e" Namespace="calico-apiserver" Pod="calico-apiserver-7bbbcdd48f-84dcw" WorkloadEndpoint="10.0.0.24-k8s-calico--apiserver--7bbbcdd48f--84dcw-eth0" May 13 23:23:49.748936 containerd[1467]: 2025-05-13 23:23:49.736 [INFO][3031] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="08447b9e1c26989bb24029e82ca1260957b4f02498e6932665d15814b7d7856e" Namespace="calico-apiserver" 
Pod="calico-apiserver-7bbbcdd48f-84dcw" WorkloadEndpoint="10.0.0.24-k8s-calico--apiserver--7bbbcdd48f--84dcw-eth0" May 13 23:23:49.748936 containerd[1467]: 2025-05-13 23:23:49.736 [INFO][3031] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="08447b9e1c26989bb24029e82ca1260957b4f02498e6932665d15814b7d7856e" Namespace="calico-apiserver" Pod="calico-apiserver-7bbbcdd48f-84dcw" WorkloadEndpoint="10.0.0.24-k8s-calico--apiserver--7bbbcdd48f--84dcw-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"10.0.0.24-k8s-calico--apiserver--7bbbcdd48f--84dcw-eth0", GenerateName:"calico-apiserver-7bbbcdd48f-", Namespace:"calico-apiserver", SelfLink:"", UID:"a045a9de-ba4f-4ef0-ac41-626b703a9d85", ResourceVersion:"1097", Generation:0, CreationTimestamp:time.Date(2025, time.May, 13, 23, 23, 48, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"7bbbcdd48f", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"10.0.0.24", ContainerID:"08447b9e1c26989bb24029e82ca1260957b4f02498e6932665d15814b7d7856e", Pod:"calico-apiserver-7bbbcdd48f-84dcw", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.92.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali4155838035b", MAC:"6a:32:16:59:c7:0f", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} May 13 23:23:49.748936 containerd[1467]: 2025-05-13 23:23:49.744 [INFO][3031] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="08447b9e1c26989bb24029e82ca1260957b4f02498e6932665d15814b7d7856e" Namespace="calico-apiserver" Pod="calico-apiserver-7bbbcdd48f-84dcw" WorkloadEndpoint="10.0.0.24-k8s-calico--apiserver--7bbbcdd48f--84dcw-eth0" May 13 23:23:49.767736 containerd[1467]: time="2025-05-13T23:23:49.767606786Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 13 23:23:49.767736 containerd[1467]: time="2025-05-13T23:23:49.767660705Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 13 23:23:49.767736 containerd[1467]: time="2025-05-13T23:23:49.767671713Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 13 23:23:49.767937 containerd[1467]: time="2025-05-13T23:23:49.767748369Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 13 23:23:49.790334 systemd[1]: Started cri-containerd-08447b9e1c26989bb24029e82ca1260957b4f02498e6932665d15814b7d7856e.scope - libcontainer container 08447b9e1c26989bb24029e82ca1260957b4f02498e6932665d15814b7d7856e. 
May 13 23:23:49.800897 systemd-resolved[1326]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address May 13 23:23:49.862255 containerd[1467]: time="2025-05-13T23:23:49.862031131Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7bbbcdd48f-84dcw,Uid:a045a9de-ba4f-4ef0-ac41-626b703a9d85,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"08447b9e1c26989bb24029e82ca1260957b4f02498e6932665d15814b7d7856e\"" May 13 23:23:50.023638 containerd[1467]: time="2025-05-13T23:23:50.023583242Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi:v3.29.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 13 23:23:50.025494 containerd[1467]: time="2025-05-13T23:23:50.024108537Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.29.3: active requests=0, bytes read=7474935" May 13 23:23:50.025494 containerd[1467]: time="2025-05-13T23:23:50.024887554Z" level=info msg="ImageCreate event name:\"sha256:15faf29e8b518d846c91c15785ff89e783d356ea0f2b22826f47a556ea32645b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 13 23:23:50.026871 containerd[1467]: time="2025-05-13T23:23:50.026838078Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi@sha256:72455a36febc7c56ec8881007f4805caed5764026a0694e4f86a2503209b2d31\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 13 23:23:50.027458 containerd[1467]: time="2025-05-13T23:23:50.027422811Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/csi:v3.29.3\" with image id \"sha256:15faf29e8b518d846c91c15785ff89e783d356ea0f2b22826f47a556ea32645b\", repo tag \"ghcr.io/flatcar/calico/csi:v3.29.3\", repo digest \"ghcr.io/flatcar/calico/csi@sha256:72455a36febc7c56ec8881007f4805caed5764026a0694e4f86a2503209b2d31\", size \"8844117\" in 1.183204695s" May 13 23:23:50.027490 containerd[1467]: time="2025-05-13T23:23:50.027457633Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.29.3\" returns image reference \"sha256:15faf29e8b518d846c91c15785ff89e783d356ea0f2b22826f47a556ea32645b\"" May 13 23:23:50.028515 containerd[1467]: time="2025-05-13T23:23:50.028325827Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.29.3\"" May 13 23:23:50.029407 containerd[1467]: time="2025-05-13T23:23:50.029364490Z" level=info msg="CreateContainer within sandbox \"301dd6f1da94158412a05c137a34c00c98e4b68b0c51a10d09852ba1cd23013d\" for container &ContainerMetadata{Name:calico-csi,Attempt:0,}" May 13 23:23:50.066591 containerd[1467]: time="2025-05-13T23:23:50.066493412Z" level=info msg="CreateContainer within sandbox \"301dd6f1da94158412a05c137a34c00c98e4b68b0c51a10d09852ba1cd23013d\" for &ContainerMetadata{Name:calico-csi,Attempt:0,} returns container id \"e74e7d6a33689afe0a8bb336aa8c95b3e08f1c7b2daf2243f7a9be694d11779f\"" May 13 23:23:50.067179 containerd[1467]: time="2025-05-13T23:23:50.067121373Z" level=info msg="StartContainer for \"e74e7d6a33689afe0a8bb336aa8c95b3e08f1c7b2daf2243f7a9be694d11779f\"" May 13 23:23:50.097304 systemd[1]: Started cri-containerd-e74e7d6a33689afe0a8bb336aa8c95b3e08f1c7b2daf2243f7a9be694d11779f.scope - libcontainer container e74e7d6a33689afe0a8bb336aa8c95b3e08f1c7b2daf2243f7a9be694d11779f. 
May 13 23:23:50.126381 containerd[1467]: time="2025-05-13T23:23:50.125890658Z" level=info msg="StartContainer for \"e74e7d6a33689afe0a8bb336aa8c95b3e08f1c7b2daf2243f7a9be694d11779f\" returns successfully" May 13 23:23:50.593273 kubelet[1782]: E0513 23:23:50.593155 1782 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 13 23:23:51.225729 systemd-networkd[1400]: cali4155838035b: Gained IPv6LL May 13 23:23:51.433184 containerd[1467]: time="2025-05-13T23:23:51.432952149Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver:v3.29.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 13 23:23:51.434028 containerd[1467]: time="2025-05-13T23:23:51.433877425Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.29.3: active requests=0, bytes read=40247603" May 13 23:23:51.434785 containerd[1467]: time="2025-05-13T23:23:51.434732502Z" level=info msg="ImageCreate event name:\"sha256:eca64fb9fcc40e83ed2310ac1fab340ba460a939c54e10dc0b7428f02b9b6253\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 13 23:23:51.436946 containerd[1467]: time="2025-05-13T23:23:51.436902433Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver@sha256:bcb659f25f9aebaa389ed1dbb65edb39478ddf82c57d07d8da474e8cab38d77b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 13 23:23:51.437725 containerd[1467]: time="2025-05-13T23:23:51.437615391Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.29.3\" with image id \"sha256:eca64fb9fcc40e83ed2310ac1fab340ba460a939c54e10dc0b7428f02b9b6253\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.29.3\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:bcb659f25f9aebaa389ed1dbb65edb39478ddf82c57d07d8da474e8cab38d77b\", size \"41616801\" in 1.409257425s" May 13 23:23:51.437725 containerd[1467]: time="2025-05-13T23:23:51.437642927Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.29.3\" returns image reference \"sha256:eca64fb9fcc40e83ed2310ac1fab340ba460a939c54e10dc0b7428f02b9b6253\"" May 13 23:23:51.439116 containerd[1467]: time="2025-05-13T23:23:51.439084731Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.3\"" May 13 23:23:51.439937 containerd[1467]: time="2025-05-13T23:23:51.439907471Z" level=info msg="CreateContainer within sandbox \"08447b9e1c26989bb24029e82ca1260957b4f02498e6932665d15814b7d7856e\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" May 13 23:23:51.450311 containerd[1467]: time="2025-05-13T23:23:51.450269254Z" level=info msg="CreateContainer within sandbox \"08447b9e1c26989bb24029e82ca1260957b4f02498e6932665d15814b7d7856e\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"03010f316968045b952afec1e436ca8423c36645c0f8464a4ce87156db74d250\"" May 13 23:23:51.450902 containerd[1467]: time="2025-05-13T23:23:51.450838052Z" level=info msg="StartContainer for \"03010f316968045b952afec1e436ca8423c36645c0f8464a4ce87156db74d250\"" May 13 23:23:51.478301 systemd[1]: Started cri-containerd-03010f316968045b952afec1e436ca8423c36645c0f8464a4ce87156db74d250.scope - libcontainer container 03010f316968045b952afec1e436ca8423c36645c0f8464a4ce87156db74d250. 
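The pod_startup_latency_tracker entries above are consistent with a simple relationship: the SLO duration equals the end-to-end duration with the image-pull window subtracted. The Go sketch below only reproduces that arithmetic for calico-apiserver-7bbbcdd48f-84dcw, with the timestamps copied from the entry (it is not the kubelet's actual tracker code, and the monotonic "m=+…" suffixes are dropped before parsing):

```go
package main

import (
	"fmt"
	"time"
)

// Layout matching the timestamps in the kubelet entries above,
// once the trailing monotonic "m=+…" part is removed.
const layout = "2006-01-02 15:04:05.999999999 -0700 MST"

func mustParse(s string) time.Time {
	t, err := time.Parse(layout, s)
	if err != nil {
		panic(err)
	}
	return t
}

func main() {
	firstPull := mustParse("2025-05-13 23:23:49.863974148 +0000 UTC") // firstStartedPulling
	lastPull := mustParse("2025-05-13 23:23:51.438554676 +0000 UTC")  // lastFinishedPulling
	e2e := 3777564416 * time.Nanosecond                               // podStartE2EDuration

	// End-to-end duration minus time spent pulling images.
	fmt.Println(e2e - lastPull.Sub(firstPull)) // 2.202983888s, matching podStartSLOduration
}
```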
May 13 23:23:51.511063 containerd[1467]: time="2025-05-13T23:23:51.511019322Z" level=info msg="StartContainer for \"03010f316968045b952afec1e436ca8423c36645c0f8464a4ce87156db74d250\" returns successfully" May 13 23:23:51.593577 kubelet[1782]: E0513 23:23:51.593523 1782 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 13 23:23:51.777884 kubelet[1782]: I0513 23:23:51.777475 1782 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="default/nginx-deployment-85f456d6dd-2wgmf" podStartSLOduration=5.989169445 podStartE2EDuration="8.777456236s" podCreationTimestamp="2025-05-13 23:23:43 +0000 UTC" firstStartedPulling="2025-05-13 23:23:46.055700895 +0000 UTC m=+14.286461326" lastFinishedPulling="2025-05-13 23:23:48.843987686 +0000 UTC m=+17.074748117" observedRunningTime="2025-05-13 23:23:49.773340085 +0000 UTC m=+18.004100516" watchObservedRunningTime="2025-05-13 23:23:51.777456236 +0000 UTC m=+20.008216667" May 13 23:23:51.777884 kubelet[1782]: I0513 23:23:51.777569 1782 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-7bbbcdd48f-84dcw" podStartSLOduration=2.202983888 podStartE2EDuration="3.777564416s" podCreationTimestamp="2025-05-13 23:23:48 +0000 UTC" firstStartedPulling="2025-05-13 23:23:49.863974148 +0000 UTC m=+18.094734579" lastFinishedPulling="2025-05-13 23:23:51.438554676 +0000 UTC m=+19.669315107" observedRunningTime="2025-05-13 23:23:51.777456996 +0000 UTC m=+20.008217467" watchObservedRunningTime="2025-05-13 23:23:51.777564416 +0000 UTC m=+20.008324847" May 13 23:23:52.379888 containerd[1467]: time="2025-05-13T23:23:52.379696982Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 13 23:23:52.380253 containerd[1467]: time="2025-05-13T23:23:52.380216996Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.29.3: active requests=0, bytes read=13124299" May 13 23:23:52.381232 containerd[1467]: time="2025-05-13T23:23:52.381177625Z" level=info msg="ImageCreate event name:\"sha256:a91b1f00752edc175f270a01b33683fa80818734aa2274388785eaf3364315dc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 13 23:23:52.383904 containerd[1467]: time="2025-05-13T23:23:52.383572875Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar@sha256:3f15090a9bb45773d1fd019455ec3d3f3746f3287c35d8013e497b38d8237324\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 13 23:23:52.384975 containerd[1467]: time="2025-05-13T23:23:52.384932979Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.3\" with image id \"sha256:a91b1f00752edc175f270a01b33683fa80818734aa2274388785eaf3364315dc\", repo tag \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.3\", repo digest \"ghcr.io/flatcar/calico/node-driver-registrar@sha256:3f15090a9bb45773d1fd019455ec3d3f3746f3287c35d8013e497b38d8237324\", size \"14493433\" in 945.808385ms" May 13 23:23:52.385023 containerd[1467]: time="2025-05-13T23:23:52.384965795Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.3\" returns image reference \"sha256:a91b1f00752edc175f270a01b33683fa80818734aa2274388785eaf3364315dc\"" May 13 23:23:52.386835 containerd[1467]: time="2025-05-13T23:23:52.386782042Z" level=info msg="CreateContainer within sandbox 
\"301dd6f1da94158412a05c137a34c00c98e4b68b0c51a10d09852ba1cd23013d\" for container &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,}" May 13 23:23:52.405522 containerd[1467]: time="2025-05-13T23:23:52.405388650Z" level=info msg="CreateContainer within sandbox \"301dd6f1da94158412a05c137a34c00c98e4b68b0c51a10d09852ba1cd23013d\" for &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,} returns container id \"1c04c755ac072f377db8a9307b0bf8e57fe02dd49dbd38b81edec4e75bcf5581\"" May 13 23:23:52.406068 containerd[1467]: time="2025-05-13T23:23:52.406040568Z" level=info msg="StartContainer for \"1c04c755ac072f377db8a9307b0bf8e57fe02dd49dbd38b81edec4e75bcf5581\"" May 13 23:23:52.434311 systemd[1]: Started cri-containerd-1c04c755ac072f377db8a9307b0bf8e57fe02dd49dbd38b81edec4e75bcf5581.scope - libcontainer container 1c04c755ac072f377db8a9307b0bf8e57fe02dd49dbd38b81edec4e75bcf5581. May 13 23:23:52.480993 containerd[1467]: time="2025-05-13T23:23:52.480905454Z" level=info msg="StartContainer for \"1c04c755ac072f377db8a9307b0bf8e57fe02dd49dbd38b81edec4e75bcf5581\" returns successfully" May 13 23:23:52.582677 kubelet[1782]: E0513 23:23:52.582642 1782 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 13 23:23:52.593927 kubelet[1782]: E0513 23:23:52.593879 1782 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 13 23:23:52.741549 kubelet[1782]: I0513 23:23:52.741449 1782 csi_plugin.go:100] kubernetes.io/csi: Trying to validate a new CSI Driver with name: csi.tigera.io endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock versions: 1.0.0 May 13 23:23:52.741549 kubelet[1782]: I0513 23:23:52.741484 1782 csi_plugin.go:113] kubernetes.io/csi: Register new plugin with name: csi.tigera.io at endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock May 13 23:23:52.779633 kubelet[1782]: I0513 23:23:52.779589 1782 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" May 13 23:23:52.787306 kubelet[1782]: I0513 23:23:52.787249 1782 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/csi-node-driver-tvqd8" podStartSLOduration=14.499011835 podStartE2EDuration="20.787234872s" podCreationTimestamp="2025-05-13 23:23:32 +0000 UTC" firstStartedPulling="2025-05-13 23:23:46.09738251 +0000 UTC m=+14.328142941" lastFinishedPulling="2025-05-13 23:23:52.385605587 +0000 UTC m=+20.616365978" observedRunningTime="2025-05-13 23:23:52.787197574 +0000 UTC m=+21.017958005" watchObservedRunningTime="2025-05-13 23:23:52.787234872 +0000 UTC m=+21.017995423" May 13 23:23:53.594322 kubelet[1782]: E0513 23:23:53.594285 1782 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 13 23:23:54.594629 kubelet[1782]: E0513 23:23:54.594583 1782 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 13 23:23:55.595392 kubelet[1782]: E0513 23:23:55.595349 1782 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 13 23:23:56.595667 kubelet[1782]: E0513 23:23:56.595613 1782 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 13 23:23:56.866998 kubelet[1782]: I0513 23:23:56.866749 1782 topology_manager.go:215] "Topology Admit Handler" podUID="2edd6321-c902-4ee6-861d-6e1b1fb51c1a" 
podNamespace="default" podName="nfs-server-provisioner-0" May 13 23:23:56.873373 systemd[1]: Created slice kubepods-besteffort-pod2edd6321_c902_4ee6_861d_6e1b1fb51c1a.slice - libcontainer container kubepods-besteffort-pod2edd6321_c902_4ee6_861d_6e1b1fb51c1a.slice. May 13 23:23:57.031521 kubelet[1782]: I0513 23:23:57.031459 1782 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jrlv9\" (UniqueName: \"kubernetes.io/projected/2edd6321-c902-4ee6-861d-6e1b1fb51c1a-kube-api-access-jrlv9\") pod \"nfs-server-provisioner-0\" (UID: \"2edd6321-c902-4ee6-861d-6e1b1fb51c1a\") " pod="default/nfs-server-provisioner-0" May 13 23:23:57.031521 kubelet[1782]: I0513 23:23:57.031514 1782 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"data\" (UniqueName: \"kubernetes.io/empty-dir/2edd6321-c902-4ee6-861d-6e1b1fb51c1a-data\") pod \"nfs-server-provisioner-0\" (UID: \"2edd6321-c902-4ee6-861d-6e1b1fb51c1a\") " pod="default/nfs-server-provisioner-0" May 13 23:23:57.176308 containerd[1467]: time="2025-05-13T23:23:57.176199475Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nfs-server-provisioner-0,Uid:2edd6321-c902-4ee6-861d-6e1b1fb51c1a,Namespace:default,Attempt:0,}" May 13 23:23:57.330591 systemd-networkd[1400]: cali60e51b789ff: Link UP May 13 23:23:57.331544 systemd-networkd[1400]: cali60e51b789ff: Gained carrier May 13 23:23:57.360160 containerd[1467]: 2025-05-13 23:23:57.242 [INFO][3268] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {10.0.0.24-k8s-nfs--server--provisioner--0-eth0 nfs-server-provisioner- default 2edd6321-c902-4ee6-861d-6e1b1fb51c1a 1183 0 2025-05-13 23:23:56 +0000 UTC map[app:nfs-server-provisioner apps.kubernetes.io/pod-index:0 chart:nfs-server-provisioner-1.8.0 controller-revision-hash:nfs-server-provisioner-d5cbb7f57 heritage:Helm projectcalico.org/namespace:default projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:nfs-server-provisioner release:nfs-server-provisioner statefulset.kubernetes.io/pod-name:nfs-server-provisioner-0] map[] [] [] []} {k8s 10.0.0.24 nfs-server-provisioner-0 eth0 nfs-server-provisioner [] [] [kns.default ksa.default.nfs-server-provisioner] cali60e51b789ff [{nfs TCP 2049 0 } {nfs-udp UDP 2049 0 } {nlockmgr TCP 32803 0 } {nlockmgr-udp UDP 32803 0 } {mountd TCP 20048 0 } {mountd-udp UDP 20048 0 } {rquotad TCP 875 0 } {rquotad-udp UDP 875 0 } {rpcbind TCP 111 0 } {rpcbind-udp UDP 111 0 } {statd TCP 662 0 } {statd-udp UDP 662 0 }] []}} ContainerID="c74e564cf5376ca3c8407f428db9540d2431b3fe3b2e8d23f7a30f43869332d2" Namespace="default" Pod="nfs-server-provisioner-0" WorkloadEndpoint="10.0.0.24-k8s-nfs--server--provisioner--0-" May 13 23:23:57.360160 containerd[1467]: 2025-05-13 23:23:57.242 [INFO][3268] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="c74e564cf5376ca3c8407f428db9540d2431b3fe3b2e8d23f7a30f43869332d2" Namespace="default" Pod="nfs-server-provisioner-0" WorkloadEndpoint="10.0.0.24-k8s-nfs--server--provisioner--0-eth0" May 13 23:23:57.360160 containerd[1467]: 2025-05-13 23:23:57.273 [INFO][3277] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="c74e564cf5376ca3c8407f428db9540d2431b3fe3b2e8d23f7a30f43869332d2" HandleID="k8s-pod-network.c74e564cf5376ca3c8407f428db9540d2431b3fe3b2e8d23f7a30f43869332d2" Workload="10.0.0.24-k8s-nfs--server--provisioner--0-eth0" May 13 23:23:57.360160 containerd[1467]: 2025-05-13 
23:23:57.284 [INFO][3277] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="c74e564cf5376ca3c8407f428db9540d2431b3fe3b2e8d23f7a30f43869332d2" HandleID="k8s-pod-network.c74e564cf5376ca3c8407f428db9540d2431b3fe3b2e8d23f7a30f43869332d2" Workload="10.0.0.24-k8s-nfs--server--provisioner--0-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40005d82d0), Attrs:map[string]string{"namespace":"default", "node":"10.0.0.24", "pod":"nfs-server-provisioner-0", "timestamp":"2025-05-13 23:23:57.27303081 +0000 UTC"}, Hostname:"10.0.0.24", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} May 13 23:23:57.360160 containerd[1467]: 2025-05-13 23:23:57.284 [INFO][3277] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 13 23:23:57.360160 containerd[1467]: 2025-05-13 23:23:57.284 [INFO][3277] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 13 23:23:57.360160 containerd[1467]: 2025-05-13 23:23:57.284 [INFO][3277] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host '10.0.0.24' May 13 23:23:57.360160 containerd[1467]: 2025-05-13 23:23:57.287 [INFO][3277] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.c74e564cf5376ca3c8407f428db9540d2431b3fe3b2e8d23f7a30f43869332d2" host="10.0.0.24" May 13 23:23:57.360160 containerd[1467]: 2025-05-13 23:23:57.297 [INFO][3277] ipam/ipam.go 372: Looking up existing affinities for host host="10.0.0.24" May 13 23:23:57.360160 containerd[1467]: 2025-05-13 23:23:57.303 [INFO][3277] ipam/ipam.go 489: Trying affinity for 192.168.92.128/26 host="10.0.0.24" May 13 23:23:57.360160 containerd[1467]: 2025-05-13 23:23:57.305 [INFO][3277] ipam/ipam.go 155: Attempting to load block cidr=192.168.92.128/26 host="10.0.0.24" May 13 23:23:57.360160 containerd[1467]: 2025-05-13 23:23:57.308 [INFO][3277] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.92.128/26 host="10.0.0.24" May 13 23:23:57.360160 containerd[1467]: 2025-05-13 23:23:57.308 [INFO][3277] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.92.128/26 handle="k8s-pod-network.c74e564cf5376ca3c8407f428db9540d2431b3fe3b2e8d23f7a30f43869332d2" host="10.0.0.24" May 13 23:23:57.360160 containerd[1467]: 2025-05-13 23:23:57.311 [INFO][3277] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.c74e564cf5376ca3c8407f428db9540d2431b3fe3b2e8d23f7a30f43869332d2 May 13 23:23:57.360160 containerd[1467]: 2025-05-13 23:23:57.318 [INFO][3277] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.92.128/26 handle="k8s-pod-network.c74e564cf5376ca3c8407f428db9540d2431b3fe3b2e8d23f7a30f43869332d2" host="10.0.0.24" May 13 23:23:57.360160 containerd[1467]: 2025-05-13 23:23:57.325 [INFO][3277] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.92.132/26] block=192.168.92.128/26 handle="k8s-pod-network.c74e564cf5376ca3c8407f428db9540d2431b3fe3b2e8d23f7a30f43869332d2" host="10.0.0.24" May 13 23:23:57.360160 containerd[1467]: 2025-05-13 23:23:57.325 [INFO][3277] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.92.132/26] handle="k8s-pod-network.c74e564cf5376ca3c8407f428db9540d2431b3fe3b2e8d23f7a30f43869332d2" host="10.0.0.24" May 13 23:23:57.360160 containerd[1467]: 2025-05-13 23:23:57.325 [INFO][3277] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
May 13 23:23:57.360160 containerd[1467]: 2025-05-13 23:23:57.325 [INFO][3277] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.92.132/26] IPv6=[] ContainerID="c74e564cf5376ca3c8407f428db9540d2431b3fe3b2e8d23f7a30f43869332d2" HandleID="k8s-pod-network.c74e564cf5376ca3c8407f428db9540d2431b3fe3b2e8d23f7a30f43869332d2" Workload="10.0.0.24-k8s-nfs--server--provisioner--0-eth0" May 13 23:23:57.361110 containerd[1467]: 2025-05-13 23:23:57.327 [INFO][3268] cni-plugin/k8s.go 386: Populated endpoint ContainerID="c74e564cf5376ca3c8407f428db9540d2431b3fe3b2e8d23f7a30f43869332d2" Namespace="default" Pod="nfs-server-provisioner-0" WorkloadEndpoint="10.0.0.24-k8s-nfs--server--provisioner--0-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"10.0.0.24-k8s-nfs--server--provisioner--0-eth0", GenerateName:"nfs-server-provisioner-", Namespace:"default", SelfLink:"", UID:"2edd6321-c902-4ee6-861d-6e1b1fb51c1a", ResourceVersion:"1183", Generation:0, CreationTimestamp:time.Date(2025, time.May, 13, 23, 23, 56, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app":"nfs-server-provisioner", "apps.kubernetes.io/pod-index":"0", "chart":"nfs-server-provisioner-1.8.0", "controller-revision-hash":"nfs-server-provisioner-d5cbb7f57", "heritage":"Helm", "projectcalico.org/namespace":"default", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"nfs-server-provisioner", "release":"nfs-server-provisioner", "statefulset.kubernetes.io/pod-name":"nfs-server-provisioner-0"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"10.0.0.24", ContainerID:"", Pod:"nfs-server-provisioner-0", Endpoint:"eth0", ServiceAccountName:"nfs-server-provisioner", IPNetworks:[]string{"192.168.92.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.default", "ksa.default.nfs-server-provisioner"}, InterfaceName:"cali60e51b789ff", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"nfs", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x801, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"nfs-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x801, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"nlockmgr", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x8023, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"nlockmgr-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x8023, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"mountd", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x4e50, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"mountd-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x4e50, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"rquotad", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x36b, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"rquotad-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x36b, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"rpcbind", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x6f, HostPort:0x0, HostIP:""}, 
v3.WorkloadEndpointPort{Name:"rpcbind-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x6f, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"statd", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x296, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"statd-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x296, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} May 13 23:23:57.361110 containerd[1467]: 2025-05-13 23:23:57.327 [INFO][3268] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.92.132/32] ContainerID="c74e564cf5376ca3c8407f428db9540d2431b3fe3b2e8d23f7a30f43869332d2" Namespace="default" Pod="nfs-server-provisioner-0" WorkloadEndpoint="10.0.0.24-k8s-nfs--server--provisioner--0-eth0" May 13 23:23:57.361110 containerd[1467]: 2025-05-13 23:23:57.327 [INFO][3268] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali60e51b789ff ContainerID="c74e564cf5376ca3c8407f428db9540d2431b3fe3b2e8d23f7a30f43869332d2" Namespace="default" Pod="nfs-server-provisioner-0" WorkloadEndpoint="10.0.0.24-k8s-nfs--server--provisioner--0-eth0" May 13 23:23:57.361110 containerd[1467]: 2025-05-13 23:23:57.331 [INFO][3268] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="c74e564cf5376ca3c8407f428db9540d2431b3fe3b2e8d23f7a30f43869332d2" Namespace="default" Pod="nfs-server-provisioner-0" WorkloadEndpoint="10.0.0.24-k8s-nfs--server--provisioner--0-eth0" May 13 23:23:57.361374 containerd[1467]: 2025-05-13 23:23:57.332 [INFO][3268] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="c74e564cf5376ca3c8407f428db9540d2431b3fe3b2e8d23f7a30f43869332d2" Namespace="default" Pod="nfs-server-provisioner-0" WorkloadEndpoint="10.0.0.24-k8s-nfs--server--provisioner--0-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"10.0.0.24-k8s-nfs--server--provisioner--0-eth0", GenerateName:"nfs-server-provisioner-", Namespace:"default", SelfLink:"", UID:"2edd6321-c902-4ee6-861d-6e1b1fb51c1a", ResourceVersion:"1183", Generation:0, CreationTimestamp:time.Date(2025, time.May, 13, 23, 23, 56, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app":"nfs-server-provisioner", "apps.kubernetes.io/pod-index":"0", "chart":"nfs-server-provisioner-1.8.0", "controller-revision-hash":"nfs-server-provisioner-d5cbb7f57", "heritage":"Helm", "projectcalico.org/namespace":"default", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"nfs-server-provisioner", "release":"nfs-server-provisioner", "statefulset.kubernetes.io/pod-name":"nfs-server-provisioner-0"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"10.0.0.24", ContainerID:"c74e564cf5376ca3c8407f428db9540d2431b3fe3b2e8d23f7a30f43869332d2", Pod:"nfs-server-provisioner-0", Endpoint:"eth0", ServiceAccountName:"nfs-server-provisioner", IPNetworks:[]string{"192.168.92.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.default", "ksa.default.nfs-server-provisioner"}, InterfaceName:"cali60e51b789ff", MAC:"02:ec:e8:de:09:ec", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"nfs", Protocol:numorstring.Protocol{Type:1, 
NumVal:0x0, StrVal:"TCP"}, Port:0x801, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"nfs-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x801, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"nlockmgr", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x8023, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"nlockmgr-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x8023, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"mountd", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x4e50, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"mountd-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x4e50, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"rquotad", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x36b, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"rquotad-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x36b, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"rpcbind", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x6f, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"rpcbind-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x6f, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"statd", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x296, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"statd-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x296, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} May 13 23:23:57.361374 containerd[1467]: 2025-05-13 23:23:57.352 [INFO][3268] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="c74e564cf5376ca3c8407f428db9540d2431b3fe3b2e8d23f7a30f43869332d2" Namespace="default" Pod="nfs-server-provisioner-0" WorkloadEndpoint="10.0.0.24-k8s-nfs--server--provisioner--0-eth0" May 13 23:23:57.378514 containerd[1467]: time="2025-05-13T23:23:57.378288182Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 13 23:23:57.378514 containerd[1467]: time="2025-05-13T23:23:57.378339679Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 13 23:23:57.378514 containerd[1467]: time="2025-05-13T23:23:57.378350762Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 13 23:23:57.378514 containerd[1467]: time="2025-05-13T23:23:57.378429147Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 13 23:23:57.407335 systemd[1]: Started cri-containerd-c74e564cf5376ca3c8407f428db9540d2431b3fe3b2e8d23f7a30f43869332d2.scope - libcontainer container c74e564cf5376ca3c8407f428db9540d2431b3fe3b2e8d23f7a30f43869332d2. 
May 13 23:23:57.418486 systemd-resolved[1326]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address May 13 23:23:57.438213 containerd[1467]: time="2025-05-13T23:23:57.438101912Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nfs-server-provisioner-0,Uid:2edd6321-c902-4ee6-861d-6e1b1fb51c1a,Namespace:default,Attempt:0,} returns sandbox id \"c74e564cf5376ca3c8407f428db9540d2431b3fe3b2e8d23f7a30f43869332d2\"" May 13 23:23:57.440841 containerd[1467]: time="2025-05-13T23:23:57.440557853Z" level=info msg="PullImage \"registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8\"" May 13 23:23:57.596264 kubelet[1782]: E0513 23:23:57.596211 1782 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 13 23:23:58.454646 systemd-networkd[1400]: cali60e51b789ff: Gained IPv6LL May 13 23:23:58.596437 kubelet[1782]: E0513 23:23:58.596363 1782 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 13 23:23:59.597325 kubelet[1782]: E0513 23:23:59.597279 1782 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 13 23:23:59.637249 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3035228914.mount: Deactivated successfully. May 13 23:24:00.597727 kubelet[1782]: E0513 23:24:00.597684 1782 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 13 23:24:00.927623 containerd[1467]: time="2025-05-13T23:24:00.927495838Z" level=info msg="ImageCreate event name:\"registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 13 23:24:00.928457 containerd[1467]: time="2025-05-13T23:24:00.928389838Z" level=info msg="stop pulling image registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8: active requests=0, bytes read=87373625" May 13 23:24:00.929109 containerd[1467]: time="2025-05-13T23:24:00.929067900Z" level=info msg="ImageCreate event name:\"sha256:5a42a519e0a8cf95c3c5f18f767c58c8c8b072aaea0a26e5e47a6f206c7df685\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 13 23:24:00.933753 containerd[1467]: time="2025-05-13T23:24:00.933705025Z" level=info msg="ImageCreate event name:\"registry.k8s.io/sig-storage/nfs-provisioner@sha256:c825f3d5e28bde099bd7a3daace28772d412c9157ad47fa752a9ad0baafc118d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 13 23:24:00.936411 containerd[1467]: time="2025-05-13T23:24:00.936369701Z" level=info msg="Pulled image \"registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8\" with image id \"sha256:5a42a519e0a8cf95c3c5f18f767c58c8c8b072aaea0a26e5e47a6f206c7df685\", repo tag \"registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8\", repo digest \"registry.k8s.io/sig-storage/nfs-provisioner@sha256:c825f3d5e28bde099bd7a3daace28772d412c9157ad47fa752a9ad0baafc118d\", size \"87371201\" in 3.495723701s" May 13 23:24:00.936460 containerd[1467]: time="2025-05-13T23:24:00.936413352Z" level=info msg="PullImage \"registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8\" returns image reference \"sha256:5a42a519e0a8cf95c3c5f18f767c58c8c8b072aaea0a26e5e47a6f206c7df685\"" May 13 23:24:00.938569 containerd[1467]: time="2025-05-13T23:24:00.938523119Z" level=info msg="CreateContainer within sandbox \"c74e564cf5376ca3c8407f428db9540d2431b3fe3b2e8d23f7a30f43869332d2\" for container 
&ContainerMetadata{Name:nfs-server-provisioner,Attempt:0,}" May 13 23:24:00.950695 containerd[1467]: time="2025-05-13T23:24:00.950581436Z" level=info msg="CreateContainer within sandbox \"c74e564cf5376ca3c8407f428db9540d2431b3fe3b2e8d23f7a30f43869332d2\" for &ContainerMetadata{Name:nfs-server-provisioner,Attempt:0,} returns container id \"6d6104a862779580c4ea5f1a7161e438eaef991e404bbe60f4e6b362042c7824\"" May 13 23:24:00.951087 containerd[1467]: time="2025-05-13T23:24:00.951062765Z" level=info msg="StartContainer for \"6d6104a862779580c4ea5f1a7161e438eaef991e404bbe60f4e6b362042c7824\"" May 13 23:24:00.992320 systemd[1]: Started cri-containerd-6d6104a862779580c4ea5f1a7161e438eaef991e404bbe60f4e6b362042c7824.scope - libcontainer container 6d6104a862779580c4ea5f1a7161e438eaef991e404bbe60f4e6b362042c7824. May 13 23:24:01.024215 containerd[1467]: time="2025-05-13T23:24:01.024161937Z" level=info msg="StartContainer for \"6d6104a862779580c4ea5f1a7161e438eaef991e404bbe60f4e6b362042c7824\" returns successfully" May 13 23:24:01.598426 kubelet[1782]: E0513 23:24:01.598372 1782 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 13 23:24:02.599051 kubelet[1782]: E0513 23:24:02.599012 1782 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 13 23:24:03.599920 kubelet[1782]: E0513 23:24:03.599877 1782 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 13 23:24:04.600879 kubelet[1782]: E0513 23:24:04.600832 1782 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 13 23:24:05.601392 kubelet[1782]: E0513 23:24:05.601351 1782 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 13 23:24:06.602514 kubelet[1782]: E0513 23:24:06.602465 1782 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 13 23:24:07.602806 kubelet[1782]: E0513 23:24:07.602764 1782 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 13 23:24:08.058085 update_engine[1453]: I20250513 23:24:08.058012 1453 update_attempter.cc:509] Updating boot flags... 
May 13 23:24:08.097203 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 40 scanned by (udev-worker) (3463) May 13 23:24:08.143244 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 40 scanned by (udev-worker) (3463) May 13 23:24:08.189254 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 40 scanned by (udev-worker) (3463) May 13 23:24:08.603264 kubelet[1782]: E0513 23:24:08.603220 1782 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 13 23:24:09.604181 kubelet[1782]: E0513 23:24:09.604123 1782 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 13 23:24:10.002010 kubelet[1782]: I0513 23:24:10.001501 1782 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" May 13 23:24:10.038089 kubelet[1782]: I0513 23:24:10.038040 1782 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="default/nfs-server-provisioner-0" podStartSLOduration=10.541079335 podStartE2EDuration="14.038023432s" podCreationTimestamp="2025-05-13 23:23:56 +0000 UTC" firstStartedPulling="2025-05-13 23:23:57.440055293 +0000 UTC m=+25.670815724" lastFinishedPulling="2025-05-13 23:24:00.93699939 +0000 UTC m=+29.167759821" observedRunningTime="2025-05-13 23:24:01.808539871 +0000 UTC m=+30.039300302" watchObservedRunningTime="2025-05-13 23:24:10.038023432 +0000 UTC m=+38.268783823" May 13 23:24:10.604802 kubelet[1782]: E0513 23:24:10.604758 1782 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 13 23:24:10.930032 kubelet[1782]: I0513 23:24:10.929435 1782 topology_manager.go:215] "Topology Admit Handler" podUID="41e758c2-5d84-471d-8a04-44316f6d9a8b" podNamespace="default" podName="test-pod-1" May 13 23:24:10.934205 systemd[1]: Created slice kubepods-besteffort-pod41e758c2_5d84_471d_8a04_44316f6d9a8b.slice - libcontainer container kubepods-besteffort-pod41e758c2_5d84_471d_8a04_44316f6d9a8b.slice. May 13 23:24:11.114803 kubelet[1782]: I0513 23:24:11.114758 1782 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-df5bad11-afd7-4905-9e50-39b106f1fe33\" (UniqueName: \"kubernetes.io/nfs/41e758c2-5d84-471d-8a04-44316f6d9a8b-pvc-df5bad11-afd7-4905-9e50-39b106f1fe33\") pod \"test-pod-1\" (UID: \"41e758c2-5d84-471d-8a04-44316f6d9a8b\") " pod="default/test-pod-1" May 13 23:24:11.115007 kubelet[1782]: I0513 23:24:11.114989 1782 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m47w2\" (UniqueName: \"kubernetes.io/projected/41e758c2-5d84-471d-8a04-44316f6d9a8b-kube-api-access-m47w2\") pod \"test-pod-1\" (UID: \"41e758c2-5d84-471d-8a04-44316f6d9a8b\") " pod="default/test-pod-1" May 13 23:24:11.249331 kernel: FS-Cache: Loaded May 13 23:24:11.281556 kernel: RPC: Registered named UNIX socket transport module. May 13 23:24:11.281664 kernel: RPC: Registered udp transport module. May 13 23:24:11.281682 kernel: RPC: Registered tcp transport module. May 13 23:24:11.281697 kernel: RPC: Registered tcp-with-tls transport module. May 13 23:24:11.281712 kernel: RPC: Registered tcp NFSv4.1 backchannel transport module. 
May 13 23:24:11.444508 kernel: NFS: Registering the id_resolver key type May 13 23:24:11.444607 kernel: Key type id_resolver registered May 13 23:24:11.444647 kernel: Key type id_legacy registered May 13 23:24:11.473545 nfsidmap[3493]: nss_getpwnam: name 'root@nfs-server-provisioner.default.svc.cluster.local' does not map into domain 'localdomain' May 13 23:24:11.475213 nfsidmap[3494]: nss_name_to_gid: name 'root@nfs-server-provisioner.default.svc.cluster.local' does not map into domain 'localdomain' May 13 23:24:11.536471 containerd[1467]: time="2025-05-13T23:24:11.536357341Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:test-pod-1,Uid:41e758c2-5d84-471d-8a04-44316f6d9a8b,Namespace:default,Attempt:0,}" May 13 23:24:11.605348 kubelet[1782]: E0513 23:24:11.605295 1782 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 13 23:24:11.698485 systemd-networkd[1400]: cali5ec59c6bf6e: Link UP May 13 23:24:11.699199 systemd-networkd[1400]: cali5ec59c6bf6e: Gained carrier May 13 23:24:11.712126 containerd[1467]: 2025-05-13 23:24:11.581 [INFO][3495] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {10.0.0.24-k8s-test--pod--1-eth0 default 41e758c2-5d84-471d-8a04-44316f6d9a8b 1275 0 2025-05-13 23:23:57 +0000 UTC map[projectcalico.org/namespace:default projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:default] map[] [] [] []} {k8s 10.0.0.24 test-pod-1 eth0 default [] [] [kns.default ksa.default.default] cali5ec59c6bf6e [] []}} ContainerID="4cc2a52a1eb97799cdf74f26b4fbf19a51f8b73d7f73e6e8871ba6b679ba2b3d" Namespace="default" Pod="test-pod-1" WorkloadEndpoint="10.0.0.24-k8s-test--pod--1-" May 13 23:24:11.712126 containerd[1467]: 2025-05-13 23:24:11.582 [INFO][3495] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="4cc2a52a1eb97799cdf74f26b4fbf19a51f8b73d7f73e6e8871ba6b679ba2b3d" Namespace="default" Pod="test-pod-1" WorkloadEndpoint="10.0.0.24-k8s-test--pod--1-eth0" May 13 23:24:11.712126 containerd[1467]: 2025-05-13 23:24:11.627 [INFO][3510] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="4cc2a52a1eb97799cdf74f26b4fbf19a51f8b73d7f73e6e8871ba6b679ba2b3d" HandleID="k8s-pod-network.4cc2a52a1eb97799cdf74f26b4fbf19a51f8b73d7f73e6e8871ba6b679ba2b3d" Workload="10.0.0.24-k8s-test--pod--1-eth0" May 13 23:24:11.712126 containerd[1467]: 2025-05-13 23:24:11.642 [INFO][3510] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="4cc2a52a1eb97799cdf74f26b4fbf19a51f8b73d7f73e6e8871ba6b679ba2b3d" HandleID="k8s-pod-network.4cc2a52a1eb97799cdf74f26b4fbf19a51f8b73d7f73e6e8871ba6b679ba2b3d" Workload="10.0.0.24-k8s-test--pod--1-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x4000132680), Attrs:map[string]string{"namespace":"default", "node":"10.0.0.24", "pod":"test-pod-1", "timestamp":"2025-05-13 23:24:11.627850279 +0000 UTC"}, Hostname:"10.0.0.24", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} May 13 23:24:11.712126 containerd[1467]: 2025-05-13 23:24:11.642 [INFO][3510] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 13 23:24:11.712126 containerd[1467]: 2025-05-13 23:24:11.642 [INFO][3510] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
May 13 23:24:11.712126 containerd[1467]: 2025-05-13 23:24:11.642 [INFO][3510] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host '10.0.0.24' May 13 23:24:11.712126 containerd[1467]: 2025-05-13 23:24:11.644 [INFO][3510] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.4cc2a52a1eb97799cdf74f26b4fbf19a51f8b73d7f73e6e8871ba6b679ba2b3d" host="10.0.0.24" May 13 23:24:11.712126 containerd[1467]: 2025-05-13 23:24:11.677 [INFO][3510] ipam/ipam.go 372: Looking up existing affinities for host host="10.0.0.24" May 13 23:24:11.712126 containerd[1467]: 2025-05-13 23:24:11.681 [INFO][3510] ipam/ipam.go 489: Trying affinity for 192.168.92.128/26 host="10.0.0.24" May 13 23:24:11.712126 containerd[1467]: 2025-05-13 23:24:11.683 [INFO][3510] ipam/ipam.go 155: Attempting to load block cidr=192.168.92.128/26 host="10.0.0.24" May 13 23:24:11.712126 containerd[1467]: 2025-05-13 23:24:11.685 [INFO][3510] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.92.128/26 host="10.0.0.24" May 13 23:24:11.712126 containerd[1467]: 2025-05-13 23:24:11.685 [INFO][3510] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.92.128/26 handle="k8s-pod-network.4cc2a52a1eb97799cdf74f26b4fbf19a51f8b73d7f73e6e8871ba6b679ba2b3d" host="10.0.0.24" May 13 23:24:11.712126 containerd[1467]: 2025-05-13 23:24:11.686 [INFO][3510] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.4cc2a52a1eb97799cdf74f26b4fbf19a51f8b73d7f73e6e8871ba6b679ba2b3d May 13 23:24:11.712126 containerd[1467]: 2025-05-13 23:24:11.690 [INFO][3510] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.92.128/26 handle="k8s-pod-network.4cc2a52a1eb97799cdf74f26b4fbf19a51f8b73d7f73e6e8871ba6b679ba2b3d" host="10.0.0.24" May 13 23:24:11.712126 containerd[1467]: 2025-05-13 23:24:11.695 [INFO][3510] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.92.133/26] block=192.168.92.128/26 handle="k8s-pod-network.4cc2a52a1eb97799cdf74f26b4fbf19a51f8b73d7f73e6e8871ba6b679ba2b3d" host="10.0.0.24" May 13 23:24:11.712126 containerd[1467]: 2025-05-13 23:24:11.695 [INFO][3510] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.92.133/26] handle="k8s-pod-network.4cc2a52a1eb97799cdf74f26b4fbf19a51f8b73d7f73e6e8871ba6b679ba2b3d" host="10.0.0.24" May 13 23:24:11.712126 containerd[1467]: 2025-05-13 23:24:11.695 [INFO][3510] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
May 13 23:24:11.712126 containerd[1467]: 2025-05-13 23:24:11.695 [INFO][3510] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.92.133/26] IPv6=[] ContainerID="4cc2a52a1eb97799cdf74f26b4fbf19a51f8b73d7f73e6e8871ba6b679ba2b3d" HandleID="k8s-pod-network.4cc2a52a1eb97799cdf74f26b4fbf19a51f8b73d7f73e6e8871ba6b679ba2b3d" Workload="10.0.0.24-k8s-test--pod--1-eth0" May 13 23:24:11.712126 containerd[1467]: 2025-05-13 23:24:11.696 [INFO][3495] cni-plugin/k8s.go 386: Populated endpoint ContainerID="4cc2a52a1eb97799cdf74f26b4fbf19a51f8b73d7f73e6e8871ba6b679ba2b3d" Namespace="default" Pod="test-pod-1" WorkloadEndpoint="10.0.0.24-k8s-test--pod--1-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"10.0.0.24-k8s-test--pod--1-eth0", GenerateName:"", Namespace:"default", SelfLink:"", UID:"41e758c2-5d84-471d-8a04-44316f6d9a8b", ResourceVersion:"1275", Generation:0, CreationTimestamp:time.Date(2025, time.May, 13, 23, 23, 57, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"projectcalico.org/namespace":"default", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"default"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"10.0.0.24", ContainerID:"", Pod:"test-pod-1", Endpoint:"eth0", ServiceAccountName:"default", IPNetworks:[]string{"192.168.92.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.default", "ksa.default.default"}, InterfaceName:"cali5ec59c6bf6e", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} May 13 23:24:11.712900 containerd[1467]: 2025-05-13 23:24:11.696 [INFO][3495] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.92.133/32] ContainerID="4cc2a52a1eb97799cdf74f26b4fbf19a51f8b73d7f73e6e8871ba6b679ba2b3d" Namespace="default" Pod="test-pod-1" WorkloadEndpoint="10.0.0.24-k8s-test--pod--1-eth0" May 13 23:24:11.712900 containerd[1467]: 2025-05-13 23:24:11.696 [INFO][3495] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali5ec59c6bf6e ContainerID="4cc2a52a1eb97799cdf74f26b4fbf19a51f8b73d7f73e6e8871ba6b679ba2b3d" Namespace="default" Pod="test-pod-1" WorkloadEndpoint="10.0.0.24-k8s-test--pod--1-eth0" May 13 23:24:11.712900 containerd[1467]: 2025-05-13 23:24:11.698 [INFO][3495] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="4cc2a52a1eb97799cdf74f26b4fbf19a51f8b73d7f73e6e8871ba6b679ba2b3d" Namespace="default" Pod="test-pod-1" WorkloadEndpoint="10.0.0.24-k8s-test--pod--1-eth0" May 13 23:24:11.712900 containerd[1467]: 2025-05-13 23:24:11.699 [INFO][3495] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="4cc2a52a1eb97799cdf74f26b4fbf19a51f8b73d7f73e6e8871ba6b679ba2b3d" Namespace="default" Pod="test-pod-1" WorkloadEndpoint="10.0.0.24-k8s-test--pod--1-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"10.0.0.24-k8s-test--pod--1-eth0", GenerateName:"", Namespace:"default", SelfLink:"", UID:"41e758c2-5d84-471d-8a04-44316f6d9a8b", ResourceVersion:"1275", Generation:0, CreationTimestamp:time.Date(2025, time.May, 13, 23, 23, 57, 0, time.Local), DeletionTimestamp:, 
DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"projectcalico.org/namespace":"default", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"default"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"10.0.0.24", ContainerID:"4cc2a52a1eb97799cdf74f26b4fbf19a51f8b73d7f73e6e8871ba6b679ba2b3d", Pod:"test-pod-1", Endpoint:"eth0", ServiceAccountName:"default", IPNetworks:[]string{"192.168.92.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.default", "ksa.default.default"}, InterfaceName:"cali5ec59c6bf6e", MAC:"ee:72:10:69:a2:a0", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} May 13 23:24:11.712900 containerd[1467]: 2025-05-13 23:24:11.709 [INFO][3495] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="4cc2a52a1eb97799cdf74f26b4fbf19a51f8b73d7f73e6e8871ba6b679ba2b3d" Namespace="default" Pod="test-pod-1" WorkloadEndpoint="10.0.0.24-k8s-test--pod--1-eth0" May 13 23:24:11.735323 containerd[1467]: time="2025-05-13T23:24:11.735200291Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 13 23:24:11.735323 containerd[1467]: time="2025-05-13T23:24:11.735260220Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 13 23:24:11.735706 containerd[1467]: time="2025-05-13T23:24:11.735303587Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 13 23:24:11.736115 containerd[1467]: time="2025-05-13T23:24:11.736072623Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 13 23:24:11.758293 systemd[1]: Started cri-containerd-4cc2a52a1eb97799cdf74f26b4fbf19a51f8b73d7f73e6e8871ba6b679ba2b3d.scope - libcontainer container 4cc2a52a1eb97799cdf74f26b4fbf19a51f8b73d7f73e6e8871ba6b679ba2b3d. 
May 13 23:24:11.768904 systemd-resolved[1326]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address
May 13 23:24:11.784882 containerd[1467]: time="2025-05-13T23:24:11.784839548Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:test-pod-1,Uid:41e758c2-5d84-471d-8a04-44316f6d9a8b,Namespace:default,Attempt:0,} returns sandbox id \"4cc2a52a1eb97799cdf74f26b4fbf19a51f8b73d7f73e6e8871ba6b679ba2b3d\""
May 13 23:24:11.786380 containerd[1467]: time="2025-05-13T23:24:11.786347256Z" level=info msg="PullImage \"ghcr.io/flatcar/nginx:latest\""
May 13 23:24:12.055909 containerd[1467]: time="2025-05-13T23:24:12.055842830Z" level=info msg="ImageUpdate event name:\"ghcr.io/flatcar/nginx:latest\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 13 23:24:12.056394 containerd[1467]: time="2025-05-13T23:24:12.056336901Z" level=info msg="stop pulling image ghcr.io/flatcar/nginx:latest: active requests=0, bytes read=61"
May 13 23:24:12.059601 containerd[1467]: time="2025-05-13T23:24:12.059553804Z" level=info msg="Pulled image \"ghcr.io/flatcar/nginx:latest\" with image id \"sha256:e8b1cb61bd96acc3ff3c695318c9cc691213d532eee3731d038af92816fcb5f4\", repo tag \"ghcr.io/flatcar/nginx:latest\", repo digest \"ghcr.io/flatcar/nginx@sha256:beabce8f1782671ba500ddff99dd260fbf9c5ec85fb9c3162e35a3c40bafd023\", size \"69948737\" in 273.169062ms"
May 13 23:24:12.059601 containerd[1467]: time="2025-05-13T23:24:12.059587529Z" level=info msg="PullImage \"ghcr.io/flatcar/nginx:latest\" returns image reference \"sha256:e8b1cb61bd96acc3ff3c695318c9cc691213d532eee3731d038af92816fcb5f4\""
May 13 23:24:12.061578 containerd[1467]: time="2025-05-13T23:24:12.061539810Z" level=info msg="CreateContainer within sandbox \"4cc2a52a1eb97799cdf74f26b4fbf19a51f8b73d7f73e6e8871ba6b679ba2b3d\" for container &ContainerMetadata{Name:test,Attempt:0,}"
May 13 23:24:12.077621 containerd[1467]: time="2025-05-13T23:24:12.077574438Z" level=info msg="CreateContainer within sandbox \"4cc2a52a1eb97799cdf74f26b4fbf19a51f8b73d7f73e6e8871ba6b679ba2b3d\" for &ContainerMetadata{Name:test,Attempt:0,} returns container id \"7308959a38883bf726d90ac71cccdfb111516d4de8ddaf969d1c2a0af30bff3b\""
May 13 23:24:12.082741 containerd[1467]: time="2025-05-13T23:24:12.082679172Z" level=info msg="StartContainer for \"7308959a38883bf726d90ac71cccdfb111516d4de8ddaf969d1c2a0af30bff3b\""
May 13 23:24:12.110365 systemd[1]: Started cri-containerd-7308959a38883bf726d90ac71cccdfb111516d4de8ddaf969d1c2a0af30bff3b.scope - libcontainer container 7308959a38883bf726d90ac71cccdfb111516d4de8ddaf969d1c2a0af30bff3b.
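The entries above trace the usual CRI sequence for the pod: RunPodSandbox, PullImage, CreateContainer, StartContainer. The pull reports active requests=0 and bytes read=61, which suggests the image content was already present locally and the 273 ms "pull" amounted to little more than a registry metadata check. Only a few facts about the pod are visible in the log (name test-pod-1, namespace default, UID 41e758c2-5d84-471d-8a04-44316f6d9a8b, one container named test running ghcr.io/flatcar/nginx:latest under the default service account), so the following is a speculative reconstruction of roughly what the Pod object could have looked like, written with the upstream k8s.io/api types; everything not shown in the log (pull policy, ports, resources, how the pod ended up on node 10.0.0.24) is deliberately left unset:

    // pod_shape.go -- a speculative reconstruction, not the manifest that was
    // actually applied; only fields visible in the log above are filled in.
    package main

    import (
        "fmt"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "sigs.k8s.io/yaml"
    )

    func main() {
        pod := corev1.Pod{
            TypeMeta: metav1.TypeMeta{APIVersion: "v1", Kind: "Pod"},
            ObjectMeta: metav1.ObjectMeta{
                Name:      "test-pod-1", // PodSandboxMetadata{Name:test-pod-1,...}
                Namespace: "default",
            },
            Spec: corev1.PodSpec{
                ServiceAccountName: "default", // WorkloadEndpoint ServiceAccountName
                Containers: []corev1.Container{
                    // ContainerMetadata{Name:test,...} plus the PullImage target.
                    {Name: "test", Image: "ghcr.io/flatcar/nginx:latest"},
                },
                // The WorkloadEndpoint shows the pod landed on node 10.0.0.24;
                // whether via nodeName or the scheduler is not visible in the log.
            },
        }
        out, err := yaml.Marshal(pod)
        if err != nil {
            panic(err)
        }
        fmt.Print(string(out))
    }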
May 13 23:24:12.131883 containerd[1467]: time="2025-05-13T23:24:12.131832807Z" level=info msg="StartContainer for \"7308959a38883bf726d90ac71cccdfb111516d4de8ddaf969d1c2a0af30bff3b\" returns successfully"
May 13 23:24:12.582028 kubelet[1782]: E0513 23:24:12.581973 1782 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
May 13 23:24:12.606301 kubelet[1782]: E0513 23:24:12.606263 1782 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
May 13 23:24:12.827220 kubelet[1782]: I0513 23:24:12.827155 1782 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="default/test-pod-1" podStartSLOduration=15.552879175 podStartE2EDuration="15.827113514s" podCreationTimestamp="2025-05-13 23:23:57 +0000 UTC" firstStartedPulling="2025-05-13 23:24:11.785939874 +0000 UTC m=+40.016700305" lastFinishedPulling="2025-05-13 23:24:12.060174213 +0000 UTC m=+40.290934644" observedRunningTime="2025-05-13 23:24:12.826269913 +0000 UTC m=+41.057030384" watchObservedRunningTime="2025-05-13 23:24:12.827113514 +0000 UTC m=+41.057873945"
May 13 23:24:13.366564 systemd-networkd[1400]: cali5ec59c6bf6e: Gained IPv6LL
May 13 23:24:13.607437 kubelet[1782]: E0513 23:24:13.607367 1782 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
May 13 23:24:14.608468 kubelet[1782]: E0513 23:24:14.608419 1782 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
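The pod_startup_latency_tracker entry above reports two figures for default/test-pod-1, and both can be reproduced from the timestamps in the same line: podStartE2EDuration is watchObservedRunningTime minus podCreationTimestamp, and podStartSLOduration appears to be that same interval minus the image-pull window (lastFinishedPulling minus firstStartedPulling). A small Go sketch (standard library only; timestamps copied from the log with the monotonic m=+... suffix dropped) that recomputes them:

    // startup_latency.go -- recomputes the kubelet startup figures from the
    // timestamps in the log line above; standalone sketch, not kubelet code.
    package main

    import (
        "fmt"
        "time"
    )

    // Go accepts fractional seconds in the input even though the layout omits them.
    const layout = "2006-01-02 15:04:05 -0700 MST"

    func mustParse(s string) time.Time {
        t, err := time.Parse(layout, s)
        if err != nil {
            panic(err)
        }
        return t
    }

    func main() {
        created := mustParse("2025-05-13 23:23:57 +0000 UTC")             // podCreationTimestamp
        firstPull := mustParse("2025-05-13 23:24:11.785939874 +0000 UTC") // firstStartedPulling
        lastPull := mustParse("2025-05-13 23:24:12.060174213 +0000 UTC")  // lastFinishedPulling
        running := mustParse("2025-05-13 23:24:12.827113514 +0000 UTC")   // watchObservedRunningTime

        e2e := running.Sub(created)     // podStartE2EDuration
        pull := lastPull.Sub(firstPull) // time spent pulling the image
        slo := e2e - pull               // podStartSLOduration (excludes the pull)

        fmt.Println("E2E: ", e2e)           // 15.827113514s
        fmt.Println("pull:", pull)          // 274.234339ms
        fmt.Println("SLO: ", slo.Seconds()) // 15.552879175
    }

The computed values match the logged podStartE2EDuration and podStartSLOduration exactly, which supports the reading that the SLO figure simply excludes image-pull time.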