May 13 00:09:14.916553 kernel: Booting Linux on physical CPU 0x0000000000 [0x413fd0c1] May 13 00:09:14.916574 kernel: Linux version 6.6.89-flatcar (build@pony-truck.infra.kinvolk.io) (aarch64-cros-linux-gnu-gcc (Gentoo Hardened 13.3.1_p20240614 p17) 13.3.1 20240614, GNU ld (Gentoo 2.42 p3) 2.42.0) #1 SMP PREEMPT Mon May 12 22:51:32 -00 2025 May 13 00:09:14.916583 kernel: KASLR enabled May 13 00:09:14.916589 kernel: efi: EFI v2.7 by EDK II May 13 00:09:14.916595 kernel: efi: SMBIOS 3.0=0xdced0000 MEMATTR=0xdba86018 ACPI 2.0=0xd9710018 RNG=0xd971e498 MEMRESERVE=0xd9b43d18 May 13 00:09:14.916601 kernel: random: crng init done May 13 00:09:14.916608 kernel: ACPI: Early table checksum verification disabled May 13 00:09:14.916614 kernel: ACPI: RSDP 0x00000000D9710018 000024 (v02 BOCHS ) May 13 00:09:14.916620 kernel: ACPI: XSDT 0x00000000D971FE98 000064 (v01 BOCHS BXPC 00000001 01000013) May 13 00:09:14.916627 kernel: ACPI: FACP 0x00000000D971FA98 000114 (v06 BOCHS BXPC 00000001 BXPC 00000001) May 13 00:09:14.916634 kernel: ACPI: DSDT 0x00000000D9717518 0014A2 (v02 BOCHS BXPC 00000001 BXPC 00000001) May 13 00:09:14.916640 kernel: ACPI: APIC 0x00000000D971FC18 0001A8 (v04 BOCHS BXPC 00000001 BXPC 00000001) May 13 00:09:14.916646 kernel: ACPI: PPTT 0x00000000D971D898 00009C (v02 BOCHS BXPC 00000001 BXPC 00000001) May 13 00:09:14.916652 kernel: ACPI: GTDT 0x00000000D971E818 000060 (v02 BOCHS BXPC 00000001 BXPC 00000001) May 13 00:09:14.916659 kernel: ACPI: MCFG 0x00000000D971E918 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001) May 13 00:09:14.916667 kernel: ACPI: SPCR 0x00000000D971FF98 000050 (v02 BOCHS BXPC 00000001 BXPC 00000001) May 13 00:09:14.916674 kernel: ACPI: DBG2 0x00000000D971E418 000057 (v00 BOCHS BXPC 00000001 BXPC 00000001) May 13 00:09:14.916680 kernel: ACPI: IORT 0x00000000D971E718 000080 (v03 BOCHS BXPC 00000001 BXPC 00000001) May 13 00:09:14.916687 kernel: ACPI: SPCR: console: pl011,mmio,0x9000000,9600 May 13 00:09:14.916693 kernel: NUMA: Failed to initialise from firmware May 13 00:09:14.916699 kernel: NUMA: Faking a node at [mem 0x0000000040000000-0x00000000dcffffff] May 13 00:09:14.916706 kernel: NUMA: NODE_DATA [mem 0xdc957800-0xdc95cfff] May 13 00:09:14.916712 kernel: Zone ranges: May 13 00:09:14.916718 kernel: DMA [mem 0x0000000040000000-0x00000000dcffffff] May 13 00:09:14.916725 kernel: DMA32 empty May 13 00:09:14.916732 kernel: Normal empty May 13 00:09:14.916738 kernel: Movable zone start for each node May 13 00:09:14.916744 kernel: Early memory node ranges May 13 00:09:14.916751 kernel: node 0: [mem 0x0000000040000000-0x00000000d976ffff] May 13 00:09:14.916757 kernel: node 0: [mem 0x00000000d9770000-0x00000000d9b3ffff] May 13 00:09:14.916763 kernel: node 0: [mem 0x00000000d9b40000-0x00000000dce1ffff] May 13 00:09:14.916770 kernel: node 0: [mem 0x00000000dce20000-0x00000000dceaffff] May 13 00:09:14.916776 kernel: node 0: [mem 0x00000000dceb0000-0x00000000dcebffff] May 13 00:09:14.916782 kernel: node 0: [mem 0x00000000dcec0000-0x00000000dcfdffff] May 13 00:09:14.916789 kernel: node 0: [mem 0x00000000dcfe0000-0x00000000dcffffff] May 13 00:09:14.916795 kernel: Initmem setup node 0 [mem 0x0000000040000000-0x00000000dcffffff] May 13 00:09:14.916802 kernel: On node 0, zone DMA: 12288 pages in unavailable ranges May 13 00:09:14.916809 kernel: psci: probing for conduit method from ACPI. May 13 00:09:14.916815 kernel: psci: PSCIv1.1 detected in firmware. 
May 13 00:09:14.916822 kernel: psci: Using standard PSCI v0.2 function IDs May 13 00:09:14.916831 kernel: psci: Trusted OS migration not required May 13 00:09:14.916838 kernel: psci: SMC Calling Convention v1.1 May 13 00:09:14.916845 kernel: smccc: KVM: hypervisor services detected (0x00000000 0x00000000 0x00000000 0x00000003) May 13 00:09:14.916853 kernel: percpu: Embedded 31 pages/cpu s86632 r8192 d32152 u126976 May 13 00:09:14.916860 kernel: pcpu-alloc: s86632 r8192 d32152 u126976 alloc=31*4096 May 13 00:09:14.916867 kernel: pcpu-alloc: [0] 0 [0] 1 [0] 2 [0] 3 May 13 00:09:14.916873 kernel: Detected PIPT I-cache on CPU0 May 13 00:09:14.916880 kernel: CPU features: detected: GIC system register CPU interface May 13 00:09:14.916887 kernel: CPU features: detected: Hardware dirty bit management May 13 00:09:14.916893 kernel: CPU features: detected: Spectre-v4 May 13 00:09:14.916900 kernel: CPU features: detected: Spectre-BHB May 13 00:09:14.916906 kernel: CPU features: kernel page table isolation forced ON by KASLR May 13 00:09:14.916913 kernel: CPU features: detected: Kernel page table isolation (KPTI) May 13 00:09:14.916921 kernel: CPU features: detected: ARM erratum 1418040 May 13 00:09:14.916928 kernel: CPU features: detected: SSBS not fully self-synchronizing May 13 00:09:14.916935 kernel: alternatives: applying boot alternatives May 13 00:09:14.916943 kernel: Kernel command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected acpi=force verity.usrhash=c683f9f6a9915f3c14a7bce5c93750f29fcd5cf6eb0774e11e882c5681cc19c0 May 13 00:09:14.916950 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space. May 13 00:09:14.916957 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear) May 13 00:09:14.916963 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear) May 13 00:09:14.916970 kernel: Fallback order for Node 0: 0 May 13 00:09:14.916990 kernel: Built 1 zonelists, mobility grouping on. Total pages: 633024 May 13 00:09:14.916997 kernel: Policy zone: DMA May 13 00:09:14.917004 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off May 13 00:09:14.917013 kernel: software IO TLB: area num 4. May 13 00:09:14.917020 kernel: software IO TLB: mapped [mem 0x00000000d2e00000-0x00000000d6e00000] (64MB) May 13 00:09:14.917027 kernel: Memory: 2386400K/2572288K available (10304K kernel code, 2186K rwdata, 8104K rodata, 39424K init, 897K bss, 185888K reserved, 0K cma-reserved) May 13 00:09:14.917034 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1 May 13 00:09:14.917040 kernel: rcu: Preemptible hierarchical RCU implementation. May 13 00:09:14.917048 kernel: rcu: RCU event tracing is enabled. May 13 00:09:14.917055 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=4. May 13 00:09:14.917062 kernel: Trampoline variant of Tasks RCU enabled. May 13 00:09:14.917068 kernel: Tracing variant of Tasks RCU enabled. May 13 00:09:14.917075 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies. 
May 13 00:09:14.917082 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4 May 13 00:09:14.917089 kernel: NR_IRQS: 64, nr_irqs: 64, preallocated irqs: 0 May 13 00:09:14.917097 kernel: GICv3: 256 SPIs implemented May 13 00:09:14.917104 kernel: GICv3: 0 Extended SPIs implemented May 13 00:09:14.917110 kernel: Root IRQ handler: gic_handle_irq May 13 00:09:14.917117 kernel: GICv3: GICv3 features: 16 PPIs, DirectLPI May 13 00:09:14.917124 kernel: GICv3: CPU0: found redistributor 0 region 0:0x00000000080a0000 May 13 00:09:14.917131 kernel: ITS [mem 0x08080000-0x0809ffff] May 13 00:09:14.917137 kernel: ITS@0x0000000008080000: allocated 8192 Devices @400c0000 (indirect, esz 8, psz 64K, shr 1) May 13 00:09:14.917144 kernel: ITS@0x0000000008080000: allocated 8192 Interrupt Collections @400d0000 (flat, esz 8, psz 64K, shr 1) May 13 00:09:14.917151 kernel: GICv3: using LPI property table @0x00000000400f0000 May 13 00:09:14.917158 kernel: GICv3: CPU0: using allocated LPI pending table @0x0000000040100000 May 13 00:09:14.917165 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention. May 13 00:09:14.917173 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 May 13 00:09:14.917188 kernel: arch_timer: cp15 timer(s) running at 25.00MHz (virt). May 13 00:09:14.917196 kernel: clocksource: arch_sys_counter: mask: 0xffffffffffffff max_cycles: 0x5c40939b5, max_idle_ns: 440795202646 ns May 13 00:09:14.917203 kernel: sched_clock: 56 bits at 25MHz, resolution 40ns, wraps every 4398046511100ns May 13 00:09:14.917210 kernel: arm-pv: using stolen time PV May 13 00:09:14.917217 kernel: Console: colour dummy device 80x25 May 13 00:09:14.917224 kernel: ACPI: Core revision 20230628 May 13 00:09:14.917231 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 50.00 BogoMIPS (lpj=25000) May 13 00:09:14.917238 kernel: pid_max: default: 32768 minimum: 301 May 13 00:09:14.917245 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity May 13 00:09:14.917259 kernel: landlock: Up and running. May 13 00:09:14.917266 kernel: SELinux: Initializing. May 13 00:09:14.917273 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) May 13 00:09:14.917280 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) May 13 00:09:14.917287 kernel: ACPI PPTT: PPTT table found, but unable to locate core 3 (3) May 13 00:09:14.917295 kernel: RCU Tasks: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4. May 13 00:09:14.917302 kernel: RCU Tasks Trace: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4. May 13 00:09:14.917308 kernel: rcu: Hierarchical SRCU implementation. May 13 00:09:14.917316 kernel: rcu: Max phase no-delay instances is 400. May 13 00:09:14.917325 kernel: Platform MSI: ITS@0x8080000 domain created May 13 00:09:14.917332 kernel: PCI/MSI: ITS@0x8080000 domain created May 13 00:09:14.917339 kernel: Remapping and enabling EFI services. May 13 00:09:14.917346 kernel: smp: Bringing up secondary CPUs ... 
May 13 00:09:14.917352 kernel: Detected PIPT I-cache on CPU1 May 13 00:09:14.917359 kernel: GICv3: CPU1: found redistributor 1 region 0:0x00000000080c0000 May 13 00:09:14.917366 kernel: GICv3: CPU1: using allocated LPI pending table @0x0000000040110000 May 13 00:09:14.917373 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 May 13 00:09:14.917380 kernel: CPU1: Booted secondary processor 0x0000000001 [0x413fd0c1] May 13 00:09:14.917388 kernel: Detected PIPT I-cache on CPU2 May 13 00:09:14.917395 kernel: GICv3: CPU2: found redistributor 2 region 0:0x00000000080e0000 May 13 00:09:14.917402 kernel: GICv3: CPU2: using allocated LPI pending table @0x0000000040120000 May 13 00:09:14.917414 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 May 13 00:09:14.917423 kernel: CPU2: Booted secondary processor 0x0000000002 [0x413fd0c1] May 13 00:09:14.917430 kernel: Detected PIPT I-cache on CPU3 May 13 00:09:14.917437 kernel: GICv3: CPU3: found redistributor 3 region 0:0x0000000008100000 May 13 00:09:14.917445 kernel: GICv3: CPU3: using allocated LPI pending table @0x0000000040130000 May 13 00:09:14.917452 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 May 13 00:09:14.917459 kernel: CPU3: Booted secondary processor 0x0000000003 [0x413fd0c1] May 13 00:09:14.917466 kernel: smp: Brought up 1 node, 4 CPUs May 13 00:09:14.917475 kernel: SMP: Total of 4 processors activated. May 13 00:09:14.917482 kernel: CPU features: detected: 32-bit EL0 Support May 13 00:09:14.917490 kernel: CPU features: detected: Data cache clean to the PoU not required for I/D coherence May 13 00:09:14.917497 kernel: CPU features: detected: Common not Private translations May 13 00:09:14.917504 kernel: CPU features: detected: CRC32 instructions May 13 00:09:14.917512 kernel: CPU features: detected: Enhanced Virtualization Traps May 13 00:09:14.917521 kernel: CPU features: detected: RCpc load-acquire (LDAPR) May 13 00:09:14.917528 kernel: CPU features: detected: LSE atomic instructions May 13 00:09:14.917535 kernel: CPU features: detected: Privileged Access Never May 13 00:09:14.917542 kernel: CPU features: detected: RAS Extension Support May 13 00:09:14.917550 kernel: CPU features: detected: Speculative Store Bypassing Safe (SSBS) May 13 00:09:14.917557 kernel: CPU: All CPU(s) started at EL1 May 13 00:09:14.917564 kernel: alternatives: applying system-wide alternatives May 13 00:09:14.917571 kernel: devtmpfs: initialized May 13 00:09:14.917579 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns May 13 00:09:14.917586 kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear) May 13 00:09:14.917595 kernel: pinctrl core: initialized pinctrl subsystem May 13 00:09:14.917602 kernel: SMBIOS 3.0.0 present. 
May 13 00:09:14.917609 kernel: DMI: QEMU KVM Virtual Machine, BIOS edk2-20230524-3.fc38 05/24/2023 May 13 00:09:14.917617 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family May 13 00:09:14.917624 kernel: DMA: preallocated 512 KiB GFP_KERNEL pool for atomic allocations May 13 00:09:14.917631 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations May 13 00:09:14.917639 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations May 13 00:09:14.917646 kernel: audit: initializing netlink subsys (disabled) May 13 00:09:14.917655 kernel: audit: type=2000 audit(0.024:1): state=initialized audit_enabled=0 res=1 May 13 00:09:14.917662 kernel: thermal_sys: Registered thermal governor 'step_wise' May 13 00:09:14.917669 kernel: cpuidle: using governor menu May 13 00:09:14.917676 kernel: hw-breakpoint: found 6 breakpoint and 4 watchpoint registers. May 13 00:09:14.917683 kernel: ASID allocator initialised with 32768 entries May 13 00:09:14.917691 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5 May 13 00:09:14.917698 kernel: Serial: AMBA PL011 UART driver May 13 00:09:14.917705 kernel: Modules: 2G module region forced by RANDOMIZE_MODULE_REGION_FULL May 13 00:09:14.917712 kernel: Modules: 0 pages in range for non-PLT usage May 13 00:09:14.917721 kernel: Modules: 509008 pages in range for PLT usage May 13 00:09:14.917728 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages May 13 00:09:14.917735 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 1.00 GiB page May 13 00:09:14.917743 kernel: HugeTLB: registered 32.0 MiB page size, pre-allocated 0 pages May 13 00:09:14.917750 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 32.0 MiB page May 13 00:09:14.917757 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages May 13 00:09:14.917764 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 2.00 MiB page May 13 00:09:14.917771 kernel: HugeTLB: registered 64.0 KiB page size, pre-allocated 0 pages May 13 00:09:14.917779 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 64.0 KiB page May 13 00:09:14.917786 kernel: ACPI: Added _OSI(Module Device) May 13 00:09:14.917794 kernel: ACPI: Added _OSI(Processor Device) May 13 00:09:14.917801 kernel: ACPI: Added _OSI(3.0 _SCP Extensions) May 13 00:09:14.917809 kernel: ACPI: Added _OSI(Processor Aggregator Device) May 13 00:09:14.917816 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded May 13 00:09:14.917823 kernel: ACPI: Interpreter enabled May 13 00:09:14.917830 kernel: ACPI: Using GIC for interrupt routing May 13 00:09:14.917837 kernel: ACPI: MCFG table detected, 1 entries May 13 00:09:14.917844 kernel: ARMH0011:00: ttyAMA0 at MMIO 0x9000000 (irq = 12, base_baud = 0) is a SBSA May 13 00:09:14.917851 kernel: printk: console [ttyAMA0] enabled May 13 00:09:14.917860 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff]) May 13 00:09:14.917993 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3] May 13 00:09:14.918068 kernel: acpi PNP0A08:00: _OSC: platform does not support [LTR] May 13 00:09:14.918135 kernel: acpi PNP0A08:00: _OSC: OS now controls [PCIeHotplug PME AER PCIeCapability] May 13 00:09:14.918241 kernel: acpi PNP0A08:00: ECAM area [mem 0x4010000000-0x401fffffff] reserved by PNP0C02:00 May 13 00:09:14.918317 kernel: acpi PNP0A08:00: ECAM at [mem 0x4010000000-0x401fffffff] for [bus 00-ff] May 13 00:09:14.918328 kernel: ACPI: Remapped I/O 0x000000003eff0000 to [io 0x0000-0xffff window] May 13 
00:09:14.918339 kernel: PCI host bridge to bus 0000:00 May 13 00:09:14.918409 kernel: pci_bus 0000:00: root bus resource [mem 0x10000000-0x3efeffff window] May 13 00:09:14.918467 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0xffff window] May 13 00:09:14.918527 kernel: pci_bus 0000:00: root bus resource [mem 0x8000000000-0xffffffffff window] May 13 00:09:14.918601 kernel: pci_bus 0000:00: root bus resource [bus 00-ff] May 13 00:09:14.918688 kernel: pci 0000:00:00.0: [1b36:0008] type 00 class 0x060000 May 13 00:09:14.918767 kernel: pci 0000:00:01.0: [1af4:1005] type 00 class 0x00ff00 May 13 00:09:14.918838 kernel: pci 0000:00:01.0: reg 0x10: [io 0x0000-0x001f] May 13 00:09:14.918905 kernel: pci 0000:00:01.0: reg 0x14: [mem 0x10000000-0x10000fff] May 13 00:09:14.918970 kernel: pci 0000:00:01.0: reg 0x20: [mem 0x8000000000-0x8000003fff 64bit pref] May 13 00:09:14.919036 kernel: pci 0000:00:01.0: BAR 4: assigned [mem 0x8000000000-0x8000003fff 64bit pref] May 13 00:09:14.919102 kernel: pci 0000:00:01.0: BAR 1: assigned [mem 0x10000000-0x10000fff] May 13 00:09:14.919169 kernel: pci 0000:00:01.0: BAR 0: assigned [io 0x1000-0x101f] May 13 00:09:14.919246 kernel: pci_bus 0000:00: resource 4 [mem 0x10000000-0x3efeffff window] May 13 00:09:14.919318 kernel: pci_bus 0000:00: resource 5 [io 0x0000-0xffff window] May 13 00:09:14.919382 kernel: pci_bus 0000:00: resource 6 [mem 0x8000000000-0xffffffffff window] May 13 00:09:14.919392 kernel: ACPI: PCI: Interrupt link GSI0 configured for IRQ 35 May 13 00:09:14.919400 kernel: ACPI: PCI: Interrupt link GSI1 configured for IRQ 36 May 13 00:09:14.919407 kernel: ACPI: PCI: Interrupt link GSI2 configured for IRQ 37 May 13 00:09:14.919414 kernel: ACPI: PCI: Interrupt link GSI3 configured for IRQ 38 May 13 00:09:14.919421 kernel: iommu: Default domain type: Translated May 13 00:09:14.919431 kernel: iommu: DMA domain TLB invalidation policy: strict mode May 13 00:09:14.919439 kernel: efivars: Registered efivars operations May 13 00:09:14.919446 kernel: vgaarb: loaded May 13 00:09:14.919453 kernel: clocksource: Switched to clocksource arch_sys_counter May 13 00:09:14.919460 kernel: VFS: Disk quotas dquot_6.6.0 May 13 00:09:14.919467 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes) May 13 00:09:14.919475 kernel: pnp: PnP ACPI init May 13 00:09:14.919549 kernel: system 00:00: [mem 0x4010000000-0x401fffffff window] could not be reserved May 13 00:09:14.919560 kernel: pnp: PnP ACPI: found 1 devices May 13 00:09:14.919569 kernel: NET: Registered PF_INET protocol family May 13 00:09:14.919576 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear) May 13 00:09:14.919584 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear) May 13 00:09:14.919591 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear) May 13 00:09:14.919599 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear) May 13 00:09:14.919606 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear) May 13 00:09:14.919613 kernel: TCP: Hash tables configured (established 32768 bind 32768) May 13 00:09:14.919620 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear) May 13 00:09:14.919629 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear) May 13 00:09:14.919636 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family May 13 00:09:14.919644 kernel: PCI: CLS 0 bytes, default 64 May 13 00:09:14.919651 kernel: kvm [1]: HYP mode 
not available May 13 00:09:14.919658 kernel: Initialise system trusted keyrings May 13 00:09:14.919665 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0 May 13 00:09:14.919672 kernel: Key type asymmetric registered May 13 00:09:14.919679 kernel: Asymmetric key parser 'x509' registered May 13 00:09:14.919687 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 250) May 13 00:09:14.919694 kernel: io scheduler mq-deadline registered May 13 00:09:14.919703 kernel: io scheduler kyber registered May 13 00:09:14.919710 kernel: io scheduler bfq registered May 13 00:09:14.919717 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXSYBUS:00/PNP0C0C:00/input/input0 May 13 00:09:14.919725 kernel: ACPI: button: Power Button [PWRB] May 13 00:09:14.919732 kernel: ACPI: \_SB_.PCI0.GSI1: Enabled at IRQ 36 May 13 00:09:14.919799 kernel: virtio-pci 0000:00:01.0: enabling device (0005 -> 0007) May 13 00:09:14.919809 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled May 13 00:09:14.919817 kernel: thunder_xcv, ver 1.0 May 13 00:09:14.919824 kernel: thunder_bgx, ver 1.0 May 13 00:09:14.919833 kernel: nicpf, ver 1.0 May 13 00:09:14.919840 kernel: nicvf, ver 1.0 May 13 00:09:14.919928 kernel: rtc-efi rtc-efi.0: registered as rtc0 May 13 00:09:14.919992 kernel: rtc-efi rtc-efi.0: setting system clock to 2025-05-13T00:09:14 UTC (1747094954) May 13 00:09:14.920002 kernel: hid: raw HID events driver (C) Jiri Kosina May 13 00:09:14.920010 kernel: hw perfevents: enabled with armv8_pmuv3_0 PMU driver, 7 counters available May 13 00:09:14.920017 kernel: watchdog: Delayed init of the lockup detector failed: -19 May 13 00:09:14.920024 kernel: watchdog: Hard watchdog permanently disabled May 13 00:09:14.920034 kernel: NET: Registered PF_INET6 protocol family May 13 00:09:14.920041 kernel: Segment Routing with IPv6 May 13 00:09:14.920048 kernel: In-situ OAM (IOAM) with IPv6 May 13 00:09:14.920055 kernel: NET: Registered PF_PACKET protocol family May 13 00:09:14.920063 kernel: Key type dns_resolver registered May 13 00:09:14.920070 kernel: registered taskstats version 1 May 13 00:09:14.920077 kernel: Loading compiled-in X.509 certificates May 13 00:09:14.920084 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.89-flatcar: ce22d51a4ec909274ada9cb7da7d7cb78db539c6' May 13 00:09:14.920092 kernel: Key type .fscrypt registered May 13 00:09:14.920100 kernel: Key type fscrypt-provisioning registered May 13 00:09:14.920107 kernel: ima: No TPM chip found, activating TPM-bypass! 
May 13 00:09:14.920115 kernel: ima: Allocated hash algorithm: sha1 May 13 00:09:14.920122 kernel: ima: No architecture policies found May 13 00:09:14.920129 kernel: alg: No test for fips(ansi_cprng) (fips_ansi_cprng) May 13 00:09:14.920136 kernel: clk: Disabling unused clocks May 13 00:09:14.920144 kernel: Freeing unused kernel memory: 39424K May 13 00:09:14.920151 kernel: Run /init as init process May 13 00:09:14.920158 kernel: with arguments: May 13 00:09:14.920166 kernel: /init May 13 00:09:14.920173 kernel: with environment: May 13 00:09:14.920190 kernel: HOME=/ May 13 00:09:14.920199 kernel: TERM=linux May 13 00:09:14.920206 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a May 13 00:09:14.920216 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) May 13 00:09:14.920226 systemd[1]: Detected virtualization kvm. May 13 00:09:14.920240 systemd[1]: Detected architecture arm64. May 13 00:09:14.920248 systemd[1]: Running in initrd. May 13 00:09:14.920263 systemd[1]: No hostname configured, using default hostname. May 13 00:09:14.920271 systemd[1]: Hostname set to . May 13 00:09:14.920279 systemd[1]: Initializing machine ID from VM UUID. May 13 00:09:14.920286 systemd[1]: Queued start job for default target initrd.target. May 13 00:09:14.920295 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. May 13 00:09:14.920303 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. May 13 00:09:14.920313 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM... May 13 00:09:14.920321 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... May 13 00:09:14.920329 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT... May 13 00:09:14.920337 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A... May 13 00:09:14.920346 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132... May 13 00:09:14.920355 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr... May 13 00:09:14.920363 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). May 13 00:09:14.920372 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. May 13 00:09:14.920380 systemd[1]: Reached target paths.target - Path Units. May 13 00:09:14.920388 systemd[1]: Reached target slices.target - Slice Units. May 13 00:09:14.920396 systemd[1]: Reached target swap.target - Swaps. May 13 00:09:14.920404 systemd[1]: Reached target timers.target - Timer Units. May 13 00:09:14.920412 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket. May 13 00:09:14.920420 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket. May 13 00:09:14.920428 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). May 13 00:09:14.920436 systemd[1]: Listening on systemd-journald.socket - Journal Socket. May 13 00:09:14.920446 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. 
May 13 00:09:14.920454 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. May 13 00:09:14.920462 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. May 13 00:09:14.920469 systemd[1]: Reached target sockets.target - Socket Units. May 13 00:09:14.920477 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup... May 13 00:09:14.920485 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... May 13 00:09:14.920493 systemd[1]: Finished network-cleanup.service - Network Cleanup. May 13 00:09:14.920501 systemd[1]: Starting systemd-fsck-usr.service... May 13 00:09:14.920511 systemd[1]: Starting systemd-journald.service - Journal Service... May 13 00:09:14.920519 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... May 13 00:09:14.920526 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... May 13 00:09:14.920535 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup. May 13 00:09:14.920543 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. May 13 00:09:14.920550 systemd[1]: Finished systemd-fsck-usr.service. May 13 00:09:14.920560 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... May 13 00:09:14.920569 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. May 13 00:09:14.920595 systemd-journald[237]: Collecting audit messages is disabled. May 13 00:09:14.920616 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... May 13 00:09:14.920624 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. May 13 00:09:14.920633 systemd-journald[237]: Journal started May 13 00:09:14.920651 systemd-journald[237]: Runtime Journal (/run/log/journal/52ed7edda6114e8c8ed9509a7c90e900) is 5.9M, max 47.3M, 41.4M free. May 13 00:09:14.912041 systemd-modules-load[238]: Inserted module 'overlay' May 13 00:09:14.926228 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. May 13 00:09:14.927668 systemd-modules-load[238]: Inserted module 'br_netfilter' May 13 00:09:14.928563 kernel: Bridge firewalling registered May 13 00:09:14.932961 systemd[1]: Started systemd-journald.service - Journal Service. May 13 00:09:14.933409 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. May 13 00:09:14.937523 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... May 13 00:09:14.939086 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... May 13 00:09:14.942377 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... May 13 00:09:14.952554 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. May 13 00:09:14.954592 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. May 13 00:09:14.958542 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. May 13 00:09:14.959990 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. May 13 00:09:14.971374 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... May 13 00:09:14.973779 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... 
May 13 00:09:14.981609 dracut-cmdline[276]: dracut-dracut-053 May 13 00:09:14.984081 dracut-cmdline[276]: Using kernel command line parameters: rd.driver.pre=btrfs BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected acpi=force verity.usrhash=c683f9f6a9915f3c14a7bce5c93750f29fcd5cf6eb0774e11e882c5681cc19c0 May 13 00:09:15.004221 systemd-resolved[278]: Positive Trust Anchors: May 13 00:09:15.004239 systemd-resolved[278]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d May 13 00:09:15.004281 systemd-resolved[278]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test May 13 00:09:15.009368 systemd-resolved[278]: Defaulting to hostname 'linux'. May 13 00:09:15.010379 systemd[1]: Started systemd-resolved.service - Network Name Resolution. May 13 00:09:15.015076 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. May 13 00:09:15.058218 kernel: SCSI subsystem initialized May 13 00:09:15.063199 kernel: Loading iSCSI transport class v2.0-870. May 13 00:09:15.070208 kernel: iscsi: registered transport (tcp) May 13 00:09:15.083875 kernel: iscsi: registered transport (qla4xxx) May 13 00:09:15.083934 kernel: QLogic iSCSI HBA Driver May 13 00:09:15.128275 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. May 13 00:09:15.137326 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... May 13 00:09:15.155767 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. May 13 00:09:15.155825 kernel: device-mapper: uevent: version 1.0.3 May 13 00:09:15.156932 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com May 13 00:09:15.205218 kernel: raid6: neonx8 gen() 15780 MB/s May 13 00:09:15.222206 kernel: raid6: neonx4 gen() 15643 MB/s May 13 00:09:15.239204 kernel: raid6: neonx2 gen() 13227 MB/s May 13 00:09:15.256207 kernel: raid6: neonx1 gen() 10485 MB/s May 13 00:09:15.273206 kernel: raid6: int64x8 gen() 6953 MB/s May 13 00:09:15.290206 kernel: raid6: int64x4 gen() 7333 MB/s May 13 00:09:15.307205 kernel: raid6: int64x2 gen() 6125 MB/s May 13 00:09:15.324308 kernel: raid6: int64x1 gen() 5055 MB/s May 13 00:09:15.324326 kernel: raid6: using algorithm neonx8 gen() 15780 MB/s May 13 00:09:15.342330 kernel: raid6: .... xor() 11929 MB/s, rmw enabled May 13 00:09:15.342345 kernel: raid6: using neon recovery algorithm May 13 00:09:15.347204 kernel: xor: measuring software checksum speed May 13 00:09:15.348422 kernel: 8regs : 17526 MB/sec May 13 00:09:15.348437 kernel: 32regs : 19603 MB/sec May 13 00:09:15.349726 kernel: arm64_neon : 26998 MB/sec May 13 00:09:15.349739 kernel: xor: using function: arm64_neon (26998 MB/sec) May 13 00:09:15.401213 kernel: Btrfs loaded, zoned=no, fsverity=no May 13 00:09:15.412176 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. 
May 13 00:09:15.422352 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... May 13 00:09:15.434536 systemd-udevd[461]: Using default interface naming scheme 'v255'. May 13 00:09:15.437743 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. May 13 00:09:15.440996 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... May 13 00:09:15.456868 dracut-pre-trigger[469]: rd.md=0: removing MD RAID activation May 13 00:09:15.486223 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. May 13 00:09:15.497403 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... May 13 00:09:15.536754 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. May 13 00:09:15.550598 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... May 13 00:09:15.564259 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. May 13 00:09:15.565482 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. May 13 00:09:15.568967 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. May 13 00:09:15.570089 systemd[1]: Reached target remote-fs.target - Remote File Systems. May 13 00:09:15.576797 kernel: virtio_blk virtio1: 1/0/0 default/read/poll queues May 13 00:09:15.576971 kernel: virtio_blk virtio1: [vda] 19775488 512-byte logical blocks (10.1 GB/9.43 GiB) May 13 00:09:15.585516 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... May 13 00:09:15.588425 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. May 13 00:09:15.588532 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. May 13 00:09:15.598925 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. May 13 00:09:15.598946 kernel: GPT:9289727 != 19775487 May 13 00:09:15.598962 kernel: GPT:Alternate GPT header not at the end of the disk. May 13 00:09:15.598972 kernel: GPT:9289727 != 19775487 May 13 00:09:15.598982 kernel: GPT: Use GNU Parted to correct GPT errors. May 13 00:09:15.599537 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... May 13 00:09:15.603205 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 May 13 00:09:15.601219 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. May 13 00:09:15.601396 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. May 13 00:09:15.603890 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... May 13 00:09:15.613433 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... May 13 00:09:15.615278 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. May 13 00:09:15.625243 kernel: BTRFS: device fsid ffc5eb33-beca-4ca0-9735-b9a50e66f21e devid 1 transid 40 /dev/vda3 scanned by (udev-worker) (511) May 13 00:09:15.625292 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/vda6 scanned by (udev-worker) (517) May 13 00:09:15.626667 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. May 13 00:09:15.637963 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT. May 13 00:09:15.645325 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM. 
May 13 00:09:15.649361 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A. May 13 00:09:15.650601 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132. May 13 00:09:15.656200 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. May 13 00:09:15.668358 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... May 13 00:09:15.670262 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... May 13 00:09:15.675146 disk-uuid[551]: Primary Header is updated. May 13 00:09:15.675146 disk-uuid[551]: Secondary Entries is updated. May 13 00:09:15.675146 disk-uuid[551]: Secondary Header is updated. May 13 00:09:15.679231 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 May 13 00:09:15.692982 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 May 13 00:09:15.691797 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. May 13 00:09:16.694207 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 May 13 00:09:16.694333 disk-uuid[553]: The operation has completed successfully. May 13 00:09:16.717891 systemd[1]: disk-uuid.service: Deactivated successfully. May 13 00:09:16.717991 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. May 13 00:09:16.747403 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr... May 13 00:09:16.750603 sh[575]: Success May 13 00:09:16.765236 kernel: device-mapper: verity: sha256 using implementation "sha256-ce" May 13 00:09:16.795906 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr. May 13 00:09:16.811719 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr... May 13 00:09:16.815273 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr. May 13 00:09:16.823846 kernel: BTRFS info (device dm-0): first mount of filesystem ffc5eb33-beca-4ca0-9735-b9a50e66f21e May 13 00:09:16.823895 kernel: BTRFS info (device dm-0): using crc32c (crc32c-generic) checksum algorithm May 13 00:09:16.823906 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead May 13 00:09:16.824928 kernel: BTRFS info (device dm-0): disabling log replay at mount time May 13 00:09:16.825713 kernel: BTRFS info (device dm-0): using free space tree May 13 00:09:16.829670 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr. May 13 00:09:16.831092 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. May 13 00:09:16.845473 systemd[1]: Starting ignition-setup.service - Ignition (setup)... May 13 00:09:16.847170 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... May 13 00:09:16.855228 kernel: BTRFS info (device vda6): first mount of filesystem 0068254f-7e0d-4c83-ad3e-204802432981 May 13 00:09:16.855287 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm May 13 00:09:16.855305 kernel: BTRFS info (device vda6): using free space tree May 13 00:09:16.858239 kernel: BTRFS info (device vda6): auto enabling async discard May 13 00:09:16.867843 systemd[1]: mnt-oem.mount: Deactivated successfully. 
May 13 00:09:16.869813 kernel: BTRFS info (device vda6): last unmount of filesystem 0068254f-7e0d-4c83-ad3e-204802432981 May 13 00:09:16.875312 systemd[1]: Finished ignition-setup.service - Ignition (setup). May 13 00:09:16.882385 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... May 13 00:09:16.961525 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. May 13 00:09:16.975381 systemd[1]: Starting systemd-networkd.service - Network Configuration... May 13 00:09:16.986194 ignition[667]: Ignition 2.19.0 May 13 00:09:16.986203 ignition[667]: Stage: fetch-offline May 13 00:09:16.986241 ignition[667]: no configs at "/usr/lib/ignition/base.d" May 13 00:09:16.986257 ignition[667]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" May 13 00:09:16.986417 ignition[667]: parsed url from cmdline: "" May 13 00:09:16.986420 ignition[667]: no config URL provided May 13 00:09:16.986424 ignition[667]: reading system config file "/usr/lib/ignition/user.ign" May 13 00:09:16.986432 ignition[667]: no config at "/usr/lib/ignition/user.ign" May 13 00:09:16.986456 ignition[667]: op(1): [started] loading QEMU firmware config module May 13 00:09:16.986460 ignition[667]: op(1): executing: "modprobe" "qemu_fw_cfg" May 13 00:09:16.996716 ignition[667]: op(1): [finished] loading QEMU firmware config module May 13 00:09:16.996794 systemd-networkd[765]: lo: Link UP May 13 00:09:16.996798 systemd-networkd[765]: lo: Gained carrier May 13 00:09:16.997514 systemd-networkd[765]: Enumeration completed May 13 00:09:16.997879 systemd[1]: Started systemd-networkd.service - Network Configuration. May 13 00:09:16.997948 systemd-networkd[765]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. May 13 00:09:16.997952 systemd-networkd[765]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. May 13 00:09:16.998813 systemd-networkd[765]: eth0: Link UP May 13 00:09:16.998816 systemd-networkd[765]: eth0: Gained carrier May 13 00:09:16.998823 systemd-networkd[765]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. May 13 00:09:16.999822 systemd[1]: Reached target network.target - Network. May 13 00:09:17.012322 ignition[667]: parsing config with SHA512: 67ded961dcf00f55e5290a2024555559871a884ef2fc76ed60429f900f5889257843c93f905ace5df75fb9aee18a704501ef86cf3551f44ddaba4e992f1ad84d May 13 00:09:17.015479 unknown[667]: fetched base config from "system" May 13 00:09:17.015489 unknown[667]: fetched user config from "qemu" May 13 00:09:17.015778 ignition[667]: fetch-offline: fetch-offline passed May 13 00:09:17.016222 systemd-networkd[765]: eth0: DHCPv4 address 10.0.0.19/16, gateway 10.0.0.1 acquired from 10.0.0.1 May 13 00:09:17.015842 ignition[667]: Ignition finished successfully May 13 00:09:17.019938 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). May 13 00:09:17.022165 systemd[1]: ignition-fetch.service - Ignition (fetch) was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json). May 13 00:09:17.034416 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... 
May 13 00:09:17.046118 ignition[772]: Ignition 2.19.0 May 13 00:09:17.046128 ignition[772]: Stage: kargs May 13 00:09:17.046328 ignition[772]: no configs at "/usr/lib/ignition/base.d" May 13 00:09:17.046338 ignition[772]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" May 13 00:09:17.049818 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). May 13 00:09:17.047005 ignition[772]: kargs: kargs passed May 13 00:09:17.047052 ignition[772]: Ignition finished successfully May 13 00:09:17.052161 systemd[1]: Starting ignition-disks.service - Ignition (disks)... May 13 00:09:17.066068 ignition[779]: Ignition 2.19.0 May 13 00:09:17.066078 ignition[779]: Stage: disks May 13 00:09:17.066280 ignition[779]: no configs at "/usr/lib/ignition/base.d" May 13 00:09:17.066290 ignition[779]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" May 13 00:09:17.068937 systemd[1]: Finished ignition-disks.service - Ignition (disks). May 13 00:09:17.066951 ignition[779]: disks: disks passed May 13 00:09:17.070286 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. May 13 00:09:17.067000 ignition[779]: Ignition finished successfully May 13 00:09:17.072013 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. May 13 00:09:17.074004 systemd[1]: Reached target local-fs.target - Local File Systems. May 13 00:09:17.075473 systemd[1]: Reached target sysinit.target - System Initialization. May 13 00:09:17.077296 systemd[1]: Reached target basic.target - Basic System. May 13 00:09:17.090374 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... May 13 00:09:17.099508 systemd-resolved[278]: Detected conflict on linux IN A 10.0.0.19 May 13 00:09:17.099523 systemd-resolved[278]: Hostname conflict, changing published hostname from 'linux' to 'linux10'. May 13 00:09:17.102431 systemd-fsck[789]: ROOT: clean, 14/553520 files, 52654/553472 blocks May 13 00:09:17.106617 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. May 13 00:09:17.108788 systemd[1]: Mounting sysroot.mount - /sysroot... May 13 00:09:17.154206 kernel: EXT4-fs (vda9): mounted filesystem 9903c37e-4e5a-41d4-80e5-5c3428d04b7e r/w with ordered data mode. Quota mode: none. May 13 00:09:17.154858 systemd[1]: Mounted sysroot.mount - /sysroot. May 13 00:09:17.156264 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. May 13 00:09:17.168294 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... May 13 00:09:17.170783 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... May 13 00:09:17.171898 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met. May 13 00:09:17.171947 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). May 13 00:09:17.171971 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. May 13 00:09:17.178362 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. May 13 00:09:17.180687 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... 
May 13 00:09:17.186746 kernel: BTRFS: device label OEM devid 1 transid 13 /dev/vda6 scanned by mount (797) May 13 00:09:17.186789 kernel: BTRFS info (device vda6): first mount of filesystem 0068254f-7e0d-4c83-ad3e-204802432981 May 13 00:09:17.186800 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm May 13 00:09:17.186810 kernel: BTRFS info (device vda6): using free space tree May 13 00:09:17.188215 kernel: BTRFS info (device vda6): auto enabling async discard May 13 00:09:17.189872 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. May 13 00:09:17.231718 initrd-setup-root[823]: cut: /sysroot/etc/passwd: No such file or directory May 13 00:09:17.236594 initrd-setup-root[830]: cut: /sysroot/etc/group: No such file or directory May 13 00:09:17.241359 initrd-setup-root[837]: cut: /sysroot/etc/shadow: No such file or directory May 13 00:09:17.245828 initrd-setup-root[844]: cut: /sysroot/etc/gshadow: No such file or directory May 13 00:09:17.320250 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. May 13 00:09:17.332305 systemd[1]: Starting ignition-mount.service - Ignition (mount)... May 13 00:09:17.334000 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... May 13 00:09:17.340194 kernel: BTRFS info (device vda6): last unmount of filesystem 0068254f-7e0d-4c83-ad3e-204802432981 May 13 00:09:17.357094 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. May 13 00:09:17.359357 ignition[912]: INFO : Ignition 2.19.0 May 13 00:09:17.359357 ignition[912]: INFO : Stage: mount May 13 00:09:17.361880 ignition[912]: INFO : no configs at "/usr/lib/ignition/base.d" May 13 00:09:17.361880 ignition[912]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" May 13 00:09:17.361880 ignition[912]: INFO : mount: mount passed May 13 00:09:17.361880 ignition[912]: INFO : Ignition finished successfully May 13 00:09:17.362379 systemd[1]: Finished ignition-mount.service - Ignition (mount). May 13 00:09:17.374395 systemd[1]: Starting ignition-files.service - Ignition (files)... May 13 00:09:17.822763 systemd[1]: sysroot-oem.mount: Deactivated successfully. May 13 00:09:17.837391 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... May 13 00:09:17.843200 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/vda6 scanned by mount (925) May 13 00:09:17.845710 kernel: BTRFS info (device vda6): first mount of filesystem 0068254f-7e0d-4c83-ad3e-204802432981 May 13 00:09:17.845729 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm May 13 00:09:17.845740 kernel: BTRFS info (device vda6): using free space tree May 13 00:09:17.849217 kernel: BTRFS info (device vda6): auto enabling async discard May 13 00:09:17.849834 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. 
May 13 00:09:17.867357 ignition[942]: INFO : Ignition 2.19.0 May 13 00:09:17.867357 ignition[942]: INFO : Stage: files May 13 00:09:17.869126 ignition[942]: INFO : no configs at "/usr/lib/ignition/base.d" May 13 00:09:17.869126 ignition[942]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" May 13 00:09:17.869126 ignition[942]: DEBUG : files: compiled without relabeling support, skipping May 13 00:09:17.872766 ignition[942]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" May 13 00:09:17.872766 ignition[942]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" May 13 00:09:17.872766 ignition[942]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" May 13 00:09:17.872766 ignition[942]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" May 13 00:09:17.872766 ignition[942]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" May 13 00:09:17.871777 unknown[942]: wrote ssh authorized keys file for user: core May 13 00:09:17.880500 ignition[942]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/home/core/install.sh" May 13 00:09:17.880500 ignition[942]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/home/core/install.sh" May 13 00:09:17.880500 ignition[942]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/etc/flatcar/update.conf" May 13 00:09:17.880500 ignition[942]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/etc/flatcar/update.conf" May 13 00:09:17.880500 ignition[942]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.31.0-arm64.raw" May 13 00:09:17.880500 ignition[942]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.31.0-arm64.raw" May 13 00:09:17.880500 ignition[942]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.31.0-arm64.raw" May 13 00:09:17.880500 ignition[942]: INFO : files: createFilesystemsFiles: createFiles: op(6): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.31.0-arm64.raw: attempt #1 May 13 00:09:18.162163 ignition[942]: INFO : files: createFilesystemsFiles: createFiles: op(6): GET result: OK May 13 00:09:18.563863 ignition[942]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.31.0-arm64.raw" May 13 00:09:18.563863 ignition[942]: INFO : files: op(7): [started] processing unit "coreos-metadata.service" May 13 00:09:18.567686 ignition[942]: INFO : files: op(7): op(8): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" May 13 00:09:18.567686 ignition[942]: INFO : files: op(7): op(8): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" May 13 00:09:18.567686 ignition[942]: INFO : files: op(7): [finished] processing unit "coreos-metadata.service" May 13 00:09:18.567686 ignition[942]: INFO : files: op(9): [started] setting preset to disabled for "coreos-metadata.service" May 13 00:09:18.591780 ignition[942]: INFO : 
files: op(9): op(a): [started] removing enablement symlink(s) for "coreos-metadata.service" May 13 00:09:18.595973 ignition[942]: INFO : files: op(9): op(a): [finished] removing enablement symlink(s) for "coreos-metadata.service" May 13 00:09:18.597551 ignition[942]: INFO : files: op(9): [finished] setting preset to disabled for "coreos-metadata.service" May 13 00:09:18.597551 ignition[942]: INFO : files: createResultFile: createFiles: op(b): [started] writing file "/sysroot/etc/.ignition-result.json" May 13 00:09:18.597551 ignition[942]: INFO : files: createResultFile: createFiles: op(b): [finished] writing file "/sysroot/etc/.ignition-result.json" May 13 00:09:18.597551 ignition[942]: INFO : files: files passed May 13 00:09:18.597551 ignition[942]: INFO : Ignition finished successfully May 13 00:09:18.599214 systemd[1]: Finished ignition-files.service - Ignition (files). May 13 00:09:18.612396 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... May 13 00:09:18.614465 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... May 13 00:09:18.617024 systemd[1]: ignition-quench.service: Deactivated successfully. May 13 00:09:18.617147 systemd[1]: Finished ignition-quench.service - Ignition (record completion). May 13 00:09:18.622972 initrd-setup-root-after-ignition[970]: grep: /sysroot/oem/oem-release: No such file or directory May 13 00:09:18.625460 initrd-setup-root-after-ignition[972]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory May 13 00:09:18.625460 initrd-setup-root-after-ignition[972]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory May 13 00:09:18.628630 initrd-setup-root-after-ignition[976]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory May 13 00:09:18.627856 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. May 13 00:09:18.630384 systemd[1]: Reached target ignition-complete.target - Ignition Complete. May 13 00:09:18.646434 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... May 13 00:09:18.667435 systemd[1]: initrd-parse-etc.service: Deactivated successfully. May 13 00:09:18.668267 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. May 13 00:09:18.669778 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. May 13 00:09:18.671599 systemd[1]: Reached target initrd.target - Initrd Default Target. May 13 00:09:18.673381 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. May 13 00:09:18.674276 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... May 13 00:09:18.692234 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. May 13 00:09:18.707486 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... May 13 00:09:18.715870 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. May 13 00:09:18.717214 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. May 13 00:09:18.719360 systemd[1]: Stopped target timers.target - Timer Units. May 13 00:09:18.721232 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. May 13 00:09:18.721380 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. May 13 00:09:18.723997 systemd[1]: Stopped target initrd.target - Initrd Default Target. 
May 13 00:09:18.726080 systemd[1]: Stopped target basic.target - Basic System. May 13 00:09:18.727779 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. May 13 00:09:18.729493 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. May 13 00:09:18.731402 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. May 13 00:09:18.733328 systemd[1]: Stopped target remote-fs.target - Remote File Systems. May 13 00:09:18.735226 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. May 13 00:09:18.737263 systemd[1]: Stopped target sysinit.target - System Initialization. May 13 00:09:18.739279 systemd[1]: Stopped target local-fs.target - Local File Systems. May 13 00:09:18.741065 systemd[1]: Stopped target swap.target - Swaps. May 13 00:09:18.742616 systemd[1]: dracut-pre-mount.service: Deactivated successfully. May 13 00:09:18.742758 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. May 13 00:09:18.745234 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. May 13 00:09:18.747249 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). May 13 00:09:18.749276 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. May 13 00:09:18.750259 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. May 13 00:09:18.751544 systemd[1]: dracut-initqueue.service: Deactivated successfully. May 13 00:09:18.751687 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. May 13 00:09:18.754492 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. May 13 00:09:18.754620 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). May 13 00:09:18.756616 systemd[1]: Stopped target paths.target - Path Units. May 13 00:09:18.758130 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. May 13 00:09:18.760110 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. May 13 00:09:18.761434 systemd[1]: Stopped target slices.target - Slice Units. May 13 00:09:18.762952 systemd[1]: Stopped target sockets.target - Socket Units. May 13 00:09:18.764688 systemd[1]: iscsid.socket: Deactivated successfully. May 13 00:09:18.764784 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. May 13 00:09:18.766854 systemd[1]: iscsiuio.socket: Deactivated successfully. May 13 00:09:18.766941 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. May 13 00:09:18.768521 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. May 13 00:09:18.768642 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. May 13 00:09:18.770388 systemd[1]: ignition-files.service: Deactivated successfully. May 13 00:09:18.770494 systemd[1]: Stopped ignition-files.service - Ignition (files). May 13 00:09:18.778391 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... May 13 00:09:18.780215 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... May 13 00:09:18.781048 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. May 13 00:09:18.781205 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. May 13 00:09:18.783232 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. May 13 00:09:18.783353 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. 
May 13 00:09:18.789475 systemd[1]: initrd-cleanup.service: Deactivated successfully. May 13 00:09:18.789583 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. May 13 00:09:18.793401 ignition[997]: INFO : Ignition 2.19.0 May 13 00:09:18.793401 ignition[997]: INFO : Stage: umount May 13 00:09:18.795090 ignition[997]: INFO : no configs at "/usr/lib/ignition/base.d" May 13 00:09:18.795090 ignition[997]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" May 13 00:09:18.795090 ignition[997]: INFO : umount: umount passed May 13 00:09:18.795090 ignition[997]: INFO : Ignition finished successfully May 13 00:09:18.795589 systemd[1]: sysroot-boot.mount: Deactivated successfully. May 13 00:09:18.796080 systemd[1]: ignition-mount.service: Deactivated successfully. May 13 00:09:18.796171 systemd[1]: Stopped ignition-mount.service - Ignition (mount). May 13 00:09:18.797900 systemd[1]: Stopped target network.target - Network. May 13 00:09:18.799331 systemd[1]: ignition-disks.service: Deactivated successfully. May 13 00:09:18.799417 systemd[1]: Stopped ignition-disks.service - Ignition (disks). May 13 00:09:18.801305 systemd[1]: ignition-kargs.service: Deactivated successfully. May 13 00:09:18.801362 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). May 13 00:09:18.803331 systemd[1]: ignition-setup.service: Deactivated successfully. May 13 00:09:18.803381 systemd[1]: Stopped ignition-setup.service - Ignition (setup). May 13 00:09:18.805062 systemd[1]: ignition-setup-pre.service: Deactivated successfully. May 13 00:09:18.805118 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. May 13 00:09:18.807160 systemd[1]: Stopping systemd-networkd.service - Network Configuration... May 13 00:09:18.809355 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... May 13 00:09:18.811214 systemd[1]: sysroot-boot.service: Deactivated successfully. May 13 00:09:18.811335 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. May 13 00:09:18.813373 systemd[1]: initrd-setup-root.service: Deactivated successfully. May 13 00:09:18.813472 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. May 13 00:09:18.821870 systemd[1]: systemd-resolved.service: Deactivated successfully. May 13 00:09:18.822011 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. May 13 00:09:18.824275 systemd-networkd[765]: eth0: DHCPv6 lease lost May 13 00:09:18.824315 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. May 13 00:09:18.824376 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. May 13 00:09:18.825910 systemd[1]: systemd-networkd.service: Deactivated successfully. May 13 00:09:18.826041 systemd[1]: Stopped systemd-networkd.service - Network Configuration. May 13 00:09:18.828298 systemd[1]: systemd-networkd.socket: Deactivated successfully. May 13 00:09:18.828350 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. May 13 00:09:18.836300 systemd[1]: Stopping network-cleanup.service - Network Cleanup... May 13 00:09:18.837197 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. May 13 00:09:18.837283 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. May 13 00:09:18.839559 systemd[1]: systemd-sysctl.service: Deactivated successfully. May 13 00:09:18.839608 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. 
May 13 00:09:18.841409 systemd[1]: systemd-modules-load.service: Deactivated successfully. May 13 00:09:18.841460 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. May 13 00:09:18.843700 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... May 13 00:09:18.857833 systemd[1]: network-cleanup.service: Deactivated successfully. May 13 00:09:18.857949 systemd[1]: Stopped network-cleanup.service - Network Cleanup. May 13 00:09:18.860161 systemd[1]: systemd-udevd.service: Deactivated successfully. May 13 00:09:18.860351 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. May 13 00:09:18.863627 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. May 13 00:09:18.863685 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. May 13 00:09:18.865494 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. May 13 00:09:18.865530 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. May 13 00:09:18.867268 systemd[1]: dracut-pre-udev.service: Deactivated successfully. May 13 00:09:18.867325 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. May 13 00:09:18.870370 systemd[1]: dracut-cmdline.service: Deactivated successfully. May 13 00:09:18.870426 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. May 13 00:09:18.873097 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. May 13 00:09:18.873151 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. May 13 00:09:18.881365 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... May 13 00:09:18.882423 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. May 13 00:09:18.882493 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. May 13 00:09:18.884674 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully. May 13 00:09:18.884725 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. May 13 00:09:18.886726 systemd[1]: kmod-static-nodes.service: Deactivated successfully. May 13 00:09:18.886779 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. May 13 00:09:18.889020 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. May 13 00:09:18.889077 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. May 13 00:09:18.891520 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. May 13 00:09:18.892214 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. May 13 00:09:18.894276 systemd[1]: Reached target initrd-switch-root.target - Switch Root. May 13 00:09:18.896542 systemd[1]: Starting initrd-switch-root.service - Switch Root... May 13 00:09:18.907336 systemd[1]: Switching root. May 13 00:09:18.944442 systemd-journald[237]: Journal stopped May 13 00:09:19.638436 systemd-journald[237]: Received SIGTERM from PID 1 (systemd). 
May 13 00:09:19.638499 kernel: SELinux: policy capability network_peer_controls=1 May 13 00:09:19.638511 kernel: SELinux: policy capability open_perms=1 May 13 00:09:19.638521 kernel: SELinux: policy capability extended_socket_class=1 May 13 00:09:19.638531 kernel: SELinux: policy capability always_check_network=0 May 13 00:09:19.638543 kernel: SELinux: policy capability cgroup_seclabel=1 May 13 00:09:19.638554 kernel: SELinux: policy capability nnp_nosuid_transition=1 May 13 00:09:19.638563 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 May 13 00:09:19.638572 kernel: SELinux: policy capability ioctl_skip_cloexec=0 May 13 00:09:19.638584 kernel: audit: type=1403 audit(1747094959.045:2): auid=4294967295 ses=4294967295 lsm=selinux res=1 May 13 00:09:19.638595 systemd[1]: Successfully loaded SELinux policy in 32.307ms. May 13 00:09:19.638612 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 9.927ms. May 13 00:09:19.638624 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) May 13 00:09:19.638636 systemd[1]: Detected virtualization kvm. May 13 00:09:19.638646 systemd[1]: Detected architecture arm64. May 13 00:09:19.638657 systemd[1]: Detected first boot. May 13 00:09:19.638667 systemd[1]: Initializing machine ID from VM UUID. May 13 00:09:19.638680 zram_generator::config[1041]: No configuration found. May 13 00:09:19.638693 systemd[1]: Populated /etc with preset unit settings. May 13 00:09:19.638704 systemd[1]: initrd-switch-root.service: Deactivated successfully. May 13 00:09:19.638714 systemd[1]: Stopped initrd-switch-root.service - Switch Root. May 13 00:09:19.638725 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. May 13 00:09:19.638736 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. May 13 00:09:19.638747 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. May 13 00:09:19.638757 systemd[1]: Created slice system-getty.slice - Slice /system/getty. May 13 00:09:19.638767 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. May 13 00:09:19.638778 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. May 13 00:09:19.638790 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. May 13 00:09:19.638801 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. May 13 00:09:19.638811 systemd[1]: Created slice user.slice - User and Session Slice. May 13 00:09:19.638822 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. May 13 00:09:19.638833 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. May 13 00:09:19.638844 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. May 13 00:09:19.638854 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. May 13 00:09:19.638865 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. 
May 13 00:09:19.638877 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... May 13 00:09:19.638889 systemd[1]: Expecting device dev-ttyAMA0.device - /dev/ttyAMA0... May 13 00:09:19.638902 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). May 13 00:09:19.638912 systemd[1]: Stopped target initrd-switch-root.target - Switch Root. May 13 00:09:19.638923 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems. May 13 00:09:19.638933 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System. May 13 00:09:19.638944 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. May 13 00:09:19.638954 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. May 13 00:09:19.638966 systemd[1]: Reached target remote-fs.target - Remote File Systems. May 13 00:09:19.638977 systemd[1]: Reached target slices.target - Slice Units. May 13 00:09:19.638987 systemd[1]: Reached target swap.target - Swaps. May 13 00:09:19.638998 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. May 13 00:09:19.639008 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. May 13 00:09:19.639019 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. May 13 00:09:19.639030 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. May 13 00:09:19.639040 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. May 13 00:09:19.639052 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. May 13 00:09:19.639062 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... May 13 00:09:19.639074 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... May 13 00:09:19.639084 systemd[1]: Mounting media.mount - External Media Directory... May 13 00:09:19.639095 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... May 13 00:09:19.639106 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... May 13 00:09:19.639116 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... May 13 00:09:19.639127 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). May 13 00:09:19.639137 systemd[1]: Reached target machines.target - Containers. May 13 00:09:19.639148 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... May 13 00:09:19.639160 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. May 13 00:09:19.639171 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... May 13 00:09:19.639257 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... May 13 00:09:19.639273 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... May 13 00:09:19.639284 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... May 13 00:09:19.639295 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... May 13 00:09:19.639305 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... May 13 00:09:19.639316 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... 
May 13 00:09:19.639327 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). May 13 00:09:19.639340 systemd[1]: systemd-fsck-root.service: Deactivated successfully. May 13 00:09:19.639350 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device. May 13 00:09:19.639360 kernel: fuse: init (API version 7.39) May 13 00:09:19.639371 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. May 13 00:09:19.639381 systemd[1]: Stopped systemd-fsck-usr.service. May 13 00:09:19.639391 kernel: ACPI: bus type drm_connector registered May 13 00:09:19.639401 kernel: loop: module loaded May 13 00:09:19.639410 systemd[1]: Starting systemd-journald.service - Journal Service... May 13 00:09:19.639422 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... May 13 00:09:19.639434 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... May 13 00:09:19.639444 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... May 13 00:09:19.639476 systemd-journald[1108]: Collecting audit messages is disabled. May 13 00:09:19.639501 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... May 13 00:09:19.639514 systemd-journald[1108]: Journal started May 13 00:09:19.639536 systemd-journald[1108]: Runtime Journal (/run/log/journal/52ed7edda6114e8c8ed9509a7c90e900) is 5.9M, max 47.3M, 41.4M free. May 13 00:09:19.428664 systemd[1]: Queued start job for default target multi-user.target. May 13 00:09:19.443734 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6. May 13 00:09:19.444098 systemd[1]: systemd-journald.service: Deactivated successfully. May 13 00:09:19.642654 systemd[1]: verity-setup.service: Deactivated successfully. May 13 00:09:19.642689 systemd[1]: Stopped verity-setup.service. May 13 00:09:19.646882 systemd[1]: Started systemd-journald.service - Journal Service. May 13 00:09:19.647601 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. May 13 00:09:19.648860 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. May 13 00:09:19.650127 systemd[1]: Mounted media.mount - External Media Directory. May 13 00:09:19.651343 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. May 13 00:09:19.652569 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. May 13 00:09:19.653830 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. May 13 00:09:19.656227 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. May 13 00:09:19.657660 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. May 13 00:09:19.659255 systemd[1]: modprobe@configfs.service: Deactivated successfully. May 13 00:09:19.659405 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. May 13 00:09:19.660904 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. May 13 00:09:19.661060 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. May 13 00:09:19.662501 systemd[1]: modprobe@drm.service: Deactivated successfully. May 13 00:09:19.662638 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. May 13 00:09:19.664010 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. May 13 00:09:19.664157 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. 
May 13 00:09:19.665835 systemd[1]: modprobe@fuse.service: Deactivated successfully. May 13 00:09:19.665988 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. May 13 00:09:19.667388 systemd[1]: modprobe@loop.service: Deactivated successfully. May 13 00:09:19.667523 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. May 13 00:09:19.668857 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. May 13 00:09:19.670461 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. May 13 00:09:19.671972 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. May 13 00:09:19.684830 systemd[1]: Reached target network-pre.target - Preparation for Network. May 13 00:09:19.695357 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... May 13 00:09:19.697620 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... May 13 00:09:19.698781 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). May 13 00:09:19.698827 systemd[1]: Reached target local-fs.target - Local File Systems. May 13 00:09:19.701021 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink). May 13 00:09:19.703385 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... May 13 00:09:19.705813 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... May 13 00:09:19.707066 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. May 13 00:09:19.708974 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... May 13 00:09:19.711380 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... May 13 00:09:19.712630 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). May 13 00:09:19.714008 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... May 13 00:09:19.715285 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. May 13 00:09:19.717431 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... May 13 00:09:19.720878 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... May 13 00:09:19.725341 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... May 13 00:09:19.728350 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. May 13 00:09:19.731568 systemd-journald[1108]: Time spent on flushing to /var/log/journal/52ed7edda6114e8c8ed9509a7c90e900 is 14.558ms for 844 entries. May 13 00:09:19.731568 systemd-journald[1108]: System Journal (/var/log/journal/52ed7edda6114e8c8ed9509a7c90e900) is 8.0M, max 195.6M, 187.6M free. May 13 00:09:19.756785 systemd-journald[1108]: Received client request to flush runtime journal. May 13 00:09:19.756833 kernel: loop0: detected capacity change from 0 to 189592 May 13 00:09:19.731505 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. May 13 00:09:19.734059 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. 
May 13 00:09:19.736292 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. May 13 00:09:19.739995 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. May 13 00:09:19.744452 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. May 13 00:09:19.747407 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk... May 13 00:09:19.751748 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization... May 13 00:09:19.759876 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. May 13 00:09:19.769510 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher May 13 00:09:19.772855 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. May 13 00:09:19.776084 systemd-tmpfiles[1153]: ACLs are not supported, ignoring. May 13 00:09:19.776102 systemd-tmpfiles[1153]: ACLs are not supported, ignoring. May 13 00:09:19.783431 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. May 13 00:09:19.793610 systemd[1]: Starting systemd-sysusers.service - Create System Users... May 13 00:09:19.795206 udevadm[1161]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation.service, lvm2-activation-early.service not to pull it in. May 13 00:09:19.796111 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. May 13 00:09:19.801219 kernel: loop1: detected capacity change from 0 to 114328 May 13 00:09:19.798547 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk. May 13 00:09:19.824429 systemd[1]: Finished systemd-sysusers.service - Create System Users. May 13 00:09:19.829247 kernel: loop2: detected capacity change from 0 to 114432 May 13 00:09:19.835451 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... May 13 00:09:19.849432 systemd-tmpfiles[1176]: ACLs are not supported, ignoring. May 13 00:09:19.849452 systemd-tmpfiles[1176]: ACLs are not supported, ignoring. May 13 00:09:19.853783 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. May 13 00:09:19.871225 kernel: loop3: detected capacity change from 0 to 189592 May 13 00:09:19.877209 kernel: loop4: detected capacity change from 0 to 114328 May 13 00:09:19.881253 kernel: loop5: detected capacity change from 0 to 114432 May 13 00:09:19.884208 (sd-merge)[1181]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes'. May 13 00:09:19.884618 (sd-merge)[1181]: Merged extensions into '/usr'. May 13 00:09:19.888513 systemd[1]: Reloading requested from client PID 1152 ('systemd-sysext') (unit systemd-sysext.service)... May 13 00:09:19.888532 systemd[1]: Reloading... May 13 00:09:19.941209 zram_generator::config[1204]: No configuration found. May 13 00:09:20.011934 ldconfig[1147]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. May 13 00:09:20.046937 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. May 13 00:09:20.082711 systemd[1]: Reloading finished in 193 ms. May 13 00:09:20.116920 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. 
May 13 00:09:20.118466 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. May 13 00:09:20.127446 systemd[1]: Starting ensure-sysext.service... May 13 00:09:20.129404 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... May 13 00:09:20.135583 systemd[1]: Reloading requested from client PID 1244 ('systemctl') (unit ensure-sysext.service)... May 13 00:09:20.135601 systemd[1]: Reloading... May 13 00:09:20.148433 systemd-tmpfiles[1245]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. May 13 00:09:20.149017 systemd-tmpfiles[1245]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. May 13 00:09:20.149802 systemd-tmpfiles[1245]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. May 13 00:09:20.150118 systemd-tmpfiles[1245]: ACLs are not supported, ignoring. May 13 00:09:20.150260 systemd-tmpfiles[1245]: ACLs are not supported, ignoring. May 13 00:09:20.152850 systemd-tmpfiles[1245]: Detected autofs mount point /boot during canonicalization of boot. May 13 00:09:20.152964 systemd-tmpfiles[1245]: Skipping /boot May 13 00:09:20.160050 systemd-tmpfiles[1245]: Detected autofs mount point /boot during canonicalization of boot. May 13 00:09:20.160221 systemd-tmpfiles[1245]: Skipping /boot May 13 00:09:20.183224 zram_generator::config[1272]: No configuration found. May 13 00:09:20.271867 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. May 13 00:09:20.308809 systemd[1]: Reloading finished in 172 ms. May 13 00:09:20.326281 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. May 13 00:09:20.338669 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. May 13 00:09:20.347038 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... May 13 00:09:20.349842 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... May 13 00:09:20.352474 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... May 13 00:09:20.356624 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... May 13 00:09:20.363909 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... May 13 00:09:20.366726 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... May 13 00:09:20.370449 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. May 13 00:09:20.372275 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... May 13 00:09:20.376525 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... May 13 00:09:20.380491 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... May 13 00:09:20.381907 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. May 13 00:09:20.384209 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. May 13 00:09:20.387887 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. 
May 13 00:09:20.388024 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. May 13 00:09:20.390171 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. May 13 00:09:20.390416 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. May 13 00:09:20.402754 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. May 13 00:09:20.404379 systemd-udevd[1314]: Using default interface naming scheme 'v255'. May 13 00:09:20.405027 systemd[1]: modprobe@loop.service: Deactivated successfully. May 13 00:09:20.405295 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. May 13 00:09:20.410771 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. May 13 00:09:20.419570 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... May 13 00:09:20.421994 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... May 13 00:09:20.425168 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... May 13 00:09:20.426547 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. May 13 00:09:20.430101 systemd[1]: Starting systemd-update-done.service - Update is Completed... May 13 00:09:20.435396 systemd[1]: Starting systemd-userdbd.service - User Database Manager... May 13 00:09:20.440486 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). May 13 00:09:20.441761 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. May 13 00:09:20.444041 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. May 13 00:09:20.444237 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. May 13 00:09:20.446047 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. May 13 00:09:20.446179 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. May 13 00:09:20.448662 systemd[1]: modprobe@loop.service: Deactivated successfully. May 13 00:09:20.448810 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. May 13 00:09:20.450715 augenrules[1360]: No rules May 13 00:09:20.450755 systemd[1]: Finished systemd-update-done.service - Update is Completed. May 13 00:09:20.454286 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. May 13 00:09:20.463490 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. May 13 00:09:20.480243 systemd[1]: Finished ensure-sysext.service. May 13 00:09:20.485679 systemd[1]: Condition check resulted in dev-ttyAMA0.device - /dev/ttyAMA0 being skipped. May 13 00:09:20.485969 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. May 13 00:09:20.490213 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 40 scanned by (udev-worker) (1346) May 13 00:09:20.493555 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... May 13 00:09:20.505489 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... May 13 00:09:20.509377 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... 
May 13 00:09:20.512255 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... May 13 00:09:20.514478 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. May 13 00:09:20.516387 systemd[1]: Starting systemd-networkd.service - Network Configuration... May 13 00:09:20.522620 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization... May 13 00:09:20.523848 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). May 13 00:09:20.524175 systemd[1]: Started systemd-userdbd.service - User Database Manager. May 13 00:09:20.525704 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. May 13 00:09:20.527257 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. May 13 00:09:20.528699 systemd[1]: modprobe@drm.service: Deactivated successfully. May 13 00:09:20.528855 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. May 13 00:09:20.531454 systemd[1]: modprobe@loop.service: Deactivated successfully. May 13 00:09:20.531760 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. May 13 00:09:20.543333 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. May 13 00:09:20.543532 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. May 13 00:09:20.551008 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. May 13 00:09:20.561761 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... May 13 00:09:20.563368 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). May 13 00:09:20.563434 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. May 13 00:09:20.584019 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. May 13 00:09:20.592737 systemd-networkd[1389]: lo: Link UP May 13 00:09:20.592746 systemd-networkd[1389]: lo: Gained carrier May 13 00:09:20.593489 systemd-networkd[1389]: Enumeration completed May 13 00:09:20.593613 systemd[1]: Started systemd-networkd.service - Network Configuration. May 13 00:09:20.593954 systemd-networkd[1389]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. May 13 00:09:20.593962 systemd-networkd[1389]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. May 13 00:09:20.594602 systemd-networkd[1389]: eth0: Link UP May 13 00:09:20.594611 systemd-networkd[1389]: eth0: Gained carrier May 13 00:09:20.594625 systemd-networkd[1389]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. May 13 00:09:20.600428 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... May 13 00:09:20.610425 systemd-networkd[1389]: eth0: DHCPv4 address 10.0.0.19/16, gateway 10.0.0.1 acquired from 10.0.0.1 May 13 00:09:20.618958 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization. May 13 00:09:20.622348 systemd-timesyncd[1390]: Contacted time server 10.0.0.1:123 (10.0.0.1). 
May 13 00:09:20.622413 systemd-timesyncd[1390]: Initial clock synchronization to Tue 2025-05-13 00:09:20.649794 UTC. May 13 00:09:20.624946 systemd[1]: Reached target time-set.target - System Time Set. May 13 00:09:20.627119 systemd-resolved[1312]: Positive Trust Anchors: May 13 00:09:20.627143 systemd-resolved[1312]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d May 13 00:09:20.627176 systemd-resolved[1312]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test May 13 00:09:20.633318 systemd-resolved[1312]: Defaulting to hostname 'linux'. May 13 00:09:20.638438 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... May 13 00:09:20.642035 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization. May 13 00:09:20.645629 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes... May 13 00:09:20.646950 systemd[1]: Started systemd-resolved.service - Network Name Resolution. May 13 00:09:20.648560 systemd[1]: Reached target network.target - Network. May 13 00:09:20.649519 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. May 13 00:09:20.668988 lvm[1407]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. May 13 00:09:20.684144 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. May 13 00:09:20.695883 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes. May 13 00:09:20.697434 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. May 13 00:09:20.698572 systemd[1]: Reached target sysinit.target - System Initialization. May 13 00:09:20.699717 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. May 13 00:09:20.700945 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. May 13 00:09:20.702425 systemd[1]: Started logrotate.timer - Daily rotation of log files. May 13 00:09:20.703600 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. May 13 00:09:20.704815 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. May 13 00:09:20.706215 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). May 13 00:09:20.706261 systemd[1]: Reached target paths.target - Path Units. May 13 00:09:20.707124 systemd[1]: Reached target timers.target - Timer Units. May 13 00:09:20.708904 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. May 13 00:09:20.711494 systemd[1]: Starting docker.socket - Docker Socket for the API... May 13 00:09:20.720272 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. May 13 00:09:20.722742 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes... 
May 13 00:09:20.724477 systemd[1]: Listening on docker.socket - Docker Socket for the API. May 13 00:09:20.725763 systemd[1]: Reached target sockets.target - Socket Units. May 13 00:09:20.726729 systemd[1]: Reached target basic.target - Basic System. May 13 00:09:20.727701 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. May 13 00:09:20.727735 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. May 13 00:09:20.728869 systemd[1]: Starting containerd.service - containerd container runtime... May 13 00:09:20.732221 lvm[1415]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. May 13 00:09:20.731135 systemd[1]: Starting dbus.service - D-Bus System Message Bus... May 13 00:09:20.736378 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... May 13 00:09:20.740340 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... May 13 00:09:20.741407 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). May 13 00:09:20.743407 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... May 13 00:09:20.750432 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... May 13 00:09:20.751702 jq[1418]: false May 13 00:09:20.753062 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... May 13 00:09:20.759036 systemd[1]: Starting systemd-logind.service - User Login Management... May 13 00:09:20.762784 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). May 13 00:09:20.763346 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. May 13 00:09:20.763732 extend-filesystems[1419]: Found loop3 May 13 00:09:20.764747 extend-filesystems[1419]: Found loop4 May 13 00:09:20.764747 extend-filesystems[1419]: Found loop5 May 13 00:09:20.767554 extend-filesystems[1419]: Found vda May 13 00:09:20.767554 extend-filesystems[1419]: Found vda1 May 13 00:09:20.767554 extend-filesystems[1419]: Found vda2 May 13 00:09:20.767554 extend-filesystems[1419]: Found vda3 May 13 00:09:20.767554 extend-filesystems[1419]: Found usr May 13 00:09:20.767554 extend-filesystems[1419]: Found vda4 May 13 00:09:20.767554 extend-filesystems[1419]: Found vda6 May 13 00:09:20.767554 extend-filesystems[1419]: Found vda7 May 13 00:09:20.767554 extend-filesystems[1419]: Found vda9 May 13 00:09:20.767554 extend-filesystems[1419]: Checking size of /dev/vda9 May 13 00:09:20.766434 systemd[1]: Starting update-engine.service - Update Engine... May 13 00:09:20.771033 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... May 13 00:09:20.783662 dbus-daemon[1417]: [system] SELinux support is enabled May 13 00:09:20.776222 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes. May 13 00:09:20.779913 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. May 13 00:09:20.780664 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. May 13 00:09:20.780972 systemd[1]: motdgen.service: Deactivated successfully. 
May 13 00:09:20.781125 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. May 13 00:09:20.782685 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. May 13 00:09:20.785161 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. May 13 00:09:20.787780 systemd[1]: Started dbus.service - D-Bus System Message Bus. May 13 00:09:20.787867 jq[1433]: true May 13 00:09:20.798333 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). May 13 00:09:20.798394 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. May 13 00:09:20.799865 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). May 13 00:09:20.799895 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. May 13 00:09:20.809531 extend-filesystems[1419]: Resized partition /dev/vda9 May 13 00:09:20.811284 (ntainerd)[1448]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR May 13 00:09:20.819251 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 40 scanned by (udev-worker) (1343) May 13 00:09:20.819343 jq[1439]: true May 13 00:09:20.822213 extend-filesystems[1449]: resize2fs 1.47.1 (20-May-2024) May 13 00:09:20.827218 kernel: EXT4-fs (vda9): resizing filesystem from 553472 to 1864699 blocks May 13 00:09:20.845017 systemd-logind[1425]: Watching system buttons on /dev/input/event0 (Power Button) May 13 00:09:20.847656 systemd-logind[1425]: New seat seat0. May 13 00:09:20.851192 update_engine[1427]: I20250513 00:09:20.849088 1427 main.cc:92] Flatcar Update Engine starting May 13 00:09:20.849467 systemd[1]: Started systemd-logind.service - User Login Management. May 13 00:09:20.854474 systemd[1]: Started update-engine.service - Update Engine. May 13 00:09:20.855646 update_engine[1427]: I20250513 00:09:20.855494 1427 update_check_scheduler.cc:74] Next update check in 4m3s May 13 00:09:20.864513 systemd[1]: Started locksmithd.service - Cluster reboot manager. May 13 00:09:20.867241 kernel: EXT4-fs (vda9): resized filesystem to 1864699 May 13 00:09:20.878052 extend-filesystems[1449]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required May 13 00:09:20.878052 extend-filesystems[1449]: old_desc_blocks = 1, new_desc_blocks = 1 May 13 00:09:20.878052 extend-filesystems[1449]: The filesystem on /dev/vda9 is now 1864699 (4k) blocks long. May 13 00:09:20.885488 extend-filesystems[1419]: Resized filesystem in /dev/vda9 May 13 00:09:20.879775 systemd[1]: extend-filesystems.service: Deactivated successfully. May 13 00:09:20.879992 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. May 13 00:09:20.903008 locksmithd[1460]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" May 13 00:09:20.910124 bash[1467]: Updated "/home/core/.ssh/authorized_keys" May 13 00:09:20.911299 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. May 13 00:09:20.913872 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met. 
May 13 00:09:21.013485 containerd[1448]: time="2025-05-13T00:09:21.013397319Z" level=info msg="starting containerd" revision=174e0d1785eeda18dc2beba45e1d5a188771636b version=v1.7.21 May 13 00:09:21.038301 containerd[1448]: time="2025-05-13T00:09:21.038073214Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 May 13 00:09:21.039751 containerd[1448]: time="2025-05-13T00:09:21.039699849Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.89-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 May 13 00:09:21.039751 containerd[1448]: time="2025-05-13T00:09:21.039740401Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 May 13 00:09:21.039797 containerd[1448]: time="2025-05-13T00:09:21.039758753Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 May 13 00:09:21.039958 containerd[1448]: time="2025-05-13T00:09:21.039931979Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1 May 13 00:09:21.039982 containerd[1448]: time="2025-05-13T00:09:21.039960349Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1 May 13 00:09:21.040037 containerd[1448]: time="2025-05-13T00:09:21.040014725Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1 May 13 00:09:21.040037 containerd[1448]: time="2025-05-13T00:09:21.040031755Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 May 13 00:09:21.040275 containerd[1448]: time="2025-05-13T00:09:21.040244211Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 May 13 00:09:21.040275 containerd[1448]: time="2025-05-13T00:09:21.040268614Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 May 13 00:09:21.040321 containerd[1448]: time="2025-05-13T00:09:21.040283480Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1 May 13 00:09:21.040321 containerd[1448]: time="2025-05-13T00:09:21.040294620Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 May 13 00:09:21.040397 containerd[1448]: time="2025-05-13T00:09:21.040380131Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 May 13 00:09:21.040600 containerd[1448]: time="2025-05-13T00:09:21.040574154Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 May 13 00:09:21.040702 containerd[1448]: time="2025-05-13T00:09:21.040685070Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." 
error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 May 13 00:09:21.040728 containerd[1448]: time="2025-05-13T00:09:21.040703743Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 May 13 00:09:21.040795 containerd[1448]: time="2025-05-13T00:09:21.040780599Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1 May 13 00:09:21.040841 containerd[1448]: time="2025-05-13T00:09:21.040828123Z" level=info msg="metadata content store policy set" policy=shared May 13 00:09:21.043732 containerd[1448]: time="2025-05-13T00:09:21.043696233Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 May 13 00:09:21.043778 containerd[1448]: time="2025-05-13T00:09:21.043751650Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 May 13 00:09:21.043778 containerd[1448]: time="2025-05-13T00:09:21.043769843Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1 May 13 00:09:21.043813 containerd[1448]: time="2025-05-13T00:09:21.043789197Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1 May 13 00:09:21.043813 containerd[1448]: time="2025-05-13T00:09:21.043803983Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 May 13 00:09:21.043982 containerd[1448]: time="2025-05-13T00:09:21.043949119Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 May 13 00:09:21.044227 containerd[1448]: time="2025-05-13T00:09:21.044207937Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 May 13 00:09:21.044354 containerd[1448]: time="2025-05-13T00:09:21.044329592Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2 May 13 00:09:21.044382 containerd[1448]: time="2025-05-13T00:09:21.044356520Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1 May 13 00:09:21.044382 containerd[1448]: time="2025-05-13T00:09:21.044370905Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1 May 13 00:09:21.044430 containerd[1448]: time="2025-05-13T00:09:21.044387334Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 May 13 00:09:21.044430 containerd[1448]: time="2025-05-13T00:09:21.044401519Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 May 13 00:09:21.044430 containerd[1448]: time="2025-05-13T00:09:21.044414422Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 May 13 00:09:21.044480 containerd[1448]: time="2025-05-13T00:09:21.044432253Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 May 13 00:09:21.044480 containerd[1448]: time="2025-05-13T00:09:21.044447160Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." 
type=io.containerd.service.v1 May 13 00:09:21.044480 containerd[1448]: time="2025-05-13T00:09:21.044460063Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 May 13 00:09:21.044480 containerd[1448]: time="2025-05-13T00:09:21.044472124Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 May 13 00:09:21.044547 containerd[1448]: time="2025-05-13T00:09:21.044483905Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 May 13 00:09:21.044547 containerd[1448]: time="2025-05-13T00:09:21.044504421Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 May 13 00:09:21.044547 containerd[1448]: time="2025-05-13T00:09:21.044518566Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 May 13 00:09:21.044547 containerd[1448]: time="2025-05-13T00:09:21.044531749Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 May 13 00:09:21.044547 containerd[1448]: time="2025-05-13T00:09:21.044544091Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 May 13 00:09:21.044638 containerd[1448]: time="2025-05-13T00:09:21.044555832Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 May 13 00:09:21.044638 containerd[1448]: time="2025-05-13T00:09:21.044571459Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 May 13 00:09:21.044638 containerd[1448]: time="2025-05-13T00:09:21.044582880Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 May 13 00:09:21.044638 containerd[1448]: time="2025-05-13T00:09:21.044595382Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 May 13 00:09:21.044638 containerd[1448]: time="2025-05-13T00:09:21.044608525Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1 May 13 00:09:21.044638 containerd[1448]: time="2025-05-13T00:09:21.044627118Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1 May 13 00:09:21.044736 containerd[1448]: time="2025-05-13T00:09:21.044640341Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 May 13 00:09:21.044736 containerd[1448]: time="2025-05-13T00:09:21.044652643Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1 May 13 00:09:21.044736 containerd[1448]: time="2025-05-13T00:09:21.044663983Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 May 13 00:09:21.044736 containerd[1448]: time="2025-05-13T00:09:21.044680332Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1 May 13 00:09:21.044736 containerd[1448]: time="2025-05-13T00:09:21.044702892Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1 May 13 00:09:21.044736 containerd[1448]: time="2025-05-13T00:09:21.044714552Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." 
type=io.containerd.grpc.v1 May 13 00:09:21.044736 containerd[1448]: time="2025-05-13T00:09:21.044725372Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 May 13 00:09:21.044861 containerd[1448]: time="2025-05-13T00:09:21.044846225Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 May 13 00:09:21.044884 containerd[1448]: time="2025-05-13T00:09:21.044863696Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1 May 13 00:09:21.044884 containerd[1448]: time="2025-05-13T00:09:21.044874435Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 May 13 00:09:21.044920 containerd[1448]: time="2025-05-13T00:09:21.044885535Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1 May 13 00:09:21.044920 containerd[1448]: time="2025-05-13T00:09:21.044895352Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 May 13 00:09:21.044920 containerd[1448]: time="2025-05-13T00:09:21.044907253Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1 May 13 00:09:21.044920 containerd[1448]: time="2025-05-13T00:09:21.044916910Z" level=info msg="NRI interface is disabled by configuration." May 13 00:09:21.044989 containerd[1448]: time="2025-05-13T00:09:21.044927970Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1 May 13 00:09:21.047058 containerd[1448]: time="2025-05-13T00:09:21.046236763Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false 
X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" May 13 00:09:21.047058 containerd[1448]: time="2025-05-13T00:09:21.046321232Z" level=info msg="Connect containerd service" May 13 00:09:21.047058 containerd[1448]: time="2025-05-13T00:09:21.046359059Z" level=info msg="using legacy CRI server" May 13 00:09:21.047058 containerd[1448]: time="2025-05-13T00:09:21.046367674Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" May 13 00:09:21.047058 containerd[1448]: time="2025-05-13T00:09:21.046475665Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" May 13 00:09:21.047697 containerd[1448]: time="2025-05-13T00:09:21.047666210Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" May 13 00:09:21.048025 containerd[1448]: time="2025-05-13T00:09:21.047866083Z" level=info msg="Start subscribing containerd event" May 13 00:09:21.048025 containerd[1448]: time="2025-05-13T00:09:21.048011820Z" level=info msg="Start recovering state" May 13 00:09:21.048091 containerd[1448]: time="2025-05-13T00:09:21.048084108Z" level=info msg="Start event monitor" May 13 00:09:21.048112 containerd[1448]: time="2025-05-13T00:09:21.048102260Z" level=info msg="Start snapshots syncer" May 13 00:09:21.048131 containerd[1448]: time="2025-05-13T00:09:21.048113360Z" level=info msg="Start cni network conf syncer for default" May 13 00:09:21.048131 containerd[1448]: time="2025-05-13T00:09:21.048121815Z" level=info msg="Start streaming server" May 13 00:09:21.048469 containerd[1448]: time="2025-05-13T00:09:21.048446228Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc May 13 00:09:21.048582 containerd[1448]: time="2025-05-13T00:09:21.048566441Z" level=info msg=serving... address=/run/containerd/containerd.sock May 13 00:09:21.048769 systemd[1]: Started containerd.service - containerd container runtime. May 13 00:09:21.050364 containerd[1448]: time="2025-05-13T00:09:21.050334766Z" level=info msg="containerd successfully booted in 0.037932s" May 13 00:09:21.716547 sshd_keygen[1437]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 May 13 00:09:21.736419 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. May 13 00:09:21.751512 systemd[1]: Starting issuegen.service - Generate /run/issue... May 13 00:09:21.757527 systemd[1]: issuegen.service: Deactivated successfully. May 13 00:09:21.757769 systemd[1]: Finished issuegen.service - Generate /run/issue. May 13 00:09:21.760663 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... 
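containerd's init above ends with `failed to load cni during init ... no network config found in /etc/cni/net.d: cni plugin not initialized`, which is expected before a network provider has dropped its config. The Python sketch below approximates that readiness check using the directories from the logged CRI config (`/etc/cni/net.d`, `/opt/cni/bin`); it is an illustration of the condition being reported, not containerd's actual code.

```python
#!/usr/bin/env python3
"""Rough sketch of the CNI readiness check containerd logs about above.

Paths come from the CRI plugin config in the log (NetworkPluginConfDir and
NetworkPluginBinDir); the logic is an approximation, not containerd's code.
"""
from pathlib import Path

CNI_CONF_DIR = Path("/etc/cni/net.d")   # NetworkPluginConfDir from the log
CNI_BIN_DIR = Path("/opt/cni/bin")      # NetworkPluginBinDir from the log

def cni_config_present() -> bool:
    """True if at least one CNI network config file exists."""
    if not CNI_CONF_DIR.is_dir():
        return False
    patterns = ("*.conf", "*.conflist", "*.json")
    return any(p for pat in patterns for p in CNI_CONF_DIR.glob(pat))

if __name__ == "__main__":
    if cni_config_present():
        print(f"CNI config found in {CNI_CONF_DIR}; plugin can initialize")
    else:
        # The state containerd reports above:
        # "no network config found in /etc/cni/net.d: cni plugin not initialized"
        print(f"no network config found in {CNI_CONF_DIR}: cni plugin not initialized")
    print(f"CNI binaries expected under {CNI_BIN_DIR}")
```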
May 13 00:09:21.775989 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. May 13 00:09:21.779020 systemd[1]: Started getty@tty1.service - Getty on tty1. May 13 00:09:21.781536 systemd[1]: Started serial-getty@ttyAMA0.service - Serial Getty on ttyAMA0. May 13 00:09:21.782914 systemd[1]: Reached target getty.target - Login Prompts. May 13 00:09:21.797359 systemd-networkd[1389]: eth0: Gained IPv6LL May 13 00:09:21.800067 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. May 13 00:09:21.801977 systemd[1]: Reached target network-online.target - Network is Online. May 13 00:09:21.814467 systemd[1]: Starting coreos-metadata.service - QEMU metadata agent... May 13 00:09:21.817127 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... May 13 00:09:21.819505 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... May 13 00:09:21.836120 systemd[1]: coreos-metadata.service: Deactivated successfully. May 13 00:09:21.836388 systemd[1]: Finished coreos-metadata.service - QEMU metadata agent. May 13 00:09:21.838527 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. May 13 00:09:21.845206 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. May 13 00:09:22.319321 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. May 13 00:09:22.320963 systemd[1]: Reached target multi-user.target - Multi-User System. May 13 00:09:22.323833 (kubelet)[1523]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS May 13 00:09:22.325354 systemd[1]: Startup finished in 587ms (kernel) + 4.336s (initrd) + 3.316s (userspace) = 8.240s. May 13 00:09:22.729372 kubelet[1523]: E0513 00:09:22.729256 1523 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" May 13 00:09:22.731624 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE May 13 00:09:22.731771 systemd[1]: kubelet.service: Failed with result 'exit-code'. May 13 00:09:27.943701 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. May 13 00:09:27.944736 systemd[1]: Started sshd@0-10.0.0.19:22-10.0.0.1:36006.service - OpenSSH per-connection server daemon (10.0.0.1:36006). May 13 00:09:28.000238 sshd[1536]: Accepted publickey for core from 10.0.0.1 port 36006 ssh2: RSA SHA256:ilwLBGyeejLKSU0doRti0j2W4iQ88Tp+35jhkd0iwiU May 13 00:09:28.001910 sshd[1536]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 13 00:09:28.015435 systemd-logind[1425]: New session 1 of user core. May 13 00:09:28.016426 systemd[1]: Created slice user-500.slice - User Slice of UID 500. May 13 00:09:28.024451 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... May 13 00:09:28.035212 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. May 13 00:09:28.038481 systemd[1]: Starting user@500.service - User Manager for UID 500... 
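The kubelet start at 00:09:22 fails with `failed to load kubelet config file ... /var/lib/kubelet/config.yaml: no such file or directory`, which is normal on first boot before the cluster tooling writes that file. Below is a minimal sketch of what could be written to that path to get past the error; the field values (cgroupDriver systemd, the containerd socket, the static pod path) are taken from later entries in this log, but the file the node actually ends up with is produced elsewhere, so treat this as illustrative only.

```python
#!/usr/bin/env python3
"""Illustrative only: write a minimal KubeletConfiguration to the path the
kubelet error above complains about (/var/lib/kubelet/config.yaml).

On this node the file is normally created later by the install tooling, so
both the path handling and the field values here are assumptions for the
sketch, not the config the cluster actually uses.
"""
from pathlib import Path

KUBELET_CONFIG = Path("/var/lib/kubelet/config.yaml")  # path from the log

MINIMAL_CONFIG = """\
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
cgroupDriver: systemd          # matches the CgroupDriver seen in the log
containerRuntimeEndpoint: unix:///run/containerd/containerd.sock
staticPodPath: /etc/kubernetes/manifests
"""

def main() -> None:
    KUBELET_CONFIG.parent.mkdir(parents=True, exist_ok=True)
    if KUBELET_CONFIG.exists():
        print(f"{KUBELET_CONFIG} already exists, leaving it alone")
        return
    KUBELET_CONFIG.write_text(MINIMAL_CONFIG)
    print(f"wrote minimal config to {KUBELET_CONFIG}")

if __name__ == "__main__":
    main()
```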
May 13 00:09:28.043491 (systemd)[1540]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) May 13 00:09:28.116511 systemd[1540]: Queued start job for default target default.target. May 13 00:09:28.128108 systemd[1540]: Created slice app.slice - User Application Slice. May 13 00:09:28.128137 systemd[1540]: Reached target paths.target - Paths. May 13 00:09:28.128149 systemd[1540]: Reached target timers.target - Timers. May 13 00:09:28.129398 systemd[1540]: Starting dbus.socket - D-Bus User Message Bus Socket... May 13 00:09:28.138937 systemd[1540]: Listening on dbus.socket - D-Bus User Message Bus Socket. May 13 00:09:28.138995 systemd[1540]: Reached target sockets.target - Sockets. May 13 00:09:28.139007 systemd[1540]: Reached target basic.target - Basic System. May 13 00:09:28.139042 systemd[1540]: Reached target default.target - Main User Target. May 13 00:09:28.139067 systemd[1540]: Startup finished in 90ms. May 13 00:09:28.139347 systemd[1]: Started user@500.service - User Manager for UID 500. May 13 00:09:28.140930 systemd[1]: Started session-1.scope - Session 1 of User core. May 13 00:09:28.204137 systemd[1]: Started sshd@1-10.0.0.19:22-10.0.0.1:36018.service - OpenSSH per-connection server daemon (10.0.0.1:36018). May 13 00:09:28.238544 sshd[1551]: Accepted publickey for core from 10.0.0.1 port 36018 ssh2: RSA SHA256:ilwLBGyeejLKSU0doRti0j2W4iQ88Tp+35jhkd0iwiU May 13 00:09:28.239831 sshd[1551]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 13 00:09:28.245434 systemd-logind[1425]: New session 2 of user core. May 13 00:09:28.264379 systemd[1]: Started session-2.scope - Session 2 of User core. May 13 00:09:28.316341 sshd[1551]: pam_unix(sshd:session): session closed for user core May 13 00:09:28.324457 systemd[1]: sshd@1-10.0.0.19:22-10.0.0.1:36018.service: Deactivated successfully. May 13 00:09:28.326459 systemd[1]: session-2.scope: Deactivated successfully. May 13 00:09:28.328461 systemd-logind[1425]: Session 2 logged out. Waiting for processes to exit. May 13 00:09:28.328850 systemd[1]: Started sshd@2-10.0.0.19:22-10.0.0.1:36020.service - OpenSSH per-connection server daemon (10.0.0.1:36020). May 13 00:09:28.330037 systemd-logind[1425]: Removed session 2. May 13 00:09:28.363088 sshd[1558]: Accepted publickey for core from 10.0.0.1 port 36020 ssh2: RSA SHA256:ilwLBGyeejLKSU0doRti0j2W4iQ88Tp+35jhkd0iwiU May 13 00:09:28.364352 sshd[1558]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 13 00:09:28.367855 systemd-logind[1425]: New session 3 of user core. May 13 00:09:28.377335 systemd[1]: Started session-3.scope - Session 3 of User core. May 13 00:09:28.424692 sshd[1558]: pam_unix(sshd:session): session closed for user core May 13 00:09:28.438438 systemd[1]: sshd@2-10.0.0.19:22-10.0.0.1:36020.service: Deactivated successfully. May 13 00:09:28.439778 systemd[1]: session-3.scope: Deactivated successfully. May 13 00:09:28.440951 systemd-logind[1425]: Session 3 logged out. Waiting for processes to exit. May 13 00:09:28.442011 systemd[1]: Started sshd@3-10.0.0.19:22-10.0.0.1:36034.service - OpenSSH per-connection server daemon (10.0.0.1:36034). May 13 00:09:28.442722 systemd-logind[1425]: Removed session 3. 
May 13 00:09:28.476138 sshd[1565]: Accepted publickey for core from 10.0.0.1 port 36034 ssh2: RSA SHA256:ilwLBGyeejLKSU0doRti0j2W4iQ88Tp+35jhkd0iwiU May 13 00:09:28.477421 sshd[1565]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 13 00:09:28.481350 systemd-logind[1425]: New session 4 of user core. May 13 00:09:28.492339 systemd[1]: Started session-4.scope - Session 4 of User core. May 13 00:09:28.545301 sshd[1565]: pam_unix(sshd:session): session closed for user core May 13 00:09:28.555400 systemd[1]: sshd@3-10.0.0.19:22-10.0.0.1:36034.service: Deactivated successfully. May 13 00:09:28.556776 systemd[1]: session-4.scope: Deactivated successfully. May 13 00:09:28.557869 systemd-logind[1425]: Session 4 logged out. Waiting for processes to exit. May 13 00:09:28.571487 systemd[1]: Started sshd@4-10.0.0.19:22-10.0.0.1:36040.service - OpenSSH per-connection server daemon (10.0.0.1:36040). May 13 00:09:28.572237 systemd-logind[1425]: Removed session 4. May 13 00:09:28.601474 sshd[1572]: Accepted publickey for core from 10.0.0.1 port 36040 ssh2: RSA SHA256:ilwLBGyeejLKSU0doRti0j2W4iQ88Tp+35jhkd0iwiU May 13 00:09:28.602936 sshd[1572]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 13 00:09:28.606165 systemd-logind[1425]: New session 5 of user core. May 13 00:09:28.618339 systemd[1]: Started session-5.scope - Session 5 of User core. May 13 00:09:28.681479 sudo[1575]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 May 13 00:09:28.681784 sudo[1575]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) May 13 00:09:28.695992 sudo[1575]: pam_unix(sudo:session): session closed for user root May 13 00:09:28.697767 sshd[1572]: pam_unix(sshd:session): session closed for user core May 13 00:09:28.706590 systemd[1]: sshd@4-10.0.0.19:22-10.0.0.1:36040.service: Deactivated successfully. May 13 00:09:28.708059 systemd[1]: session-5.scope: Deactivated successfully. May 13 00:09:28.709250 systemd-logind[1425]: Session 5 logged out. Waiting for processes to exit. May 13 00:09:28.719434 systemd[1]: Started sshd@5-10.0.0.19:22-10.0.0.1:36050.service - OpenSSH per-connection server daemon (10.0.0.1:36050). May 13 00:09:28.720175 systemd-logind[1425]: Removed session 5. May 13 00:09:28.750929 sshd[1580]: Accepted publickey for core from 10.0.0.1 port 36050 ssh2: RSA SHA256:ilwLBGyeejLKSU0doRti0j2W4iQ88Tp+35jhkd0iwiU May 13 00:09:28.752117 sshd[1580]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 13 00:09:28.755652 systemd-logind[1425]: New session 6 of user core. May 13 00:09:28.763313 systemd[1]: Started session-6.scope - Session 6 of User core. May 13 00:09:28.814106 sudo[1584]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules May 13 00:09:28.814447 sudo[1584]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) May 13 00:09:28.817365 sudo[1584]: pam_unix(sudo:session): session closed for user root May 13 00:09:28.821933 sudo[1583]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/systemctl restart audit-rules May 13 00:09:28.822213 sudo[1583]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) May 13 00:09:28.839425 systemd[1]: Stopping audit-rules.service - Load Security Auditing Rules... May 13 00:09:28.840703 auditctl[1587]: No rules May 13 00:09:28.841548 systemd[1]: audit-rules.service: Deactivated successfully. 
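Session 5 above runs `setenforce 1` via sudo before the audit-rules rewrite in session 6. As a small illustration, and assuming the usual selinuxfs mount point (which the log itself does not show), the resulting enforcement mode can be read back like this:

```python
#!/usr/bin/env python3
"""Tiny sketch: read the SELinux enforcing state after the `setenforce 1`
sudo call logged above. Assumes selinuxfs is mounted at /sys/fs/selinux,
which is the usual location but is not confirmed by the log itself.
"""
from pathlib import Path

ENFORCE = Path("/sys/fs/selinux/enforce")

def selinux_mode() -> str:
    if not ENFORCE.exists():
        return "disabled (selinuxfs not mounted)"
    return "enforcing" if ENFORCE.read_text().strip() == "1" else "permissive"

if __name__ == "__main__":
    print(f"SELinux mode: {selinux_mode()}")
```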
May 13 00:09:28.841766 systemd[1]: Stopped audit-rules.service - Load Security Auditing Rules. May 13 00:09:28.843359 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... May 13 00:09:28.865367 augenrules[1605]: No rules May 13 00:09:28.866546 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. May 13 00:09:28.867493 sudo[1583]: pam_unix(sudo:session): session closed for user root May 13 00:09:28.868969 sshd[1580]: pam_unix(sshd:session): session closed for user core May 13 00:09:28.879442 systemd[1]: sshd@5-10.0.0.19:22-10.0.0.1:36050.service: Deactivated successfully. May 13 00:09:28.880817 systemd[1]: session-6.scope: Deactivated successfully. May 13 00:09:28.882082 systemd-logind[1425]: Session 6 logged out. Waiting for processes to exit. May 13 00:09:28.891430 systemd[1]: Started sshd@6-10.0.0.19:22-10.0.0.1:36066.service - OpenSSH per-connection server daemon (10.0.0.1:36066). May 13 00:09:28.892264 systemd-logind[1425]: Removed session 6. May 13 00:09:28.922334 sshd[1613]: Accepted publickey for core from 10.0.0.1 port 36066 ssh2: RSA SHA256:ilwLBGyeejLKSU0doRti0j2W4iQ88Tp+35jhkd0iwiU May 13 00:09:28.923503 sshd[1613]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 13 00:09:28.926909 systemd-logind[1425]: New session 7 of user core. May 13 00:09:28.938340 systemd[1]: Started session-7.scope - Session 7 of User core. May 13 00:09:28.987917 sudo[1616]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh May 13 00:09:28.988536 sudo[1616]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) May 13 00:09:29.008470 systemd[1]: Starting coreos-metadata.service - QEMU metadata agent... May 13 00:09:29.022317 systemd[1]: coreos-metadata.service: Deactivated successfully. May 13 00:09:29.022528 systemd[1]: Finished coreos-metadata.service - QEMU metadata agent. May 13 00:09:29.423229 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. May 13 00:09:29.433390 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... May 13 00:09:29.452909 systemd[1]: Reloading requested from client PID 1658 ('systemctl') (unit session-7.scope)... May 13 00:09:29.452924 systemd[1]: Reloading... May 13 00:09:29.521328 zram_generator::config[1696]: No configuration found. May 13 00:09:29.666159 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. May 13 00:09:29.718531 systemd[1]: Reloading finished in 265 ms. May 13 00:09:29.755737 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. May 13 00:09:29.757066 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... May 13 00:09:29.759582 systemd[1]: kubelet.service: Deactivated successfully. May 13 00:09:29.759783 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. May 13 00:09:29.761406 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... May 13 00:09:29.849018 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. May 13 00:09:29.852945 (kubelet)[1743]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS May 13 00:09:29.891818 kubelet[1743]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. 
See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. May 13 00:09:29.891818 kubelet[1743]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. May 13 00:09:29.891818 kubelet[1743]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. May 13 00:09:29.892279 kubelet[1743]: I0513 00:09:29.891988 1743 server.go:206] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" May 13 00:09:31.215210 kubelet[1743]: I0513 00:09:31.213927 1743 server.go:486] "Kubelet version" kubeletVersion="v1.31.0" May 13 00:09:31.215210 kubelet[1743]: I0513 00:09:31.213963 1743 server.go:488] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" May 13 00:09:31.215210 kubelet[1743]: I0513 00:09:31.214208 1743 server.go:929] "Client rotation is on, will bootstrap in background" May 13 00:09:31.249536 kubelet[1743]: I0513 00:09:31.249489 1743 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" May 13 00:09:31.259232 kubelet[1743]: E0513 00:09:31.259176 1743 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" May 13 00:09:31.259232 kubelet[1743]: I0513 00:09:31.259221 1743 server.go:1403] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." May 13 00:09:31.262492 kubelet[1743]: I0513 00:09:31.262470 1743 server.go:744] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" May 13 00:09:31.262795 kubelet[1743]: I0513 00:09:31.262774 1743 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority" May 13 00:09:31.262932 kubelet[1743]: I0513 00:09:31.262888 1743 container_manager_linux.go:264] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] May 13 00:09:31.263095 kubelet[1743]: I0513 00:09:31.262922 1743 container_manager_linux.go:269] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"10.0.0.19","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} May 13 00:09:31.263247 kubelet[1743]: I0513 00:09:31.263235 1743 topology_manager.go:138] "Creating topology manager with none policy" May 13 00:09:31.263283 kubelet[1743]: I0513 00:09:31.263249 1743 container_manager_linux.go:300] "Creating device plugin manager" May 13 00:09:31.263436 kubelet[1743]: I0513 00:09:31.263423 1743 state_mem.go:36] "Initialized new in-memory state store" May 13 00:09:31.265046 kubelet[1743]: I0513 00:09:31.265015 1743 kubelet.go:408] "Attempting to sync node with API server" May 13 00:09:31.265046 kubelet[1743]: I0513 00:09:31.265047 1743 kubelet.go:303] "Adding static pod path" path="/etc/kubernetes/manifests" May 13 00:09:31.265147 kubelet[1743]: I0513 00:09:31.265132 1743 kubelet.go:314] "Adding apiserver pod source" May 13 00:09:31.265178 kubelet[1743]: I0513 00:09:31.265150 1743 apiserver.go:42] "Waiting for node sync before watching apiserver pods" May 13 00:09:31.265312 kubelet[1743]: E0513 00:09:31.265259 1743 file.go:98] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 13 00:09:31.265362 kubelet[1743]: E0513 00:09:31.265314 1743 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 13 00:09:31.267155 kubelet[1743]: I0513 00:09:31.267033 1743 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" May 13 00:09:31.268744 kubelet[1743]: I0513 00:09:31.268725 1743 kubelet.go:837] 
"Not starting ClusterTrustBundle informer because we are in static kubelet mode" May 13 00:09:31.269566 kubelet[1743]: W0513 00:09:31.269524 1743 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. May 13 00:09:31.272756 kubelet[1743]: I0513 00:09:31.270229 1743 server.go:1269] "Started kubelet" May 13 00:09:31.272756 kubelet[1743]: I0513 00:09:31.270507 1743 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 May 13 00:09:31.272756 kubelet[1743]: I0513 00:09:31.271968 1743 server.go:460] "Adding debug handlers to kubelet server" May 13 00:09:31.272756 kubelet[1743]: I0513 00:09:31.272616 1743 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" May 13 00:09:31.272756 kubelet[1743]: I0513 00:09:31.272682 1743 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 May 13 00:09:31.272912 kubelet[1743]: I0513 00:09:31.272880 1743 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" May 13 00:09:31.273104 kubelet[1743]: I0513 00:09:31.273081 1743 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" May 13 00:09:31.276150 kubelet[1743]: W0513 00:09:31.275864 1743 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:anonymous" cannot list resource "services" in API group "" at the cluster scope May 13 00:09:31.276150 kubelet[1743]: I0513 00:09:31.276088 1743 volume_manager.go:289] "Starting Kubelet Volume Manager" May 13 00:09:31.276588 kubelet[1743]: I0513 00:09:31.276172 1743 desired_state_of_world_populator.go:146] "Desired state populator starts to run" May 13 00:09:31.276588 kubelet[1743]: E0513 00:09:31.276085 1743 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User \"system:anonymous\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" May 13 00:09:31.276588 kubelet[1743]: I0513 00:09:31.276267 1743 reconciler.go:26] "Reconciler: start to sync state" May 13 00:09:31.276665 kubelet[1743]: I0513 00:09:31.276584 1743 factory.go:221] Registration of the systemd container factory successfully May 13 00:09:31.276693 kubelet[1743]: I0513 00:09:31.276662 1743 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory May 13 00:09:31.277069 kubelet[1743]: E0513 00:09:31.276768 1743 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"10.0.0.19\" not found" May 13 00:09:31.279065 kubelet[1743]: E0513 00:09:31.277719 1743 kubelet.go:1478] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" May 13 00:09:31.279448 kubelet[1743]: E0513 00:09:31.279324 1743 controller.go:145] "Failed to ensure lease exists, will retry" err="leases.coordination.k8s.io \"10.0.0.19\" is forbidden: User \"system:anonymous\" cannot get resource \"leases\" in API group \"coordination.k8s.io\" in the namespace \"kube-node-lease\"" interval="200ms" May 13 00:09:31.279448 kubelet[1743]: W0513 00:09:31.279379 1743 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:anonymous" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope May 13 00:09:31.279448 kubelet[1743]: E0513 00:09:31.279397 1743 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:anonymous\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" May 13 00:09:31.279448 kubelet[1743]: W0513 00:09:31.279448 1743 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes "10.0.0.19" is forbidden: User "system:anonymous" cannot list resource "nodes" in API group "" at the cluster scope May 13 00:09:31.279448 kubelet[1743]: E0513 00:09:31.279460 1743 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes \"10.0.0.19\" is forbidden: User \"system:anonymous\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" May 13 00:09:31.280275 kubelet[1743]: I0513 00:09:31.280094 1743 factory.go:221] Registration of the containerd container factory successfully May 13 00:09:31.288241 kubelet[1743]: I0513 00:09:31.288220 1743 cpu_manager.go:214] "Starting CPU manager" policy="none" May 13 00:09:31.288614 kubelet[1743]: I0513 00:09:31.288399 1743 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" May 13 00:09:31.288614 kubelet[1743]: I0513 00:09:31.288421 1743 state_mem.go:36] "Initialized new in-memory state store" May 13 00:09:31.356119 kubelet[1743]: I0513 00:09:31.356080 1743 policy_none.go:49] "None policy: Start" May 13 00:09:31.357223 kubelet[1743]: I0513 00:09:31.357149 1743 memory_manager.go:170] "Starting memorymanager" policy="None" May 13 00:09:31.357223 kubelet[1743]: I0513 00:09:31.357176 1743 state_mem.go:35] "Initializing new in-memory state store" May 13 00:09:31.370289 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. May 13 00:09:31.376965 kubelet[1743]: E0513 00:09:31.376937 1743 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"10.0.0.19\" not found" May 13 00:09:31.379046 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. May 13 00:09:31.386062 kubelet[1743]: I0513 00:09:31.386017 1743 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" May 13 00:09:31.387340 kubelet[1743]: I0513 00:09:31.387243 1743 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" May 13 00:09:31.387340 kubelet[1743]: I0513 00:09:31.387267 1743 status_manager.go:217] "Starting to sync pod status with apiserver" May 13 00:09:31.387340 kubelet[1743]: I0513 00:09:31.387285 1743 kubelet.go:2321] "Starting kubelet main sync loop" May 13 00:09:31.387452 kubelet[1743]: E0513 00:09:31.387377 1743 kubelet.go:2345] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" May 13 00:09:31.390514 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. May 13 00:09:31.391647 kubelet[1743]: I0513 00:09:31.391612 1743 manager.go:510] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" May 13 00:09:31.391803 kubelet[1743]: I0513 00:09:31.391783 1743 eviction_manager.go:189] "Eviction manager: starting control loop" May 13 00:09:31.391855 kubelet[1743]: I0513 00:09:31.391794 1743 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" May 13 00:09:31.391994 kubelet[1743]: I0513 00:09:31.391971 1743 plugin_manager.go:118] "Starting Kubelet Plugin Manager" May 13 00:09:31.395282 kubelet[1743]: E0513 00:09:31.395258 1743 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"10.0.0.19\" not found" May 13 00:09:31.484734 kubelet[1743]: E0513 00:09:31.484602 1743 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"10.0.0.19\" not found" node="10.0.0.19" May 13 00:09:31.492674 kubelet[1743]: I0513 00:09:31.492644 1743 kubelet_node_status.go:72] "Attempting to register node" node="10.0.0.19" May 13 00:09:31.496729 kubelet[1743]: I0513 00:09:31.496699 1743 kubelet_node_status.go:75] "Successfully registered node" node="10.0.0.19" May 13 00:09:31.496729 kubelet[1743]: E0513 00:09:31.496726 1743 kubelet_node_status.go:535] "Error updating node status, will retry" err="error getting node \"10.0.0.19\": node \"10.0.0.19\" not found" May 13 00:09:31.514089 kubelet[1743]: E0513 00:09:31.514045 1743 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"10.0.0.19\" not found" May 13 00:09:31.614212 kubelet[1743]: E0513 00:09:31.614149 1743 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"10.0.0.19\" not found" May 13 00:09:31.714775 kubelet[1743]: E0513 00:09:31.714744 1743 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"10.0.0.19\" not found" May 13 00:09:31.815800 kubelet[1743]: E0513 00:09:31.815698 1743 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"10.0.0.19\" not found" May 13 00:09:31.869733 sudo[1616]: pam_unix(sudo:session): session closed for user root May 13 00:09:31.871276 sshd[1613]: pam_unix(sshd:session): session closed for user core May 13 00:09:31.874093 systemd[1]: sshd@6-10.0.0.19:22-10.0.0.1:36066.service: Deactivated successfully. May 13 00:09:31.875565 systemd[1]: session-7.scope: Deactivated successfully. May 13 00:09:31.877018 systemd-logind[1425]: Session 7 logged out. Waiting for processes to exit. May 13 00:09:31.877903 systemd-logind[1425]: Removed session 7. 
May 13 00:09:31.916261 kubelet[1743]: E0513 00:09:31.916223 1743 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"10.0.0.19\" not found" May 13 00:09:32.016891 kubelet[1743]: E0513 00:09:32.016850 1743 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"10.0.0.19\" not found" May 13 00:09:32.117521 kubelet[1743]: E0513 00:09:32.117422 1743 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"10.0.0.19\" not found" May 13 00:09:32.217102 kubelet[1743]: I0513 00:09:32.217064 1743 transport.go:147] "Certificate rotation detected, shutting down client connections to start using new credentials" May 13 00:09:32.217564 kubelet[1743]: W0513 00:09:32.217234 1743 reflector.go:484] k8s.io/client-go/informers/factory.go:160: watch of *v1.RuntimeClass ended with: very short watch: k8s.io/client-go/informers/factory.go:160: Unexpected watch close - watch lasted less than a second and no items received May 13 00:09:32.218218 kubelet[1743]: E0513 00:09:32.218160 1743 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"10.0.0.19\" not found" May 13 00:09:32.265540 kubelet[1743]: E0513 00:09:32.265503 1743 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 13 00:09:32.318929 kubelet[1743]: E0513 00:09:32.318890 1743 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"10.0.0.19\" not found" May 13 00:09:32.419933 kubelet[1743]: E0513 00:09:32.419829 1743 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"10.0.0.19\" not found" May 13 00:09:32.520534 kubelet[1743]: E0513 00:09:32.520486 1743 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"10.0.0.19\" not found" May 13 00:09:32.620630 kubelet[1743]: E0513 00:09:32.620593 1743 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"10.0.0.19\" not found" May 13 00:09:32.721310 kubelet[1743]: E0513 00:09:32.721171 1743 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"10.0.0.19\" not found" May 13 00:09:32.822875 kubelet[1743]: I0513 00:09:32.822848 1743 kuberuntime_manager.go:1633] "Updating runtime config through cri with podcidr" CIDR="192.168.1.0/24" May 13 00:09:32.823187 containerd[1448]: time="2025-05-13T00:09:32.823131548Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." 
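At 00:09:32 the kubelet pushes pod CIDR 192.168.1.0/24 to the runtime over CRI, while containerd keeps waiting for a network config to appear in /etc/cni/net.d (on this node that config is eventually provided by Calico, per the calico-node pod that follows). Purely as an illustration of the kind of file containerd is waiting for, a minimal bridge/host-local conflist built around that CIDR could look like the sketch below; the plugin choice and the network name are assumptions, not what Calico writes.

```python
#!/usr/bin/env python3
"""Illustration only: a minimal CNI conflist built around the pod CIDR
(192.168.1.0/24) that the kubelet pushes over CRI above. The real network
config on this node is dropped by Calico, so the bridge/host-local choice
here is an assumption, not what actually lands in /etc/cni/net.d.
"""
import json

POD_CIDR = "192.168.1.0/24"  # from the "Updating Pod CIDR" log entry

conflist = {
    "cniVersion": "0.3.1",
    "name": "example-pod-network",      # hypothetical name
    "plugins": [
        {
            "type": "bridge",
            "bridge": "cni0",
            "isGateway": True,
            "ipMasq": True,
            "ipam": {
                "type": "host-local",
                "subnet": POD_CIDR,
                "routes": [{"dst": "0.0.0.0/0"}],
            },
        },
        {"type": "portmap", "capabilities": {"portMappings": True}},
    ],
}

if __name__ == "__main__":
    # In the sketch this would be saved as /etc/cni/net.d/10-example.conflist.
    print(json.dumps(conflist, indent=2))
```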
May 13 00:09:32.823472 kubelet[1743]: I0513 00:09:32.823340 1743 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.1.0/24" May 13 00:09:33.266126 kubelet[1743]: I0513 00:09:33.266049 1743 apiserver.go:52] "Watching apiserver" May 13 00:09:33.266126 kubelet[1743]: E0513 00:09:33.266095 1743 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 13 00:09:33.270404 kubelet[1743]: E0513 00:09:33.269833 1743 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-lgq75" podUID="4939c0f9-f198-4e87-9dc5-adbf021d03cf" May 13 00:09:33.276370 systemd[1]: Created slice kubepods-besteffort-pode29a3be0_8217_4512_9ff7_5fc87bbaa230.slice - libcontainer container kubepods-besteffort-pode29a3be0_8217_4512_9ff7_5fc87bbaa230.slice. May 13 00:09:33.277155 kubelet[1743]: I0513 00:09:33.276586 1743 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world" May 13 00:09:33.286551 kubelet[1743]: I0513 00:09:33.286511 1743 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/73bd53a4-a3ab-466d-8805-a93c5258c57e-xtables-lock\") pod \"calico-node-mtx68\" (UID: \"73bd53a4-a3ab-466d-8805-a93c5258c57e\") " pod="calico-system/calico-node-mtx68" May 13 00:09:33.286551 kubelet[1743]: I0513 00:09:33.286550 1743 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/73bd53a4-a3ab-466d-8805-a93c5258c57e-var-lib-calico\") pod \"calico-node-mtx68\" (UID: \"73bd53a4-a3ab-466d-8805-a93c5258c57e\") " pod="calico-system/calico-node-mtx68" May 13 00:09:33.286551 kubelet[1743]: I0513 00:09:33.286569 1743 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/73bd53a4-a3ab-466d-8805-a93c5258c57e-flexvol-driver-host\") pod \"calico-node-mtx68\" (UID: \"73bd53a4-a3ab-466d-8805-a93c5258c57e\") " pod="calico-system/calico-node-mtx68" May 13 00:09:33.286750 kubelet[1743]: I0513 00:09:33.286593 1743 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d68fn\" (UniqueName: \"kubernetes.io/projected/73bd53a4-a3ab-466d-8805-a93c5258c57e-kube-api-access-d68fn\") pod \"calico-node-mtx68\" (UID: \"73bd53a4-a3ab-466d-8805-a93c5258c57e\") " pod="calico-system/calico-node-mtx68" May 13 00:09:33.286750 kubelet[1743]: I0513 00:09:33.286610 1743 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/4939c0f9-f198-4e87-9dc5-adbf021d03cf-registration-dir\") pod \"csi-node-driver-lgq75\" (UID: \"4939c0f9-f198-4e87-9dc5-adbf021d03cf\") " pod="calico-system/csi-node-driver-lgq75" May 13 00:09:33.286750 kubelet[1743]: I0513 00:09:33.286628 1743 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hqt8v\" (UniqueName: \"kubernetes.io/projected/4939c0f9-f198-4e87-9dc5-adbf021d03cf-kube-api-access-hqt8v\") pod \"csi-node-driver-lgq75\" (UID: \"4939c0f9-f198-4e87-9dc5-adbf021d03cf\") " 
pod="calico-system/csi-node-driver-lgq75" May 13 00:09:33.286750 kubelet[1743]: I0513 00:09:33.286644 1743 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/73bd53a4-a3ab-466d-8805-a93c5258c57e-var-run-calico\") pod \"calico-node-mtx68\" (UID: \"73bd53a4-a3ab-466d-8805-a93c5258c57e\") " pod="calico-system/calico-node-mtx68" May 13 00:09:33.286750 kubelet[1743]: I0513 00:09:33.286660 1743 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/73bd53a4-a3ab-466d-8805-a93c5258c57e-cni-net-dir\") pod \"calico-node-mtx68\" (UID: \"73bd53a4-a3ab-466d-8805-a93c5258c57e\") " pod="calico-system/calico-node-mtx68" May 13 00:09:33.286856 kubelet[1743]: I0513 00:09:33.286675 1743 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/73bd53a4-a3ab-466d-8805-a93c5258c57e-cni-log-dir\") pod \"calico-node-mtx68\" (UID: \"73bd53a4-a3ab-466d-8805-a93c5258c57e\") " pod="calico-system/calico-node-mtx68" May 13 00:09:33.286856 kubelet[1743]: I0513 00:09:33.286699 1743 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/4939c0f9-f198-4e87-9dc5-adbf021d03cf-kubelet-dir\") pod \"csi-node-driver-lgq75\" (UID: \"4939c0f9-f198-4e87-9dc5-adbf021d03cf\") " pod="calico-system/csi-node-driver-lgq75" May 13 00:09:33.286856 kubelet[1743]: I0513 00:09:33.286714 1743 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/4939c0f9-f198-4e87-9dc5-adbf021d03cf-socket-dir\") pod \"csi-node-driver-lgq75\" (UID: \"4939c0f9-f198-4e87-9dc5-adbf021d03cf\") " pod="calico-system/csi-node-driver-lgq75" May 13 00:09:33.286856 kubelet[1743]: I0513 00:09:33.286731 1743 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/e29a3be0-8217-4512-9ff7-5fc87bbaa230-lib-modules\") pod \"kube-proxy-5kzk6\" (UID: \"e29a3be0-8217-4512-9ff7-5fc87bbaa230\") " pod="kube-system/kube-proxy-5kzk6" May 13 00:09:33.286856 kubelet[1743]: I0513 00:09:33.286747 1743 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/73bd53a4-a3ab-466d-8805-a93c5258c57e-tigera-ca-bundle\") pod \"calico-node-mtx68\" (UID: \"73bd53a4-a3ab-466d-8805-a93c5258c57e\") " pod="calico-system/calico-node-mtx68" May 13 00:09:33.286948 kubelet[1743]: I0513 00:09:33.286765 1743 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/73bd53a4-a3ab-466d-8805-a93c5258c57e-node-certs\") pod \"calico-node-mtx68\" (UID: \"73bd53a4-a3ab-466d-8805-a93c5258c57e\") " pod="calico-system/calico-node-mtx68" May 13 00:09:33.286948 kubelet[1743]: I0513 00:09:33.286783 1743 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/73bd53a4-a3ab-466d-8805-a93c5258c57e-cni-bin-dir\") pod \"calico-node-mtx68\" (UID: \"73bd53a4-a3ab-466d-8805-a93c5258c57e\") " pod="calico-system/calico-node-mtx68" May 13 00:09:33.286948 kubelet[1743]: I0513 00:09:33.286798 
1743 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-52cjs\" (UniqueName: \"kubernetes.io/projected/e29a3be0-8217-4512-9ff7-5fc87bbaa230-kube-api-access-52cjs\") pod \"kube-proxy-5kzk6\" (UID: \"e29a3be0-8217-4512-9ff7-5fc87bbaa230\") " pod="kube-system/kube-proxy-5kzk6" May 13 00:09:33.286948 kubelet[1743]: I0513 00:09:33.286813 1743 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/73bd53a4-a3ab-466d-8805-a93c5258c57e-lib-modules\") pod \"calico-node-mtx68\" (UID: \"73bd53a4-a3ab-466d-8805-a93c5258c57e\") " pod="calico-system/calico-node-mtx68" May 13 00:09:33.286948 kubelet[1743]: I0513 00:09:33.286832 1743 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/73bd53a4-a3ab-466d-8805-a93c5258c57e-policysync\") pod \"calico-node-mtx68\" (UID: \"73bd53a4-a3ab-466d-8805-a93c5258c57e\") " pod="calico-system/calico-node-mtx68" May 13 00:09:33.287045 kubelet[1743]: I0513 00:09:33.286846 1743 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/4939c0f9-f198-4e87-9dc5-adbf021d03cf-varrun\") pod \"csi-node-driver-lgq75\" (UID: \"4939c0f9-f198-4e87-9dc5-adbf021d03cf\") " pod="calico-system/csi-node-driver-lgq75" May 13 00:09:33.287045 kubelet[1743]: I0513 00:09:33.286861 1743 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/e29a3be0-8217-4512-9ff7-5fc87bbaa230-kube-proxy\") pod \"kube-proxy-5kzk6\" (UID: \"e29a3be0-8217-4512-9ff7-5fc87bbaa230\") " pod="kube-system/kube-proxy-5kzk6" May 13 00:09:33.287045 kubelet[1743]: I0513 00:09:33.286875 1743 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/e29a3be0-8217-4512-9ff7-5fc87bbaa230-xtables-lock\") pod \"kube-proxy-5kzk6\" (UID: \"e29a3be0-8217-4512-9ff7-5fc87bbaa230\") " pod="kube-system/kube-proxy-5kzk6" May 13 00:09:33.301516 systemd[1]: Created slice kubepods-besteffort-pod73bd53a4_a3ab_466d_8805_a93c5258c57e.slice - libcontainer container kubepods-besteffort-pod73bd53a4_a3ab_466d_8805_a93c5258c57e.slice. May 13 00:09:33.387778 kubelet[1743]: E0513 00:09:33.387727 1743 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 00:09:33.387778 kubelet[1743]: W0513 00:09:33.387751 1743 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 00:09:33.387778 kubelet[1743]: E0513 00:09:33.387779 1743 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 13 00:09:33.388113 kubelet[1743]: E0513 00:09:33.387942 1743 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 00:09:33.388113 kubelet[1743]: W0513 00:09:33.387959 1743 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 00:09:33.388113 kubelet[1743]: E0513 00:09:33.387974 1743 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 13 00:09:33.388366 kubelet[1743]: E0513 00:09:33.388175 1743 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 00:09:33.388366 kubelet[1743]: W0513 00:09:33.388316 1743 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 00:09:33.388366 kubelet[1743]: E0513 00:09:33.388343 1743 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 13 00:09:33.388522 kubelet[1743]: E0513 00:09:33.388487 1743 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 00:09:33.388522 kubelet[1743]: W0513 00:09:33.388504 1743 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 00:09:33.388522 kubelet[1743]: E0513 00:09:33.388520 1743 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 13 00:09:33.388665 kubelet[1743]: E0513 00:09:33.388654 1743 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 00:09:33.388665 kubelet[1743]: W0513 00:09:33.388664 1743 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 00:09:33.388778 kubelet[1743]: E0513 00:09:33.388741 1743 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 13 00:09:33.388875 kubelet[1743]: E0513 00:09:33.388864 1743 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 00:09:33.388875 kubelet[1743]: W0513 00:09:33.388874 1743 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 00:09:33.388985 kubelet[1743]: E0513 00:09:33.388922 1743 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 13 00:09:33.389269 kubelet[1743]: E0513 00:09:33.389239 1743 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 00:09:33.389269 kubelet[1743]: W0513 00:09:33.389256 1743 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 00:09:33.389674 kubelet[1743]: E0513 00:09:33.389377 1743 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 13 00:09:33.389674 kubelet[1743]: E0513 00:09:33.389439 1743 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 00:09:33.389674 kubelet[1743]: W0513 00:09:33.389447 1743 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 00:09:33.389674 kubelet[1743]: E0513 00:09:33.389467 1743 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 13 00:09:33.389674 kubelet[1743]: E0513 00:09:33.389599 1743 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 00:09:33.389674 kubelet[1743]: W0513 00:09:33.389606 1743 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 00:09:33.389674 kubelet[1743]: E0513 00:09:33.389626 1743 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 13 00:09:33.389915 kubelet[1743]: E0513 00:09:33.389788 1743 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 00:09:33.389915 kubelet[1743]: W0513 00:09:33.389797 1743 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 00:09:33.389915 kubelet[1743]: E0513 00:09:33.389832 1743 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 13 00:09:33.390120 kubelet[1743]: E0513 00:09:33.390073 1743 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 00:09:33.390120 kubelet[1743]: W0513 00:09:33.390086 1743 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 00:09:33.390262 kubelet[1743]: E0513 00:09:33.390168 1743 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 13 00:09:33.390301 kubelet[1743]: E0513 00:09:33.390284 1743 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 00:09:33.390301 kubelet[1743]: W0513 00:09:33.390296 1743 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 00:09:33.390343 kubelet[1743]: E0513 00:09:33.390314 1743 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 13 00:09:33.390459 kubelet[1743]: E0513 00:09:33.390433 1743 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 00:09:33.390459 kubelet[1743]: W0513 00:09:33.390444 1743 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 00:09:33.390523 kubelet[1743]: E0513 00:09:33.390463 1743 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 13 00:09:33.390576 kubelet[1743]: E0513 00:09:33.390564 1743 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 00:09:33.390576 kubelet[1743]: W0513 00:09:33.390573 1743 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 00:09:33.390618 kubelet[1743]: E0513 00:09:33.390590 1743 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 13 00:09:33.390707 kubelet[1743]: E0513 00:09:33.390696 1743 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 00:09:33.390733 kubelet[1743]: W0513 00:09:33.390706 1743 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 00:09:33.390788 kubelet[1743]: E0513 00:09:33.390774 1743 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 13 00:09:33.390841 kubelet[1743]: E0513 00:09:33.390831 1743 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 00:09:33.390865 kubelet[1743]: W0513 00:09:33.390841 1743 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 00:09:33.390986 kubelet[1743]: E0513 00:09:33.390905 1743 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 13 00:09:33.391066 kubelet[1743]: E0513 00:09:33.391052 1743 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 00:09:33.391066 kubelet[1743]: W0513 00:09:33.391064 1743 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 00:09:33.391150 kubelet[1743]: E0513 00:09:33.391138 1743 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 13 00:09:33.391253 kubelet[1743]: E0513 00:09:33.391243 1743 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 00:09:33.391279 kubelet[1743]: W0513 00:09:33.391253 1743 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 00:09:33.391299 kubelet[1743]: E0513 00:09:33.391286 1743 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 13 00:09:33.391455 kubelet[1743]: E0513 00:09:33.391430 1743 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 00:09:33.391455 kubelet[1743]: W0513 00:09:33.391442 1743 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 00:09:33.391513 kubelet[1743]: E0513 00:09:33.391505 1743 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 13 00:09:33.391586 kubelet[1743]: E0513 00:09:33.391574 1743 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 00:09:33.391586 kubelet[1743]: W0513 00:09:33.391584 1743 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 00:09:33.391625 kubelet[1743]: E0513 00:09:33.391605 1743 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 13 00:09:33.391741 kubelet[1743]: E0513 00:09:33.391730 1743 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 00:09:33.391775 kubelet[1743]: W0513 00:09:33.391740 1743 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 00:09:33.391831 kubelet[1743]: E0513 00:09:33.391819 1743 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 13 00:09:33.391885 kubelet[1743]: E0513 00:09:33.391876 1743 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 00:09:33.391909 kubelet[1743]: W0513 00:09:33.391885 1743 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 00:09:33.391968 kubelet[1743]: E0513 00:09:33.391957 1743 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 13 00:09:33.392037 kubelet[1743]: E0513 00:09:33.392028 1743 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 00:09:33.392060 kubelet[1743]: W0513 00:09:33.392037 1743 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 00:09:33.392174 kubelet[1743]: E0513 00:09:33.392100 1743 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 13 00:09:33.392255 kubelet[1743]: E0513 00:09:33.392244 1743 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 00:09:33.392255 kubelet[1743]: W0513 00:09:33.392253 1743 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 00:09:33.392351 kubelet[1743]: E0513 00:09:33.392338 1743 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 13 00:09:33.392412 kubelet[1743]: E0513 00:09:33.392402 1743 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 00:09:33.392437 kubelet[1743]: W0513 00:09:33.392412 1743 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 00:09:33.392484 kubelet[1743]: E0513 00:09:33.392472 1743 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 13 00:09:33.392548 kubelet[1743]: E0513 00:09:33.392538 1743 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 00:09:33.392568 kubelet[1743]: W0513 00:09:33.392548 1743 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 00:09:33.392699 kubelet[1743]: E0513 00:09:33.392636 1743 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 13 00:09:33.392790 kubelet[1743]: E0513 00:09:33.392777 1743 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 00:09:33.392790 kubelet[1743]: W0513 00:09:33.392788 1743 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 00:09:33.392873 kubelet[1743]: E0513 00:09:33.392861 1743 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 13 00:09:33.392960 kubelet[1743]: E0513 00:09:33.392949 1743 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 00:09:33.392986 kubelet[1743]: W0513 00:09:33.392961 1743 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 00:09:33.393017 kubelet[1743]: E0513 00:09:33.393005 1743 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 13 00:09:33.393177 kubelet[1743]: E0513 00:09:33.393166 1743 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 00:09:33.393177 kubelet[1743]: W0513 00:09:33.393175 1743 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 00:09:33.393264 kubelet[1743]: E0513 00:09:33.393251 1743 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 13 00:09:33.393414 kubelet[1743]: E0513 00:09:33.393403 1743 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 00:09:33.393441 kubelet[1743]: W0513 00:09:33.393413 1743 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 00:09:33.393494 kubelet[1743]: E0513 00:09:33.393480 1743 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 13 00:09:33.393578 kubelet[1743]: E0513 00:09:33.393565 1743 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 00:09:33.393578 kubelet[1743]: W0513 00:09:33.393576 1743 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 00:09:33.393633 kubelet[1743]: E0513 00:09:33.393617 1743 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 13 00:09:33.393748 kubelet[1743]: E0513 00:09:33.393737 1743 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 00:09:33.393748 kubelet[1743]: W0513 00:09:33.393746 1743 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 00:09:33.393811 kubelet[1743]: E0513 00:09:33.393792 1743 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 13 00:09:33.393934 kubelet[1743]: E0513 00:09:33.393921 1743 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 00:09:33.393934 kubelet[1743]: W0513 00:09:33.393930 1743 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 00:09:33.394049 kubelet[1743]: E0513 00:09:33.394013 1743 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 13 00:09:33.394099 kubelet[1743]: E0513 00:09:33.394084 1743 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 00:09:33.394099 kubelet[1743]: W0513 00:09:33.394096 1743 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 00:09:33.394156 kubelet[1743]: E0513 00:09:33.394146 1743 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 13 00:09:33.394336 kubelet[1743]: E0513 00:09:33.394321 1743 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 00:09:33.394336 kubelet[1743]: W0513 00:09:33.394332 1743 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 00:09:33.394778 kubelet[1743]: E0513 00:09:33.394424 1743 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 13 00:09:33.394778 kubelet[1743]: E0513 00:09:33.394498 1743 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 00:09:33.394778 kubelet[1743]: W0513 00:09:33.394506 1743 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 00:09:33.394778 kubelet[1743]: E0513 00:09:33.394546 1743 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 13 00:09:33.394778 kubelet[1743]: E0513 00:09:33.394682 1743 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 00:09:33.394778 kubelet[1743]: W0513 00:09:33.394689 1743 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 00:09:33.394946 kubelet[1743]: E0513 00:09:33.394786 1743 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 13 00:09:33.395619 kubelet[1743]: E0513 00:09:33.395588 1743 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 00:09:33.395619 kubelet[1743]: W0513 00:09:33.395607 1743 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 00:09:33.395858 kubelet[1743]: E0513 00:09:33.395842 1743 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 00:09:33.395858 kubelet[1743]: W0513 00:09:33.395855 1743 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 00:09:33.396021 kubelet[1743]: E0513 00:09:33.395965 1743 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 13 00:09:33.396021 kubelet[1743]: E0513 00:09:33.395997 1743 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 13 00:09:33.396091 kubelet[1743]: E0513 00:09:33.396050 1743 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 00:09:33.396091 kubelet[1743]: W0513 00:09:33.396059 1743 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 00:09:33.396174 kubelet[1743]: E0513 00:09:33.396160 1743 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 13 00:09:33.396278 kubelet[1743]: E0513 00:09:33.396265 1743 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 00:09:33.396278 kubelet[1743]: W0513 00:09:33.396278 1743 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 00:09:33.396572 kubelet[1743]: E0513 00:09:33.396324 1743 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 13 00:09:33.397132 kubelet[1743]: E0513 00:09:33.397110 1743 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 00:09:33.397132 kubelet[1743]: W0513 00:09:33.397126 1743 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 00:09:33.397234 kubelet[1743]: E0513 00:09:33.397146 1743 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 13 00:09:33.398317 kubelet[1743]: E0513 00:09:33.398284 1743 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 00:09:33.398317 kubelet[1743]: W0513 00:09:33.398305 1743 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 00:09:33.398317 kubelet[1743]: E0513 00:09:33.398320 1743 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 13 00:09:33.400282 kubelet[1743]: E0513 00:09:33.400259 1743 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 00:09:33.400282 kubelet[1743]: W0513 00:09:33.400276 1743 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 00:09:33.400366 kubelet[1743]: E0513 00:09:33.400290 1743 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 13 00:09:33.401250 kubelet[1743]: E0513 00:09:33.400940 1743 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 00:09:33.401250 kubelet[1743]: W0513 00:09:33.400953 1743 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 00:09:33.401250 kubelet[1743]: E0513 00:09:33.400966 1743 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 13 00:09:33.407774 kubelet[1743]: E0513 00:09:33.407706 1743 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 00:09:33.407774 kubelet[1743]: W0513 00:09:33.407725 1743 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 00:09:33.407774 kubelet[1743]: E0513 00:09:33.407740 1743 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 13 00:09:33.599228 kubelet[1743]: E0513 00:09:33.599077 1743 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 13 00:09:33.599983 containerd[1448]: time="2025-05-13T00:09:33.599944478Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-5kzk6,Uid:e29a3be0-8217-4512-9ff7-5fc87bbaa230,Namespace:kube-system,Attempt:0,}" May 13 00:09:33.604211 kubelet[1743]: E0513 00:09:33.604142 1743 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 13 00:09:33.604591 containerd[1448]: time="2025-05-13T00:09:33.604558961Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-mtx68,Uid:73bd53a4-a3ab-466d-8805-a93c5258c57e,Namespace:calico-system,Attempt:0,}" May 13 00:09:34.160669 containerd[1448]: time="2025-05-13T00:09:34.160623470Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" May 13 00:09:34.161231 containerd[1448]: time="2025-05-13T00:09:34.161161481Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=269175" May 13 00:09:34.162204 containerd[1448]: time="2025-05-13T00:09:34.161897764Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" May 13 00:09:34.163271 containerd[1448]: time="2025-05-13T00:09:34.163132989Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" May 13 00:09:34.163271 containerd[1448]: time="2025-05-13T00:09:34.163214531Z" level=info msg="ImageCreate event name:\"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" May 13 00:09:34.166702 containerd[1448]: time="2025-05-13T00:09:34.166653400Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" May 13 00:09:34.168174 containerd[1448]: time="2025-05-13T00:09:34.167750439Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 563.117298ms" May 13 00:09:34.169323 containerd[1448]: time="2025-05-13T00:09:34.169286053Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 569.244857ms" May 13 00:09:34.261650 containerd[1448]: time="2025-05-13T00:09:34.261559204Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 13 00:09:34.261650 containerd[1448]: time="2025-05-13T00:09:34.261616528Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 13 00:09:34.261802 containerd[1448]: time="2025-05-13T00:09:34.261664244Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 13 00:09:34.262100 containerd[1448]: time="2025-05-13T00:09:34.262042654Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 13 00:09:34.262129 containerd[1448]: time="2025-05-13T00:09:34.262096735Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 13 00:09:34.262219 containerd[1448]: time="2025-05-13T00:09:34.262195651Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 13 00:09:34.262831 containerd[1448]: time="2025-05-13T00:09:34.262689148Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 13 00:09:34.264606 containerd[1448]: time="2025-05-13T00:09:34.264518226Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 13 00:09:34.266508 kubelet[1743]: E0513 00:09:34.266477 1743 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 13 00:09:34.349381 systemd[1]: Started cri-containerd-78604d30230afbe675ceda189791cdaddee57b99dbd23ac36a1b038e99811d62.scope - libcontainer container 78604d30230afbe675ceda189791cdaddee57b99dbd23ac36a1b038e99811d62. May 13 00:09:34.350821 systemd[1]: Started cri-containerd-91b4ca1775a807d1cf3e5af595f9bb8ecbb68adf0d8ec5d538cd57d8b63c13e5.scope - libcontainer container 91b4ca1775a807d1cf3e5af595f9bb8ecbb68adf0d8ec5d538cd57d8b63c13e5. May 13 00:09:34.367503 containerd[1448]: time="2025-05-13T00:09:34.367463697Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-mtx68,Uid:73bd53a4-a3ab-466d-8805-a93c5258c57e,Namespace:calico-system,Attempt:0,} returns sandbox id \"78604d30230afbe675ceda189791cdaddee57b99dbd23ac36a1b038e99811d62\"" May 13 00:09:34.368867 kubelet[1743]: E0513 00:09:34.368843 1743 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 13 00:09:34.370436 containerd[1448]: time="2025-05-13T00:09:34.370392096Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.3\"" May 13 00:09:34.375523 containerd[1448]: time="2025-05-13T00:09:34.375494517Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-5kzk6,Uid:e29a3be0-8217-4512-9ff7-5fc87bbaa230,Namespace:kube-system,Attempt:0,} returns sandbox id \"91b4ca1775a807d1cf3e5af595f9bb8ecbb68adf0d8ec5d538cd57d8b63c13e5\"" May 13 00:09:34.376959 kubelet[1743]: E0513 00:09:34.376933 1743 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 13 00:09:34.400577 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4289925721.mount: Deactivated successfully. 
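The repeated driver-call failures above come from the kubelet's FlexVolume probe: for each driver directory under /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ it executes the driver binary with the single argument "init" and unmarshals stdout as JSON, so the missing nodeagent~uds/uds executable produces empty output and the "unexpected end of JSON input" error on every probe. Below is a minimal, hypothetical sketch of the driver contract the kubelet expects (it stands in for, and is not, the real uds binary that the owning component normally ships):

#!/usr/bin/env python3
# Hypothetical FlexVolume driver stub (not the real nodeagent~uds/uds binary).
# The kubelet invokes the driver as "<driver> init" and parses stdout as JSON;
# empty stdout is what produces "unexpected end of JSON input" in the log above.
import json
import sys

def main() -> None:
    op = sys.argv[1] if len(sys.argv) > 1 else ""
    if op == "init":
        # Report success and declare that this driver does not implement attach/detach.
        print(json.dumps({"status": "Success", "capabilities": {"attach": False}}))
    else:
        print(json.dumps({"status": "Not supported"}))

if __name__ == "__main__":
    main()

An executable like this placed at the probed path would be expected to stop the probe errors, but the real driver should come from whatever component owns the nodeagent~uds plugin directory.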
May 13 00:09:35.267491 kubelet[1743]: E0513 00:09:35.267434 1743 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 13 00:09:35.388891 kubelet[1743]: E0513 00:09:35.388491 1743 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-lgq75" podUID="4939c0f9-f198-4e87-9dc5-adbf021d03cf" May 13 00:09:35.438851 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1129278371.mount: Deactivated successfully. May 13 00:09:35.492153 containerd[1448]: time="2025-05-13T00:09:35.492106780Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 13 00:09:35.493115 containerd[1448]: time="2025-05-13T00:09:35.493085322Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.3: active requests=0, bytes read=6492223" May 13 00:09:35.493742 containerd[1448]: time="2025-05-13T00:09:35.493721257Z" level=info msg="ImageCreate event name:\"sha256:dd8e710a588cc6f5834c4d84f7e12458efae593d3dfe527ca9e757c89239ecb8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 13 00:09:35.495531 containerd[1448]: time="2025-05-13T00:09:35.495505737Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:eeaa2bb4f9b1aa61adde43ce6dea95eee89291f96963548e108d9a2dfbc5edd1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 13 00:09:35.496893 containerd[1448]: time="2025-05-13T00:09:35.496864190Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.3\" with image id \"sha256:dd8e710a588cc6f5834c4d84f7e12458efae593d3dfe527ca9e757c89239ecb8\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.3\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:eeaa2bb4f9b1aa61adde43ce6dea95eee89291f96963548e108d9a2dfbc5edd1\", size \"6492045\" in 1.12641277s" May 13 00:09:35.496956 containerd[1448]: time="2025-05-13T00:09:35.496893852Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.3\" returns image reference \"sha256:dd8e710a588cc6f5834c4d84f7e12458efae593d3dfe527ca9e757c89239ecb8\"" May 13 00:09:35.498339 containerd[1448]: time="2025-05-13T00:09:35.498315271Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.31.8\"" May 13 00:09:35.499241 containerd[1448]: time="2025-05-13T00:09:35.499212193Z" level=info msg="CreateContainer within sandbox \"78604d30230afbe675ceda189791cdaddee57b99dbd23ac36a1b038e99811d62\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}" May 13 00:09:35.510504 containerd[1448]: time="2025-05-13T00:09:35.510465500Z" level=info msg="CreateContainer within sandbox \"78604d30230afbe675ceda189791cdaddee57b99dbd23ac36a1b038e99811d62\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"5dd93d1e2d9ff7d87a2babe4eb3bc71ca7168f251031fd9f079c10e6fef54824\"" May 13 00:09:35.511065 containerd[1448]: time="2025-05-13T00:09:35.510975346Z" level=info msg="StartContainer for \"5dd93d1e2d9ff7d87a2babe4eb3bc71ca7168f251031fd9f079c10e6fef54824\"" May 13 00:09:35.542342 systemd[1]: Started cri-containerd-5dd93d1e2d9ff7d87a2babe4eb3bc71ca7168f251031fd9f079c10e6fef54824.scope - libcontainer container 
5dd93d1e2d9ff7d87a2babe4eb3bc71ca7168f251031fd9f079c10e6fef54824. May 13 00:09:35.564721 containerd[1448]: time="2025-05-13T00:09:35.564678242Z" level=info msg="StartContainer for \"5dd93d1e2d9ff7d87a2babe4eb3bc71ca7168f251031fd9f079c10e6fef54824\" returns successfully" May 13 00:09:35.583291 systemd[1]: cri-containerd-5dd93d1e2d9ff7d87a2babe4eb3bc71ca7168f251031fd9f079c10e6fef54824.scope: Deactivated successfully. May 13 00:09:35.632638 containerd[1448]: time="2025-05-13T00:09:35.632577634Z" level=info msg="shim disconnected" id=5dd93d1e2d9ff7d87a2babe4eb3bc71ca7168f251031fd9f079c10e6fef54824 namespace=k8s.io May 13 00:09:35.632638 containerd[1448]: time="2025-05-13T00:09:35.632631393Z" level=warning msg="cleaning up after shim disconnected" id=5dd93d1e2d9ff7d87a2babe4eb3bc71ca7168f251031fd9f079c10e6fef54824 namespace=k8s.io May 13 00:09:35.632638 containerd[1448]: time="2025-05-13T00:09:35.632647925Z" level=info msg="cleaning up dead shim" namespace=k8s.io May 13 00:09:35.641454 containerd[1448]: time="2025-05-13T00:09:35.641407844Z" level=warning msg="cleanup warnings time=\"2025-05-13T00:09:35Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io May 13 00:09:36.267721 kubelet[1743]: E0513 00:09:36.267598 1743 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 13 00:09:36.398041 kubelet[1743]: E0513 00:09:36.398009 1743 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 13 00:09:36.421897 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-5dd93d1e2d9ff7d87a2babe4eb3bc71ca7168f251031fd9f079c10e6fef54824-rootfs.mount: Deactivated successfully. May 13 00:09:36.429548 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2969373955.mount: Deactivated successfully. 
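The recurring dns.go "Nameserver limits exceeded" entries are the kubelet warning that the node's resolv.conf lists more nameservers than the resolver limit of three, so only the first three (here 1.1.1.1, 1.0.0.1 and 8.8.8.8) are applied to pods. A small sketch of that trimming, using a hypothetical resolv.conf with one nameserver too many:

MAX_NAMESERVERS = 3  # classic resolver limit that the kubelet applies to pod DNS config

# Hypothetical resolv.conf content; only the node's real file matters in practice.
resolv_conf = """\
nameserver 1.1.1.1
nameserver 1.0.0.1
nameserver 8.8.8.8
nameserver 9.9.9.9
"""

nameservers = [line.split()[1] for line in resolv_conf.splitlines()
               if line.strip().startswith("nameserver")]
applied = nameservers[:MAX_NAMESERVERS]
if len(nameservers) > MAX_NAMESERVERS:
    print("Nameserver limits were exceeded, some nameservers have been omitted, "
          f"the applied nameserver line is: {' '.join(applied)}")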
May 13 00:09:36.644268 containerd[1448]: time="2025-05-13T00:09:36.644110318Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.31.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 13 00:09:36.644807 containerd[1448]: time="2025-05-13T00:09:36.644767520Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.31.8: active requests=0, bytes read=26871919" May 13 00:09:36.645612 containerd[1448]: time="2025-05-13T00:09:36.645583908Z" level=info msg="ImageCreate event name:\"sha256:c5361ece77e80334cd5fb082c0b678cb3244f5834ecacea1719ae6b38b465581\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 13 00:09:36.647454 containerd[1448]: time="2025-05-13T00:09:36.647425066Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:dd0c9a37670f209947b1ed880f06a2e93e1d41da78c037f52f94b13858769838\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 13 00:09:36.648277 containerd[1448]: time="2025-05-13T00:09:36.648244496Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.31.8\" with image id \"sha256:c5361ece77e80334cd5fb082c0b678cb3244f5834ecacea1719ae6b38b465581\", repo tag \"registry.k8s.io/kube-proxy:v1.31.8\", repo digest \"registry.k8s.io/kube-proxy@sha256:dd0c9a37670f209947b1ed880f06a2e93e1d41da78c037f52f94b13858769838\", size \"26870936\" in 1.149899685s" May 13 00:09:36.648316 containerd[1448]: time="2025-05-13T00:09:36.648280761Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.31.8\" returns image reference \"sha256:c5361ece77e80334cd5fb082c0b678cb3244f5834ecacea1719ae6b38b465581\"" May 13 00:09:36.649495 containerd[1448]: time="2025-05-13T00:09:36.649470000Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.29.3\"" May 13 00:09:36.650612 containerd[1448]: time="2025-05-13T00:09:36.650531754Z" level=info msg="CreateContainer within sandbox \"91b4ca1775a807d1cf3e5af595f9bb8ecbb68adf0d8ec5d538cd57d8b63c13e5\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" May 13 00:09:36.665413 containerd[1448]: time="2025-05-13T00:09:36.665339986Z" level=info msg="CreateContainer within sandbox \"91b4ca1775a807d1cf3e5af595f9bb8ecbb68adf0d8ec5d538cd57d8b63c13e5\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"c3ab5ee3508e0ae39b79e714b1c17046062571f47b122efcee3bdea91c6999af\"" May 13 00:09:36.665970 containerd[1448]: time="2025-05-13T00:09:36.665804578Z" level=info msg="StartContainer for \"c3ab5ee3508e0ae39b79e714b1c17046062571f47b122efcee3bdea91c6999af\"" May 13 00:09:36.690326 systemd[1]: Started cri-containerd-c3ab5ee3508e0ae39b79e714b1c17046062571f47b122efcee3bdea91c6999af.scope - libcontainer container c3ab5ee3508e0ae39b79e714b1c17046062571f47b122efcee3bdea91c6999af. 
May 13 00:09:36.711454 containerd[1448]: time="2025-05-13T00:09:36.711409787Z" level=info msg="StartContainer for \"c3ab5ee3508e0ae39b79e714b1c17046062571f47b122efcee3bdea91c6999af\" returns successfully" May 13 00:09:37.268574 kubelet[1743]: E0513 00:09:37.268528 1743 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 13 00:09:37.388738 kubelet[1743]: E0513 00:09:37.388413 1743 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-lgq75" podUID="4939c0f9-f198-4e87-9dc5-adbf021d03cf" May 13 00:09:37.403394 kubelet[1743]: E0513 00:09:37.403312 1743 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 13 00:09:37.410201 kubelet[1743]: I0513 00:09:37.410127 1743 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-5kzk6" podStartSLOduration=4.138543745 podStartE2EDuration="6.410113233s" podCreationTimestamp="2025-05-13 00:09:31 +0000 UTC" firstStartedPulling="2025-05-13 00:09:34.377413505 +0000 UTC m=+4.521389947" lastFinishedPulling="2025-05-13 00:09:36.648983033 +0000 UTC m=+6.792959435" observedRunningTime="2025-05-13 00:09:37.410082814 +0000 UTC m=+7.554059256" watchObservedRunningTime="2025-05-13 00:09:37.410113233 +0000 UTC m=+7.554089675" May 13 00:09:38.268751 kubelet[1743]: E0513 00:09:38.268710 1743 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 13 00:09:38.403662 kubelet[1743]: E0513 00:09:38.403216 1743 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 13 00:09:38.725978 containerd[1448]: time="2025-05-13T00:09:38.725872470Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.29.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 13 00:09:38.726689 containerd[1448]: time="2025-05-13T00:09:38.726374647Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.29.3: active requests=0, bytes read=91256270" May 13 00:09:38.727477 containerd[1448]: time="2025-05-13T00:09:38.727424107Z" level=info msg="ImageCreate event name:\"sha256:add6372545fb406bb017769f222d84c50549ce13e3b19f1fbaee3d8a4aaef627\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 13 00:09:38.729893 containerd[1448]: time="2025-05-13T00:09:38.729203678Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:4505ec8f976470994b6a94295a4dabac0cb98375db050e959a22603e00ada90b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 13 00:09:38.731235 containerd[1448]: time="2025-05-13T00:09:38.730481113Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.29.3\" with image id \"sha256:add6372545fb406bb017769f222d84c50549ce13e3b19f1fbaee3d8a4aaef627\", repo tag \"ghcr.io/flatcar/calico/cni:v3.29.3\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:4505ec8f976470994b6a94295a4dabac0cb98375db050e959a22603e00ada90b\", size \"92625452\" in 2.080979092s" May 13 00:09:38.731235 containerd[1448]: time="2025-05-13T00:09:38.730509409Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.29.3\" 
returns image reference \"sha256:add6372545fb406bb017769f222d84c50549ce13e3b19f1fbaee3d8a4aaef627\"" May 13 00:09:38.732824 containerd[1448]: time="2025-05-13T00:09:38.732790797Z" level=info msg="CreateContainer within sandbox \"78604d30230afbe675ceda189791cdaddee57b99dbd23ac36a1b038e99811d62\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" May 13 00:09:38.745549 containerd[1448]: time="2025-05-13T00:09:38.745496943Z" level=info msg="CreateContainer within sandbox \"78604d30230afbe675ceda189791cdaddee57b99dbd23ac36a1b038e99811d62\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"f83ca54355f481d438635303fe1c9acf4c2e140abeacd218599a1a811e75a785\"" May 13 00:09:38.745996 containerd[1448]: time="2025-05-13T00:09:38.745928518Z" level=info msg="StartContainer for \"f83ca54355f481d438635303fe1c9acf4c2e140abeacd218599a1a811e75a785\"" May 13 00:09:38.781379 systemd[1]: Started cri-containerd-f83ca54355f481d438635303fe1c9acf4c2e140abeacd218599a1a811e75a785.scope - libcontainer container f83ca54355f481d438635303fe1c9acf4c2e140abeacd218599a1a811e75a785. May 13 00:09:38.804896 containerd[1448]: time="2025-05-13T00:09:38.804832914Z" level=info msg="StartContainer for \"f83ca54355f481d438635303fe1c9acf4c2e140abeacd218599a1a811e75a785\" returns successfully" May 13 00:09:39.268841 kubelet[1743]: E0513 00:09:39.268802 1743 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 13 00:09:39.286384 systemd[1]: cri-containerd-f83ca54355f481d438635303fe1c9acf4c2e140abeacd218599a1a811e75a785.scope: Deactivated successfully. May 13 00:09:39.301351 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-f83ca54355f481d438635303fe1c9acf4c2e140abeacd218599a1a811e75a785-rootfs.mount: Deactivated successfully. May 13 00:09:39.357353 kubelet[1743]: I0513 00:09:39.356772 1743 kubelet_node_status.go:488] "Fast updating node status as it just became ready" May 13 00:09:39.393244 systemd[1]: Created slice kubepods-besteffort-pod4939c0f9_f198_4e87_9dc5_adbf021d03cf.slice - libcontainer container kubepods-besteffort-pod4939c0f9_f198_4e87_9dc5_adbf021d03cf.slice. 
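The pod_startup_latency_tracker entry above for kube-proxy-5kzk6 is internally consistent: the end-to-end duration is the gap between podCreationTimestamp and the observed running time, and the SLO duration is that same gap with the image-pull window (firstStartedPulling to lastFinishedPulling) subtracted. A quick check using the timestamps from that entry, rounded to microseconds:

from datetime import datetime, timezone

# Timestamps copied from the kube-proxy-5kzk6 startup-latency entry above.
created       = datetime(2025, 5, 13, 0, 9, 31, 0,      tzinfo=timezone.utc)
running       = datetime(2025, 5, 13, 0, 9, 37, 410113, tzinfo=timezone.utc)
pull_started  = datetime(2025, 5, 13, 0, 9, 34, 377414, tzinfo=timezone.utc)
pull_finished = datetime(2025, 5, 13, 0, 9, 36, 648983, tzinfo=timezone.utc)

e2e  = (running - created).total_seconds()             # ~6.410s -> podStartE2EDuration
pull = (pull_finished - pull_started).total_seconds()  # ~2.272s spent pulling images
slo  = e2e - pull                                      # ~4.139s -> podStartSLOduration
print(f"e2e={e2e:.3f}s pull={pull:.3f}s slo={slo:.3f}s")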
May 13 00:09:39.395236 containerd[1448]: time="2025-05-13T00:09:39.395160419Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-lgq75,Uid:4939c0f9-f198-4e87-9dc5-adbf021d03cf,Namespace:calico-system,Attempt:0,}" May 13 00:09:39.406542 kubelet[1743]: E0513 00:09:39.406406 1743 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 13 00:09:39.503984 containerd[1448]: time="2025-05-13T00:09:39.503926296Z" level=info msg="shim disconnected" id=f83ca54355f481d438635303fe1c9acf4c2e140abeacd218599a1a811e75a785 namespace=k8s.io May 13 00:09:39.503984 containerd[1448]: time="2025-05-13T00:09:39.503979165Z" level=warning msg="cleaning up after shim disconnected" id=f83ca54355f481d438635303fe1c9acf4c2e140abeacd218599a1a811e75a785 namespace=k8s.io May 13 00:09:39.503984 containerd[1448]: time="2025-05-13T00:09:39.503987850Z" level=info msg="cleaning up dead shim" namespace=k8s.io May 13 00:09:39.610795 containerd[1448]: time="2025-05-13T00:09:39.610656446Z" level=error msg="Failed to destroy network for sandbox \"fc49ebaf6fe68955e0a8176b17f613f41c652a0818da43572193f4f627048bf3\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 13 00:09:39.611212 containerd[1448]: time="2025-05-13T00:09:39.611031774Z" level=error msg="encountered an error cleaning up failed sandbox \"fc49ebaf6fe68955e0a8176b17f613f41c652a0818da43572193f4f627048bf3\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 13 00:09:39.611212 containerd[1448]: time="2025-05-13T00:09:39.611094009Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-lgq75,Uid:4939c0f9-f198-4e87-9dc5-adbf021d03cf,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"fc49ebaf6fe68955e0a8176b17f613f41c652a0818da43572193f4f627048bf3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 13 00:09:39.611501 kubelet[1743]: E0513 00:09:39.611332 1743 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"fc49ebaf6fe68955e0a8176b17f613f41c652a0818da43572193f4f627048bf3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 13 00:09:39.611501 kubelet[1743]: E0513 00:09:39.611401 1743 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"fc49ebaf6fe68955e0a8176b17f613f41c652a0818da43572193f4f627048bf3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-lgq75" May 13 00:09:39.611501 kubelet[1743]: E0513 00:09:39.611424 1743 kuberuntime_manager.go:1168] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox 
\"fc49ebaf6fe68955e0a8176b17f613f41c652a0818da43572193f4f627048bf3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-lgq75" May 13 00:09:39.611715 kubelet[1743]: E0513 00:09:39.611467 1743 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-lgq75_calico-system(4939c0f9-f198-4e87-9dc5-adbf021d03cf)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-lgq75_calico-system(4939c0f9-f198-4e87-9dc5-adbf021d03cf)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"fc49ebaf6fe68955e0a8176b17f613f41c652a0818da43572193f4f627048bf3\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-lgq75" podUID="4939c0f9-f198-4e87-9dc5-adbf021d03cf" May 13 00:09:40.269495 kubelet[1743]: E0513 00:09:40.269449 1743 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 13 00:09:40.409281 kubelet[1743]: E0513 00:09:40.409242 1743 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 13 00:09:40.410064 containerd[1448]: time="2025-05-13T00:09:40.409924212Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.29.3\"" May 13 00:09:40.411227 kubelet[1743]: I0513 00:09:40.411207 1743 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="fc49ebaf6fe68955e0a8176b17f613f41c652a0818da43572193f4f627048bf3" May 13 00:09:40.412052 containerd[1448]: time="2025-05-13T00:09:40.411783297Z" level=info msg="StopPodSandbox for \"fc49ebaf6fe68955e0a8176b17f613f41c652a0818da43572193f4f627048bf3\"" May 13 00:09:40.412052 containerd[1448]: time="2025-05-13T00:09:40.411936457Z" level=info msg="Ensure that sandbox fc49ebaf6fe68955e0a8176b17f613f41c652a0818da43572193f4f627048bf3 in task-service has been cleanup successfully" May 13 00:09:40.434099 containerd[1448]: time="2025-05-13T00:09:40.434052740Z" level=error msg="StopPodSandbox for \"fc49ebaf6fe68955e0a8176b17f613f41c652a0818da43572193f4f627048bf3\" failed" error="failed to destroy network for sandbox \"fc49ebaf6fe68955e0a8176b17f613f41c652a0818da43572193f4f627048bf3\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 13 00:09:40.434310 kubelet[1743]: E0513 00:09:40.434263 1743 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"fc49ebaf6fe68955e0a8176b17f613f41c652a0818da43572193f4f627048bf3\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="fc49ebaf6fe68955e0a8176b17f613f41c652a0818da43572193f4f627048bf3" May 13 00:09:40.434369 kubelet[1743]: E0513 00:09:40.434320 1743 kuberuntime_manager.go:1477] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"fc49ebaf6fe68955e0a8176b17f613f41c652a0818da43572193f4f627048bf3"} May 13 00:09:40.434407 kubelet[1743]: 
E0513 00:09:40.434384 1743 kuberuntime_manager.go:1077] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"4939c0f9-f198-4e87-9dc5-adbf021d03cf\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"fc49ebaf6fe68955e0a8176b17f613f41c652a0818da43572193f4f627048bf3\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" May 13 00:09:40.434465 kubelet[1743]: E0513 00:09:40.434417 1743 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"4939c0f9-f198-4e87-9dc5-adbf021d03cf\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"fc49ebaf6fe68955e0a8176b17f613f41c652a0818da43572193f4f627048bf3\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-lgq75" podUID="4939c0f9-f198-4e87-9dc5-adbf021d03cf" May 13 00:09:41.269592 kubelet[1743]: E0513 00:09:41.269550 1743 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 13 00:09:42.270750 kubelet[1743]: E0513 00:09:42.270714 1743 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 13 00:09:43.100104 systemd[1]: Created slice kubepods-besteffort-podafc5033d_9818_4efc_a9ae_4d52c05302e0.slice - libcontainer container kubepods-besteffort-podafc5033d_9818_4efc_a9ae_4d52c05302e0.slice. May 13 00:09:43.243568 kubelet[1743]: I0513 00:09:43.243516 1743 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j466d\" (UniqueName: \"kubernetes.io/projected/afc5033d-9818-4efc-a9ae-4d52c05302e0-kube-api-access-j466d\") pod \"nginx-deployment-8587fbcb89-2tphb\" (UID: \"afc5033d-9818-4efc-a9ae-4d52c05302e0\") " pod="default/nginx-deployment-8587fbcb89-2tphb" May 13 00:09:43.271194 kubelet[1743]: E0513 00:09:43.271076 1743 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 13 00:09:43.404698 containerd[1448]: time="2025-05-13T00:09:43.404562554Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-8587fbcb89-2tphb,Uid:afc5033d-9818-4efc-a9ae-4d52c05302e0,Namespace:default,Attempt:0,}" May 13 00:09:43.480882 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3057387866.mount: Deactivated successfully. 
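The csi-node-driver sandbox failures above (and the nginx-deployment one that follows) all report the same root cause from the Calico CNI plugin: it stats /var/lib/calico/nodename, a file the calico/node container writes once it is running with /var/lib/calico mounted, and until calico-node is fully up every CNI ADD and DEL fails with "no such file or directory". A minimal reproduction of that precondition check, assuming only the path named in the error message:

import os

NODENAME_FILE = "/var/lib/calico/nodename"

def calico_node_ready() -> bool:
    # Mirrors the check the Calico CNI plugin is failing on in the log above.
    if not os.path.exists(NODENAME_FILE):
        print(f"stat {NODENAME_FILE}: no such file or directory: "
              "check that the calico/node container is running "
              "and has mounted /var/lib/calico/")
        return False
    with open(NODENAME_FILE) as f:
        return bool(f.read().strip())

if __name__ == "__main__":
    calico_node_ready()

This is consistent with the later part of the log, where the sandbox for the nginx pod is set up successfully once the calico-node container has started and Calico IPAM can assign an address.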
May 13 00:09:43.586734 containerd[1448]: time="2025-05-13T00:09:43.586674115Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.29.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 13 00:09:43.588851 containerd[1448]: time="2025-05-13T00:09:43.588815591Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.29.3: active requests=0, bytes read=138981893" May 13 00:09:43.593793 containerd[1448]: time="2025-05-13T00:09:43.593749542Z" level=info msg="ImageCreate event name:\"sha256:cdcce3ec4624a24c28cdc07b0ee29ddf6703628edee7452a3f8a8b4816bfd057\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 13 00:09:43.597293 containerd[1448]: time="2025-05-13T00:09:43.597249320Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:750e267b4f8217e0ca9e4107228370190d1a2499b72112ad04370ab9b4553916\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 13 00:09:43.598179 containerd[1448]: time="2025-05-13T00:09:43.597816722Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.29.3\" with image id \"sha256:cdcce3ec4624a24c28cdc07b0ee29ddf6703628edee7452a3f8a8b4816bfd057\", repo tag \"ghcr.io/flatcar/calico/node:v3.29.3\", repo digest \"ghcr.io/flatcar/calico/node@sha256:750e267b4f8217e0ca9e4107228370190d1a2499b72112ad04370ab9b4553916\", size \"138981755\" in 3.18785113s" May 13 00:09:43.598179 containerd[1448]: time="2025-05-13T00:09:43.597851337Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.29.3\" returns image reference \"sha256:cdcce3ec4624a24c28cdc07b0ee29ddf6703628edee7452a3f8a8b4816bfd057\"" May 13 00:09:43.606387 containerd[1448]: time="2025-05-13T00:09:43.606353055Z" level=info msg="CreateContainer within sandbox \"78604d30230afbe675ceda189791cdaddee57b99dbd23ac36a1b038e99811d62\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" May 13 00:09:43.619002 containerd[1448]: time="2025-05-13T00:09:43.618949805Z" level=info msg="CreateContainer within sandbox \"78604d30230afbe675ceda189791cdaddee57b99dbd23ac36a1b038e99811d62\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"f2a5175d7f0e6ffc63f778c215836a8ffa7624de8067e551e12615adeb6ffcca\"" May 13 00:09:43.619479 containerd[1448]: time="2025-05-13T00:09:43.619421566Z" level=info msg="StartContainer for \"f2a5175d7f0e6ffc63f778c215836a8ffa7624de8067e551e12615adeb6ffcca\"" May 13 00:09:43.638166 containerd[1448]: time="2025-05-13T00:09:43.637414745Z" level=error msg="Failed to destroy network for sandbox \"d8cbf71b6e5fafe0cd6d025493748b041b05141df9659d4f219281d9e2400121\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 13 00:09:43.639692 containerd[1448]: time="2025-05-13T00:09:43.639648021Z" level=error msg="encountered an error cleaning up failed sandbox \"d8cbf71b6e5fafe0cd6d025493748b041b05141df9659d4f219281d9e2400121\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 13 00:09:43.639796 containerd[1448]: time="2025-05-13T00:09:43.639714089Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-8587fbcb89-2tphb,Uid:afc5033d-9818-4efc-a9ae-4d52c05302e0,Namespace:default,Attempt:0,} failed, error" error="failed to setup network for sandbox 
\"d8cbf71b6e5fafe0cd6d025493748b041b05141df9659d4f219281d9e2400121\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 13 00:09:43.640150 kubelet[1743]: E0513 00:09:43.639962 1743 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d8cbf71b6e5fafe0cd6d025493748b041b05141df9659d4f219281d9e2400121\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 13 00:09:43.640150 kubelet[1743]: E0513 00:09:43.640025 1743 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d8cbf71b6e5fafe0cd6d025493748b041b05141df9659d4f219281d9e2400121\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="default/nginx-deployment-8587fbcb89-2tphb" May 13 00:09:43.640150 kubelet[1743]: E0513 00:09:43.640043 1743 kuberuntime_manager.go:1168] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d8cbf71b6e5fafe0cd6d025493748b041b05141df9659d4f219281d9e2400121\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="default/nginx-deployment-8587fbcb89-2tphb" May 13 00:09:43.640288 kubelet[1743]: E0513 00:09:43.640085 1743 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"nginx-deployment-8587fbcb89-2tphb_default(afc5033d-9818-4efc-a9ae-4d52c05302e0)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"nginx-deployment-8587fbcb89-2tphb_default(afc5033d-9818-4efc-a9ae-4d52c05302e0)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"d8cbf71b6e5fafe0cd6d025493748b041b05141df9659d4f219281d9e2400121\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="default/nginx-deployment-8587fbcb89-2tphb" podUID="afc5033d-9818-4efc-a9ae-4d52c05302e0" May 13 00:09:43.647403 systemd[1]: Started cri-containerd-f2a5175d7f0e6ffc63f778c215836a8ffa7624de8067e551e12615adeb6ffcca.scope - libcontainer container f2a5175d7f0e6ffc63f778c215836a8ffa7624de8067e551e12615adeb6ffcca. May 13 00:09:43.672275 containerd[1448]: time="2025-05-13T00:09:43.671686169Z" level=info msg="StartContainer for \"f2a5175d7f0e6ffc63f778c215836a8ffa7624de8067e551e12615adeb6ffcca\" returns successfully" May 13 00:09:43.816768 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information. May 13 00:09:43.816896 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld . All Rights Reserved. May 13 00:09:44.271933 kubelet[1743]: E0513 00:09:44.271898 1743 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 13 00:09:44.356299 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-d8cbf71b6e5fafe0cd6d025493748b041b05141df9659d4f219281d9e2400121-shm.mount: Deactivated successfully. 
May 13 00:09:44.423355 kubelet[1743]: E0513 00:09:44.422993 1743 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 13 00:09:44.423853 kubelet[1743]: I0513 00:09:44.423836 1743 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d8cbf71b6e5fafe0cd6d025493748b041b05141df9659d4f219281d9e2400121" May 13 00:09:44.424427 containerd[1448]: time="2025-05-13T00:09:44.424395799Z" level=info msg="StopPodSandbox for \"d8cbf71b6e5fafe0cd6d025493748b041b05141df9659d4f219281d9e2400121\"" May 13 00:09:44.424704 containerd[1448]: time="2025-05-13T00:09:44.424540458Z" level=info msg="Ensure that sandbox d8cbf71b6e5fafe0cd6d025493748b041b05141df9659d4f219281d9e2400121 in task-service has been cleanup successfully" May 13 00:09:44.467853 kubelet[1743]: I0513 00:09:44.467741 1743 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-mtx68" podStartSLOduration=4.238438083 podStartE2EDuration="13.467721259s" podCreationTimestamp="2025-05-13 00:09:31 +0000 UTC" firstStartedPulling="2025-05-13 00:09:34.369813614 +0000 UTC m=+4.513790056" lastFinishedPulling="2025-05-13 00:09:43.59909679 +0000 UTC m=+13.743073232" observedRunningTime="2025-05-13 00:09:44.437152677 +0000 UTC m=+14.581129079" watchObservedRunningTime="2025-05-13 00:09:44.467721259 +0000 UTC m=+14.611697701" May 13 00:09:44.556955 containerd[1448]: 2025-05-13 00:09:44.467 [INFO][2441] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="d8cbf71b6e5fafe0cd6d025493748b041b05141df9659d4f219281d9e2400121" May 13 00:09:44.556955 containerd[1448]: 2025-05-13 00:09:44.467 [INFO][2441] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="d8cbf71b6e5fafe0cd6d025493748b041b05141df9659d4f219281d9e2400121" iface="eth0" netns="/var/run/netns/cni-3a0ed7f9-c765-daeb-ccad-1637ab3668ad" May 13 00:09:44.556955 containerd[1448]: 2025-05-13 00:09:44.468 [INFO][2441] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="d8cbf71b6e5fafe0cd6d025493748b041b05141df9659d4f219281d9e2400121" iface="eth0" netns="/var/run/netns/cni-3a0ed7f9-c765-daeb-ccad-1637ab3668ad" May 13 00:09:44.556955 containerd[1448]: 2025-05-13 00:09:44.468 [INFO][2441] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="d8cbf71b6e5fafe0cd6d025493748b041b05141df9659d4f219281d9e2400121" iface="eth0" netns="/var/run/netns/cni-3a0ed7f9-c765-daeb-ccad-1637ab3668ad" May 13 00:09:44.556955 containerd[1448]: 2025-05-13 00:09:44.468 [INFO][2441] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="d8cbf71b6e5fafe0cd6d025493748b041b05141df9659d4f219281d9e2400121" May 13 00:09:44.556955 containerd[1448]: 2025-05-13 00:09:44.468 [INFO][2441] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="d8cbf71b6e5fafe0cd6d025493748b041b05141df9659d4f219281d9e2400121" May 13 00:09:44.556955 containerd[1448]: 2025-05-13 00:09:44.541 [INFO][2450] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="d8cbf71b6e5fafe0cd6d025493748b041b05141df9659d4f219281d9e2400121" HandleID="k8s-pod-network.d8cbf71b6e5fafe0cd6d025493748b041b05141df9659d4f219281d9e2400121" Workload="10.0.0.19-k8s-nginx--deployment--8587fbcb89--2tphb-eth0" May 13 00:09:44.556955 containerd[1448]: 2025-05-13 00:09:44.541 [INFO][2450] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. 
May 13 00:09:44.556955 containerd[1448]: 2025-05-13 00:09:44.541 [INFO][2450] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 13 00:09:44.556955 containerd[1448]: 2025-05-13 00:09:44.552 [WARNING][2450] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="d8cbf71b6e5fafe0cd6d025493748b041b05141df9659d4f219281d9e2400121" HandleID="k8s-pod-network.d8cbf71b6e5fafe0cd6d025493748b041b05141df9659d4f219281d9e2400121" Workload="10.0.0.19-k8s-nginx--deployment--8587fbcb89--2tphb-eth0" May 13 00:09:44.556955 containerd[1448]: 2025-05-13 00:09:44.552 [INFO][2450] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="d8cbf71b6e5fafe0cd6d025493748b041b05141df9659d4f219281d9e2400121" HandleID="k8s-pod-network.d8cbf71b6e5fafe0cd6d025493748b041b05141df9659d4f219281d9e2400121" Workload="10.0.0.19-k8s-nginx--deployment--8587fbcb89--2tphb-eth0" May 13 00:09:44.556955 containerd[1448]: 2025-05-13 00:09:44.553 [INFO][2450] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 13 00:09:44.556955 containerd[1448]: 2025-05-13 00:09:44.555 [INFO][2441] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="d8cbf71b6e5fafe0cd6d025493748b041b05141df9659d4f219281d9e2400121" May 13 00:09:44.558248 systemd[1]: run-netns-cni\x2d3a0ed7f9\x2dc765\x2ddaeb\x2dccad\x2d1637ab3668ad.mount: Deactivated successfully. May 13 00:09:44.558391 containerd[1448]: time="2025-05-13T00:09:44.558249414Z" level=info msg="TearDown network for sandbox \"d8cbf71b6e5fafe0cd6d025493748b041b05141df9659d4f219281d9e2400121\" successfully" May 13 00:09:44.558391 containerd[1448]: time="2025-05-13T00:09:44.558288989Z" level=info msg="StopPodSandbox for \"d8cbf71b6e5fafe0cd6d025493748b041b05141df9659d4f219281d9e2400121\" returns successfully" May 13 00:09:44.559296 containerd[1448]: time="2025-05-13T00:09:44.559268502Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-8587fbcb89-2tphb,Uid:afc5033d-9818-4efc-a9ae-4d52c05302e0,Namespace:default,Attempt:1,}" May 13 00:09:44.665580 systemd-networkd[1389]: calif9a9c5d95e1: Link UP May 13 00:09:44.665738 systemd-networkd[1389]: calif9a9c5d95e1: Gained carrier May 13 00:09:44.673667 containerd[1448]: 2025-05-13 00:09:44.589 [INFO][2459] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist May 13 00:09:44.673667 containerd[1448]: 2025-05-13 00:09:44.601 [INFO][2459] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {10.0.0.19-k8s-nginx--deployment--8587fbcb89--2tphb-eth0 nginx-deployment-8587fbcb89- default afc5033d-9818-4efc-a9ae-4d52c05302e0 993 0 2025-05-13 00:09:43 +0000 UTC map[app:nginx pod-template-hash:8587fbcb89 projectcalico.org/namespace:default projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:default] map[] [] [] []} {k8s 10.0.0.19 nginx-deployment-8587fbcb89-2tphb eth0 default [] [] [kns.default ksa.default.default] calif9a9c5d95e1 [] []}} ContainerID="8ff8eee48aaf4aca14cf2f82dd39175e58557e95f073421db6c3636a8c4bfa1b" Namespace="default" Pod="nginx-deployment-8587fbcb89-2tphb" WorkloadEndpoint="10.0.0.19-k8s-nginx--deployment--8587fbcb89--2tphb-" May 13 00:09:44.673667 containerd[1448]: 2025-05-13 00:09:44.601 [INFO][2459] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="8ff8eee48aaf4aca14cf2f82dd39175e58557e95f073421db6c3636a8c4bfa1b" Namespace="default" Pod="nginx-deployment-8587fbcb89-2tphb" WorkloadEndpoint="10.0.0.19-k8s-nginx--deployment--8587fbcb89--2tphb-eth0" May 13 
00:09:44.673667 containerd[1448]: 2025-05-13 00:09:44.625 [INFO][2473] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="8ff8eee48aaf4aca14cf2f82dd39175e58557e95f073421db6c3636a8c4bfa1b" HandleID="k8s-pod-network.8ff8eee48aaf4aca14cf2f82dd39175e58557e95f073421db6c3636a8c4bfa1b" Workload="10.0.0.19-k8s-nginx--deployment--8587fbcb89--2tphb-eth0" May 13 00:09:44.673667 containerd[1448]: 2025-05-13 00:09:44.636 [INFO][2473] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="8ff8eee48aaf4aca14cf2f82dd39175e58557e95f073421db6c3636a8c4bfa1b" HandleID="k8s-pod-network.8ff8eee48aaf4aca14cf2f82dd39175e58557e95f073421db6c3636a8c4bfa1b" Workload="10.0.0.19-k8s-nginx--deployment--8587fbcb89--2tphb-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x4000312710), Attrs:map[string]string{"namespace":"default", "node":"10.0.0.19", "pod":"nginx-deployment-8587fbcb89-2tphb", "timestamp":"2025-05-13 00:09:44.625811035 +0000 UTC"}, Hostname:"10.0.0.19", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} May 13 00:09:44.673667 containerd[1448]: 2025-05-13 00:09:44.636 [INFO][2473] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 13 00:09:44.673667 containerd[1448]: 2025-05-13 00:09:44.636 [INFO][2473] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 13 00:09:44.673667 containerd[1448]: 2025-05-13 00:09:44.636 [INFO][2473] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host '10.0.0.19' May 13 00:09:44.673667 containerd[1448]: 2025-05-13 00:09:44.638 [INFO][2473] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.8ff8eee48aaf4aca14cf2f82dd39175e58557e95f073421db6c3636a8c4bfa1b" host="10.0.0.19" May 13 00:09:44.673667 containerd[1448]: 2025-05-13 00:09:44.642 [INFO][2473] ipam/ipam.go 372: Looking up existing affinities for host host="10.0.0.19" May 13 00:09:44.673667 containerd[1448]: 2025-05-13 00:09:44.646 [INFO][2473] ipam/ipam.go 489: Trying affinity for 192.168.37.0/26 host="10.0.0.19" May 13 00:09:44.673667 containerd[1448]: 2025-05-13 00:09:44.647 [INFO][2473] ipam/ipam.go 155: Attempting to load block cidr=192.168.37.0/26 host="10.0.0.19" May 13 00:09:44.673667 containerd[1448]: 2025-05-13 00:09:44.649 [INFO][2473] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.37.0/26 host="10.0.0.19" May 13 00:09:44.673667 containerd[1448]: 2025-05-13 00:09:44.649 [INFO][2473] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.37.0/26 handle="k8s-pod-network.8ff8eee48aaf4aca14cf2f82dd39175e58557e95f073421db6c3636a8c4bfa1b" host="10.0.0.19" May 13 00:09:44.673667 containerd[1448]: 2025-05-13 00:09:44.651 [INFO][2473] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.8ff8eee48aaf4aca14cf2f82dd39175e58557e95f073421db6c3636a8c4bfa1b May 13 00:09:44.673667 containerd[1448]: 2025-05-13 00:09:44.654 [INFO][2473] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.37.0/26 handle="k8s-pod-network.8ff8eee48aaf4aca14cf2f82dd39175e58557e95f073421db6c3636a8c4bfa1b" host="10.0.0.19" May 13 00:09:44.673667 containerd[1448]: 2025-05-13 00:09:44.659 [INFO][2473] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.37.1/26] block=192.168.37.0/26 handle="k8s-pod-network.8ff8eee48aaf4aca14cf2f82dd39175e58557e95f073421db6c3636a8c4bfa1b" host="10.0.0.19" May 13 00:09:44.673667 
containerd[1448]: 2025-05-13 00:09:44.659 [INFO][2473] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.37.1/26] handle="k8s-pod-network.8ff8eee48aaf4aca14cf2f82dd39175e58557e95f073421db6c3636a8c4bfa1b" host="10.0.0.19" May 13 00:09:44.673667 containerd[1448]: 2025-05-13 00:09:44.659 [INFO][2473] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 13 00:09:44.673667 containerd[1448]: 2025-05-13 00:09:44.659 [INFO][2473] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.37.1/26] IPv6=[] ContainerID="8ff8eee48aaf4aca14cf2f82dd39175e58557e95f073421db6c3636a8c4bfa1b" HandleID="k8s-pod-network.8ff8eee48aaf4aca14cf2f82dd39175e58557e95f073421db6c3636a8c4bfa1b" Workload="10.0.0.19-k8s-nginx--deployment--8587fbcb89--2tphb-eth0" May 13 00:09:44.674324 containerd[1448]: 2025-05-13 00:09:44.660 [INFO][2459] cni-plugin/k8s.go 386: Populated endpoint ContainerID="8ff8eee48aaf4aca14cf2f82dd39175e58557e95f073421db6c3636a8c4bfa1b" Namespace="default" Pod="nginx-deployment-8587fbcb89-2tphb" WorkloadEndpoint="10.0.0.19-k8s-nginx--deployment--8587fbcb89--2tphb-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"10.0.0.19-k8s-nginx--deployment--8587fbcb89--2tphb-eth0", GenerateName:"nginx-deployment-8587fbcb89-", Namespace:"default", SelfLink:"", UID:"afc5033d-9818-4efc-a9ae-4d52c05302e0", ResourceVersion:"993", Generation:0, CreationTimestamp:time.Date(2025, time.May, 13, 0, 9, 43, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app":"nginx", "pod-template-hash":"8587fbcb89", "projectcalico.org/namespace":"default", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"default"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"10.0.0.19", ContainerID:"", Pod:"nginx-deployment-8587fbcb89-2tphb", Endpoint:"eth0", ServiceAccountName:"default", IPNetworks:[]string{"192.168.37.1/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.default", "ksa.default.default"}, InterfaceName:"calif9a9c5d95e1", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} May 13 00:09:44.674324 containerd[1448]: 2025-05-13 00:09:44.660 [INFO][2459] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.37.1/32] ContainerID="8ff8eee48aaf4aca14cf2f82dd39175e58557e95f073421db6c3636a8c4bfa1b" Namespace="default" Pod="nginx-deployment-8587fbcb89-2tphb" WorkloadEndpoint="10.0.0.19-k8s-nginx--deployment--8587fbcb89--2tphb-eth0" May 13 00:09:44.674324 containerd[1448]: 2025-05-13 00:09:44.660 [INFO][2459] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calif9a9c5d95e1 ContainerID="8ff8eee48aaf4aca14cf2f82dd39175e58557e95f073421db6c3636a8c4bfa1b" Namespace="default" Pod="nginx-deployment-8587fbcb89-2tphb" WorkloadEndpoint="10.0.0.19-k8s-nginx--deployment--8587fbcb89--2tphb-eth0" May 13 00:09:44.674324 containerd[1448]: 2025-05-13 00:09:44.665 [INFO][2459] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="8ff8eee48aaf4aca14cf2f82dd39175e58557e95f073421db6c3636a8c4bfa1b" Namespace="default" Pod="nginx-deployment-8587fbcb89-2tphb" WorkloadEndpoint="10.0.0.19-k8s-nginx--deployment--8587fbcb89--2tphb-eth0" May 13 00:09:44.674324 containerd[1448]: 2025-05-13 
00:09:44.666 [INFO][2459] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="8ff8eee48aaf4aca14cf2f82dd39175e58557e95f073421db6c3636a8c4bfa1b" Namespace="default" Pod="nginx-deployment-8587fbcb89-2tphb" WorkloadEndpoint="10.0.0.19-k8s-nginx--deployment--8587fbcb89--2tphb-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"10.0.0.19-k8s-nginx--deployment--8587fbcb89--2tphb-eth0", GenerateName:"nginx-deployment-8587fbcb89-", Namespace:"default", SelfLink:"", UID:"afc5033d-9818-4efc-a9ae-4d52c05302e0", ResourceVersion:"993", Generation:0, CreationTimestamp:time.Date(2025, time.May, 13, 0, 9, 43, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app":"nginx", "pod-template-hash":"8587fbcb89", "projectcalico.org/namespace":"default", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"default"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"10.0.0.19", ContainerID:"8ff8eee48aaf4aca14cf2f82dd39175e58557e95f073421db6c3636a8c4bfa1b", Pod:"nginx-deployment-8587fbcb89-2tphb", Endpoint:"eth0", ServiceAccountName:"default", IPNetworks:[]string{"192.168.37.1/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.default", "ksa.default.default"}, InterfaceName:"calif9a9c5d95e1", MAC:"26:7a:43:b1:e2:49", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} May 13 00:09:44.674324 containerd[1448]: 2025-05-13 00:09:44.671 [INFO][2459] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="8ff8eee48aaf4aca14cf2f82dd39175e58557e95f073421db6c3636a8c4bfa1b" Namespace="default" Pod="nginx-deployment-8587fbcb89-2tphb" WorkloadEndpoint="10.0.0.19-k8s-nginx--deployment--8587fbcb89--2tphb-eth0" May 13 00:09:44.688168 containerd[1448]: time="2025-05-13T00:09:44.688082175Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 13 00:09:44.688168 containerd[1448]: time="2025-05-13T00:09:44.688142039Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 13 00:09:44.688168 containerd[1448]: time="2025-05-13T00:09:44.688165448Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 13 00:09:44.688322 containerd[1448]: time="2025-05-13T00:09:44.688256965Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 13 00:09:44.708370 systemd[1]: Started cri-containerd-8ff8eee48aaf4aca14cf2f82dd39175e58557e95f073421db6c3636a8c4bfa1b.scope - libcontainer container 8ff8eee48aaf4aca14cf2f82dd39175e58557e95f073421db6c3636a8c4bfa1b. 
May 13 00:09:44.716958 systemd-resolved[1312]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address May 13 00:09:44.731649 containerd[1448]: time="2025-05-13T00:09:44.731610556Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-8587fbcb89-2tphb,Uid:afc5033d-9818-4efc-a9ae-4d52c05302e0,Namespace:default,Attempt:1,} returns sandbox id \"8ff8eee48aaf4aca14cf2f82dd39175e58557e95f073421db6c3636a8c4bfa1b\"" May 13 00:09:44.733247 containerd[1448]: time="2025-05-13T00:09:44.733165620Z" level=info msg="PullImage \"ghcr.io/flatcar/nginx:latest\"" May 13 00:09:45.189211 kernel: bpftool[2661]: memfd_create() called without MFD_EXEC or MFD_NOEXEC_SEAL set May 13 00:09:45.273033 kubelet[1743]: E0513 00:09:45.272972 1743 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 13 00:09:45.326451 systemd-networkd[1389]: vxlan.calico: Link UP May 13 00:09:45.326459 systemd-networkd[1389]: vxlan.calico: Gained carrier May 13 00:09:45.427403 kubelet[1743]: I0513 00:09:45.427359 1743 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" May 13 00:09:45.428103 kubelet[1743]: E0513 00:09:45.427747 1743 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 13 00:09:46.273629 kubelet[1743]: E0513 00:09:46.273593 1743 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 13 00:09:46.430246 kubelet[1743]: E0513 00:09:46.430213 1743 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 13 00:09:46.546211 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3383843025.mount: Deactivated successfully. 
May 13 00:09:46.566017 systemd-networkd[1389]: calif9a9c5d95e1: Gained IPv6LL May 13 00:09:46.756294 systemd-networkd[1389]: vxlan.calico: Gained IPv6LL May 13 00:09:47.274545 kubelet[1743]: E0513 00:09:47.274496 1743 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 13 00:09:47.395363 containerd[1448]: time="2025-05-13T00:09:47.395178464Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/nginx:latest\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 13 00:09:47.396137 containerd[1448]: time="2025-05-13T00:09:47.395960322Z" level=info msg="stop pulling image ghcr.io/flatcar/nginx:latest: active requests=0, bytes read=69948859" May 13 00:09:47.396891 containerd[1448]: time="2025-05-13T00:09:47.396831090Z" level=info msg="ImageCreate event name:\"sha256:e8b1cb61bd96acc3ff3c695318c9cc691213d532eee3731d038af92816fcb5f4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 13 00:09:47.400021 containerd[1448]: time="2025-05-13T00:09:47.399969128Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/nginx@sha256:beabce8f1782671ba500ddff99dd260fbf9c5ec85fb9c3162e35a3c40bafd023\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 13 00:09:47.400998 containerd[1448]: time="2025-05-13T00:09:47.400958775Z" level=info msg="Pulled image \"ghcr.io/flatcar/nginx:latest\" with image id \"sha256:e8b1cb61bd96acc3ff3c695318c9cc691213d532eee3731d038af92816fcb5f4\", repo tag \"ghcr.io/flatcar/nginx:latest\", repo digest \"ghcr.io/flatcar/nginx@sha256:beabce8f1782671ba500ddff99dd260fbf9c5ec85fb9c3162e35a3c40bafd023\", size \"69948737\" in 2.667744096s" May 13 00:09:47.401050 containerd[1448]: time="2025-05-13T00:09:47.400995867Z" level=info msg="PullImage \"ghcr.io/flatcar/nginx:latest\" returns image reference \"sha256:e8b1cb61bd96acc3ff3c695318c9cc691213d532eee3731d038af92816fcb5f4\"" May 13 00:09:47.403025 containerd[1448]: time="2025-05-13T00:09:47.402987925Z" level=info msg="CreateContainer within sandbox \"8ff8eee48aaf4aca14cf2f82dd39175e58557e95f073421db6c3636a8c4bfa1b\" for container &ContainerMetadata{Name:nginx,Attempt:0,}" May 13 00:09:47.419164 containerd[1448]: time="2025-05-13T00:09:47.419067000Z" level=info msg="CreateContainer within sandbox \"8ff8eee48aaf4aca14cf2f82dd39175e58557e95f073421db6c3636a8c4bfa1b\" for &ContainerMetadata{Name:nginx,Attempt:0,} returns container id \"c6a3dd9c28785be076d523a26f3ea65245a436f16c09979f6d2205fceae9f8c7\"" May 13 00:09:47.419852 containerd[1448]: time="2025-05-13T00:09:47.419589213Z" level=info msg="StartContainer for \"c6a3dd9c28785be076d523a26f3ea65245a436f16c09979f6d2205fceae9f8c7\"" May 13 00:09:47.526394 systemd[1]: Started cri-containerd-c6a3dd9c28785be076d523a26f3ea65245a436f16c09979f6d2205fceae9f8c7.scope - libcontainer container c6a3dd9c28785be076d523a26f3ea65245a436f16c09979f6d2205fceae9f8c7. 
May 13 00:09:47.556471 containerd[1448]: time="2025-05-13T00:09:47.556427245Z" level=info msg="StartContainer for \"c6a3dd9c28785be076d523a26f3ea65245a436f16c09979f6d2205fceae9f8c7\" returns successfully" May 13 00:09:48.275126 kubelet[1743]: E0513 00:09:48.275079 1743 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 13 00:09:48.465163 kubelet[1743]: I0513 00:09:48.465112 1743 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="default/nginx-deployment-8587fbcb89-2tphb" podStartSLOduration=2.795952499 podStartE2EDuration="5.465079723s" podCreationTimestamp="2025-05-13 00:09:43 +0000 UTC" firstStartedPulling="2025-05-13 00:09:44.732778104 +0000 UTC m=+14.876754546" lastFinishedPulling="2025-05-13 00:09:47.401905328 +0000 UTC m=+17.545881770" observedRunningTime="2025-05-13 00:09:48.464958926 +0000 UTC m=+18.608935368" watchObservedRunningTime="2025-05-13 00:09:48.465079723 +0000 UTC m=+18.609056165" May 13 00:09:49.275769 kubelet[1743]: E0513 00:09:49.275725 1743 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 13 00:09:50.276797 kubelet[1743]: E0513 00:09:50.276753 1743 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 13 00:09:50.447496 systemd[1]: Created slice kubepods-besteffort-podff769dbd_1435_418e_8ffd_70ebd8373c7a.slice - libcontainer container kubepods-besteffort-podff769dbd_1435_418e_8ffd_70ebd8373c7a.slice. May 13 00:09:50.485782 kubelet[1743]: I0513 00:09:50.485733 1743 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"data\" (UniqueName: \"kubernetes.io/empty-dir/ff769dbd-1435-418e-8ffd-70ebd8373c7a-data\") pod \"nfs-server-provisioner-0\" (UID: \"ff769dbd-1435-418e-8ffd-70ebd8373c7a\") " pod="default/nfs-server-provisioner-0" May 13 00:09:50.485782 kubelet[1743]: I0513 00:09:50.485779 1743 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-blh98\" (UniqueName: \"kubernetes.io/projected/ff769dbd-1435-418e-8ffd-70ebd8373c7a-kube-api-access-blh98\") pod \"nfs-server-provisioner-0\" (UID: \"ff769dbd-1435-418e-8ffd-70ebd8373c7a\") " pod="default/nfs-server-provisioner-0" May 13 00:09:50.750273 containerd[1448]: time="2025-05-13T00:09:50.750120922Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nfs-server-provisioner-0,Uid:ff769dbd-1435-418e-8ffd-70ebd8373c7a,Namespace:default,Attempt:0,}" May 13 00:09:50.884880 systemd-networkd[1389]: cali60e51b789ff: Link UP May 13 00:09:50.885213 systemd-networkd[1389]: cali60e51b789ff: Gained carrier May 13 00:09:50.901637 containerd[1448]: 2025-05-13 00:09:50.800 [INFO][2898] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {10.0.0.19-k8s-nfs--server--provisioner--0-eth0 nfs-server-provisioner- default ff769dbd-1435-418e-8ffd-70ebd8373c7a 1045 0 2025-05-13 00:09:50 +0000 UTC map[app:nfs-server-provisioner apps.kubernetes.io/pod-index:0 chart:nfs-server-provisioner-1.8.0 controller-revision-hash:nfs-server-provisioner-d5cbb7f57 heritage:Helm projectcalico.org/namespace:default projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:nfs-server-provisioner release:nfs-server-provisioner statefulset.kubernetes.io/pod-name:nfs-server-provisioner-0] map[] [] [] []} {k8s 10.0.0.19 nfs-server-provisioner-0 eth0 nfs-server-provisioner [] 
[] [kns.default ksa.default.nfs-server-provisioner] cali60e51b789ff [{nfs TCP 2049 0 } {nfs-udp UDP 2049 0 } {nlockmgr TCP 32803 0 } {nlockmgr-udp UDP 32803 0 } {mountd TCP 20048 0 } {mountd-udp UDP 20048 0 } {rquotad TCP 875 0 } {rquotad-udp UDP 875 0 } {rpcbind TCP 111 0 } {rpcbind-udp UDP 111 0 } {statd TCP 662 0 } {statd-udp UDP 662 0 }] []}} ContainerID="db3ea1a1435680b1d8e2a61173d5c859ea2dc5721c256526d507a9b24b315921" Namespace="default" Pod="nfs-server-provisioner-0" WorkloadEndpoint="10.0.0.19-k8s-nfs--server--provisioner--0-" May 13 00:09:50.901637 containerd[1448]: 2025-05-13 00:09:50.801 [INFO][2898] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="db3ea1a1435680b1d8e2a61173d5c859ea2dc5721c256526d507a9b24b315921" Namespace="default" Pod="nfs-server-provisioner-0" WorkloadEndpoint="10.0.0.19-k8s-nfs--server--provisioner--0-eth0" May 13 00:09:50.901637 containerd[1448]: 2025-05-13 00:09:50.839 [INFO][2912] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="db3ea1a1435680b1d8e2a61173d5c859ea2dc5721c256526d507a9b24b315921" HandleID="k8s-pod-network.db3ea1a1435680b1d8e2a61173d5c859ea2dc5721c256526d507a9b24b315921" Workload="10.0.0.19-k8s-nfs--server--provisioner--0-eth0" May 13 00:09:50.901637 containerd[1448]: 2025-05-13 00:09:50.852 [INFO][2912] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="db3ea1a1435680b1d8e2a61173d5c859ea2dc5721c256526d507a9b24b315921" HandleID="k8s-pod-network.db3ea1a1435680b1d8e2a61173d5c859ea2dc5721c256526d507a9b24b315921" Workload="10.0.0.19-k8s-nfs--server--provisioner--0-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40003aaf30), Attrs:map[string]string{"namespace":"default", "node":"10.0.0.19", "pod":"nfs-server-provisioner-0", "timestamp":"2025-05-13 00:09:50.839671675 +0000 UTC"}, Hostname:"10.0.0.19", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} May 13 00:09:50.901637 containerd[1448]: 2025-05-13 00:09:50.852 [INFO][2912] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 13 00:09:50.901637 containerd[1448]: 2025-05-13 00:09:50.852 [INFO][2912] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
May 13 00:09:50.901637 containerd[1448]: 2025-05-13 00:09:50.852 [INFO][2912] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host '10.0.0.19' May 13 00:09:50.901637 containerd[1448]: 2025-05-13 00:09:50.854 [INFO][2912] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.db3ea1a1435680b1d8e2a61173d5c859ea2dc5721c256526d507a9b24b315921" host="10.0.0.19" May 13 00:09:50.901637 containerd[1448]: 2025-05-13 00:09:50.858 [INFO][2912] ipam/ipam.go 372: Looking up existing affinities for host host="10.0.0.19" May 13 00:09:50.901637 containerd[1448]: 2025-05-13 00:09:50.863 [INFO][2912] ipam/ipam.go 489: Trying affinity for 192.168.37.0/26 host="10.0.0.19" May 13 00:09:50.901637 containerd[1448]: 2025-05-13 00:09:50.865 [INFO][2912] ipam/ipam.go 155: Attempting to load block cidr=192.168.37.0/26 host="10.0.0.19" May 13 00:09:50.901637 containerd[1448]: 2025-05-13 00:09:50.869 [INFO][2912] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.37.0/26 host="10.0.0.19" May 13 00:09:50.901637 containerd[1448]: 2025-05-13 00:09:50.869 [INFO][2912] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.37.0/26 handle="k8s-pod-network.db3ea1a1435680b1d8e2a61173d5c859ea2dc5721c256526d507a9b24b315921" host="10.0.0.19" May 13 00:09:50.901637 containerd[1448]: 2025-05-13 00:09:50.871 [INFO][2912] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.db3ea1a1435680b1d8e2a61173d5c859ea2dc5721c256526d507a9b24b315921 May 13 00:09:50.901637 containerd[1448]: 2025-05-13 00:09:50.875 [INFO][2912] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.37.0/26 handle="k8s-pod-network.db3ea1a1435680b1d8e2a61173d5c859ea2dc5721c256526d507a9b24b315921" host="10.0.0.19" May 13 00:09:50.901637 containerd[1448]: 2025-05-13 00:09:50.880 [INFO][2912] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.37.2/26] block=192.168.37.0/26 handle="k8s-pod-network.db3ea1a1435680b1d8e2a61173d5c859ea2dc5721c256526d507a9b24b315921" host="10.0.0.19" May 13 00:09:50.901637 containerd[1448]: 2025-05-13 00:09:50.880 [INFO][2912] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.37.2/26] handle="k8s-pod-network.db3ea1a1435680b1d8e2a61173d5c859ea2dc5721c256526d507a9b24b315921" host="10.0.0.19" May 13 00:09:50.901637 containerd[1448]: 2025-05-13 00:09:50.880 [INFO][2912] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
May 13 00:09:50.901637 containerd[1448]: 2025-05-13 00:09:50.880 [INFO][2912] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.37.2/26] IPv6=[] ContainerID="db3ea1a1435680b1d8e2a61173d5c859ea2dc5721c256526d507a9b24b315921" HandleID="k8s-pod-network.db3ea1a1435680b1d8e2a61173d5c859ea2dc5721c256526d507a9b24b315921" Workload="10.0.0.19-k8s-nfs--server--provisioner--0-eth0" May 13 00:09:50.902617 containerd[1448]: 2025-05-13 00:09:50.882 [INFO][2898] cni-plugin/k8s.go 386: Populated endpoint ContainerID="db3ea1a1435680b1d8e2a61173d5c859ea2dc5721c256526d507a9b24b315921" Namespace="default" Pod="nfs-server-provisioner-0" WorkloadEndpoint="10.0.0.19-k8s-nfs--server--provisioner--0-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"10.0.0.19-k8s-nfs--server--provisioner--0-eth0", GenerateName:"nfs-server-provisioner-", Namespace:"default", SelfLink:"", UID:"ff769dbd-1435-418e-8ffd-70ebd8373c7a", ResourceVersion:"1045", Generation:0, CreationTimestamp:time.Date(2025, time.May, 13, 0, 9, 50, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app":"nfs-server-provisioner", "apps.kubernetes.io/pod-index":"0", "chart":"nfs-server-provisioner-1.8.0", "controller-revision-hash":"nfs-server-provisioner-d5cbb7f57", "heritage":"Helm", "projectcalico.org/namespace":"default", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"nfs-server-provisioner", "release":"nfs-server-provisioner", "statefulset.kubernetes.io/pod-name":"nfs-server-provisioner-0"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"10.0.0.19", ContainerID:"", Pod:"nfs-server-provisioner-0", Endpoint:"eth0", ServiceAccountName:"nfs-server-provisioner", IPNetworks:[]string{"192.168.37.2/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.default", "ksa.default.nfs-server-provisioner"}, InterfaceName:"cali60e51b789ff", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"nfs", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x801, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"nfs-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x801, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"nlockmgr", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x8023, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"nlockmgr-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x8023, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"mountd", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x4e50, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"mountd-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x4e50, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"rquotad", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x36b, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"rquotad-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x36b, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"rpcbind", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x6f, HostPort:0x0, HostIP:""}, 
v3.WorkloadEndpointPort{Name:"rpcbind-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x6f, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"statd", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x296, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"statd-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x296, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} May 13 00:09:50.902617 containerd[1448]: 2025-05-13 00:09:50.882 [INFO][2898] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.37.2/32] ContainerID="db3ea1a1435680b1d8e2a61173d5c859ea2dc5721c256526d507a9b24b315921" Namespace="default" Pod="nfs-server-provisioner-0" WorkloadEndpoint="10.0.0.19-k8s-nfs--server--provisioner--0-eth0" May 13 00:09:50.902617 containerd[1448]: 2025-05-13 00:09:50.883 [INFO][2898] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali60e51b789ff ContainerID="db3ea1a1435680b1d8e2a61173d5c859ea2dc5721c256526d507a9b24b315921" Namespace="default" Pod="nfs-server-provisioner-0" WorkloadEndpoint="10.0.0.19-k8s-nfs--server--provisioner--0-eth0" May 13 00:09:50.902617 containerd[1448]: 2025-05-13 00:09:50.885 [INFO][2898] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="db3ea1a1435680b1d8e2a61173d5c859ea2dc5721c256526d507a9b24b315921" Namespace="default" Pod="nfs-server-provisioner-0" WorkloadEndpoint="10.0.0.19-k8s-nfs--server--provisioner--0-eth0" May 13 00:09:50.902822 containerd[1448]: 2025-05-13 00:09:50.885 [INFO][2898] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="db3ea1a1435680b1d8e2a61173d5c859ea2dc5721c256526d507a9b24b315921" Namespace="default" Pod="nfs-server-provisioner-0" WorkloadEndpoint="10.0.0.19-k8s-nfs--server--provisioner--0-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"10.0.0.19-k8s-nfs--server--provisioner--0-eth0", GenerateName:"nfs-server-provisioner-", Namespace:"default", SelfLink:"", UID:"ff769dbd-1435-418e-8ffd-70ebd8373c7a", ResourceVersion:"1045", Generation:0, CreationTimestamp:time.Date(2025, time.May, 13, 0, 9, 50, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app":"nfs-server-provisioner", "apps.kubernetes.io/pod-index":"0", "chart":"nfs-server-provisioner-1.8.0", "controller-revision-hash":"nfs-server-provisioner-d5cbb7f57", "heritage":"Helm", "projectcalico.org/namespace":"default", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"nfs-server-provisioner", "release":"nfs-server-provisioner", "statefulset.kubernetes.io/pod-name":"nfs-server-provisioner-0"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"10.0.0.19", ContainerID:"db3ea1a1435680b1d8e2a61173d5c859ea2dc5721c256526d507a9b24b315921", Pod:"nfs-server-provisioner-0", Endpoint:"eth0", ServiceAccountName:"nfs-server-provisioner", IPNetworks:[]string{"192.168.37.2/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.default", "ksa.default.nfs-server-provisioner"}, InterfaceName:"cali60e51b789ff", MAC:"ea:51:c6:7c:ba:9d", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"nfs", Protocol:numorstring.Protocol{Type:1, 
NumVal:0x0, StrVal:"TCP"}, Port:0x801, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"nfs-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x801, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"nlockmgr", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x8023, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"nlockmgr-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x8023, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"mountd", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x4e50, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"mountd-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x4e50, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"rquotad", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x36b, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"rquotad-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x36b, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"rpcbind", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x6f, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"rpcbind-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x6f, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"statd", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x296, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"statd-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x296, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} May 13 00:09:50.902822 containerd[1448]: 2025-05-13 00:09:50.899 [INFO][2898] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="db3ea1a1435680b1d8e2a61173d5c859ea2dc5721c256526d507a9b24b315921" Namespace="default" Pod="nfs-server-provisioner-0" WorkloadEndpoint="10.0.0.19-k8s-nfs--server--provisioner--0-eth0" May 13 00:09:50.918490 containerd[1448]: time="2025-05-13T00:09:50.918399039Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 13 00:09:50.918490 containerd[1448]: time="2025-05-13T00:09:50.918458935Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 13 00:09:50.918490 containerd[1448]: time="2025-05-13T00:09:50.918474859Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 13 00:09:50.918727 containerd[1448]: time="2025-05-13T00:09:50.918545398Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 13 00:09:50.941415 systemd[1]: Started cri-containerd-db3ea1a1435680b1d8e2a61173d5c859ea2dc5721c256526d507a9b24b315921.scope - libcontainer container db3ea1a1435680b1d8e2a61173d5c859ea2dc5721c256526d507a9b24b315921. 
May 13 00:09:50.953747 systemd-resolved[1312]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address May 13 00:09:51.005744 containerd[1448]: time="2025-05-13T00:09:51.005644887Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nfs-server-provisioner-0,Uid:ff769dbd-1435-418e-8ffd-70ebd8373c7a,Namespace:default,Attempt:0,} returns sandbox id \"db3ea1a1435680b1d8e2a61173d5c859ea2dc5721c256526d507a9b24b315921\"" May 13 00:09:51.008812 containerd[1448]: time="2025-05-13T00:09:51.008734556Z" level=info msg="PullImage \"registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8\"" May 13 00:09:51.266312 kubelet[1743]: E0513 00:09:51.266177 1743 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 13 00:09:51.277734 kubelet[1743]: E0513 00:09:51.277695 1743 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 13 00:09:52.278464 kubelet[1743]: E0513 00:09:52.278422 1743 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 13 00:09:52.772947 systemd-networkd[1389]: cali60e51b789ff: Gained IPv6LL May 13 00:09:53.279124 kubelet[1743]: E0513 00:09:53.279081 1743 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 13 00:09:54.280018 kubelet[1743]: E0513 00:09:54.279957 1743 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 13 00:09:54.388324 containerd[1448]: time="2025-05-13T00:09:54.388274747Z" level=info msg="StopPodSandbox for \"fc49ebaf6fe68955e0a8176b17f613f41c652a0818da43572193f4f627048bf3\"" May 13 00:09:54.479380 containerd[1448]: 2025-05-13 00:09:54.441 [INFO][2999] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="fc49ebaf6fe68955e0a8176b17f613f41c652a0818da43572193f4f627048bf3" May 13 00:09:54.479380 containerd[1448]: 2025-05-13 00:09:54.441 [INFO][2999] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="fc49ebaf6fe68955e0a8176b17f613f41c652a0818da43572193f4f627048bf3" iface="eth0" netns="/var/run/netns/cni-e1b77d6a-3ccd-864b-6396-702b6b5b13a1" May 13 00:09:54.479380 containerd[1448]: 2025-05-13 00:09:54.441 [INFO][2999] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="fc49ebaf6fe68955e0a8176b17f613f41c652a0818da43572193f4f627048bf3" iface="eth0" netns="/var/run/netns/cni-e1b77d6a-3ccd-864b-6396-702b6b5b13a1" May 13 00:09:54.479380 containerd[1448]: 2025-05-13 00:09:54.441 [INFO][2999] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="fc49ebaf6fe68955e0a8176b17f613f41c652a0818da43572193f4f627048bf3" iface="eth0" netns="/var/run/netns/cni-e1b77d6a-3ccd-864b-6396-702b6b5b13a1" May 13 00:09:54.479380 containerd[1448]: 2025-05-13 00:09:54.441 [INFO][2999] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="fc49ebaf6fe68955e0a8176b17f613f41c652a0818da43572193f4f627048bf3" May 13 00:09:54.479380 containerd[1448]: 2025-05-13 00:09:54.441 [INFO][2999] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="fc49ebaf6fe68955e0a8176b17f613f41c652a0818da43572193f4f627048bf3" May 13 00:09:54.479380 containerd[1448]: 2025-05-13 00:09:54.459 [INFO][3009] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="fc49ebaf6fe68955e0a8176b17f613f41c652a0818da43572193f4f627048bf3" HandleID="k8s-pod-network.fc49ebaf6fe68955e0a8176b17f613f41c652a0818da43572193f4f627048bf3" Workload="10.0.0.19-k8s-csi--node--driver--lgq75-eth0" May 13 00:09:54.479380 containerd[1448]: 2025-05-13 00:09:54.460 [INFO][3009] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 13 00:09:54.479380 containerd[1448]: 2025-05-13 00:09:54.460 [INFO][3009] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 13 00:09:54.479380 containerd[1448]: 2025-05-13 00:09:54.473 [WARNING][3009] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="fc49ebaf6fe68955e0a8176b17f613f41c652a0818da43572193f4f627048bf3" HandleID="k8s-pod-network.fc49ebaf6fe68955e0a8176b17f613f41c652a0818da43572193f4f627048bf3" Workload="10.0.0.19-k8s-csi--node--driver--lgq75-eth0" May 13 00:09:54.479380 containerd[1448]: 2025-05-13 00:09:54.473 [INFO][3009] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="fc49ebaf6fe68955e0a8176b17f613f41c652a0818da43572193f4f627048bf3" HandleID="k8s-pod-network.fc49ebaf6fe68955e0a8176b17f613f41c652a0818da43572193f4f627048bf3" Workload="10.0.0.19-k8s-csi--node--driver--lgq75-eth0" May 13 00:09:54.479380 containerd[1448]: 2025-05-13 00:09:54.475 [INFO][3009] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 13 00:09:54.479380 containerd[1448]: 2025-05-13 00:09:54.477 [INFO][2999] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="fc49ebaf6fe68955e0a8176b17f613f41c652a0818da43572193f4f627048bf3" May 13 00:09:54.481461 containerd[1448]: time="2025-05-13T00:09:54.481255393Z" level=info msg="TearDown network for sandbox \"fc49ebaf6fe68955e0a8176b17f613f41c652a0818da43572193f4f627048bf3\" successfully" May 13 00:09:54.481461 containerd[1448]: time="2025-05-13T00:09:54.481297601Z" level=info msg="StopPodSandbox for \"fc49ebaf6fe68955e0a8176b17f613f41c652a0818da43572193f4f627048bf3\" returns successfully" May 13 00:09:54.482649 containerd[1448]: time="2025-05-13T00:09:54.482614719Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-lgq75,Uid:4939c0f9-f198-4e87-9dc5-adbf021d03cf,Namespace:calico-system,Attempt:1,}" May 13 00:09:54.483389 systemd[1]: run-netns-cni\x2de1b77d6a\x2d3ccd\x2d864b\x2d6396\x2d702b6b5b13a1.mount: Deactivated successfully. 
May 13 00:09:54.631982 systemd-networkd[1389]: cali56f36fb9d6b: Link UP May 13 00:09:54.632908 systemd-networkd[1389]: cali56f36fb9d6b: Gained carrier May 13 00:09:54.647762 containerd[1448]: 2025-05-13 00:09:54.543 [INFO][3017] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {10.0.0.19-k8s-csi--node--driver--lgq75-eth0 csi-node-driver- calico-system 4939c0f9-f198-4e87-9dc5-adbf021d03cf 1071 0 2025-05-13 00:09:31 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:5bcd8f69 k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s 10.0.0.19 csi-node-driver-lgq75 eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] cali56f36fb9d6b [] []}} ContainerID="cd4393072b413c0b057d3c8b2fd0eadfad028fb7959618452057ccaa9dd676c9" Namespace="calico-system" Pod="csi-node-driver-lgq75" WorkloadEndpoint="10.0.0.19-k8s-csi--node--driver--lgq75-" May 13 00:09:54.647762 containerd[1448]: 2025-05-13 00:09:54.543 [INFO][3017] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="cd4393072b413c0b057d3c8b2fd0eadfad028fb7959618452057ccaa9dd676c9" Namespace="calico-system" Pod="csi-node-driver-lgq75" WorkloadEndpoint="10.0.0.19-k8s-csi--node--driver--lgq75-eth0" May 13 00:09:54.647762 containerd[1448]: 2025-05-13 00:09:54.578 [INFO][3031] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="cd4393072b413c0b057d3c8b2fd0eadfad028fb7959618452057ccaa9dd676c9" HandleID="k8s-pod-network.cd4393072b413c0b057d3c8b2fd0eadfad028fb7959618452057ccaa9dd676c9" Workload="10.0.0.19-k8s-csi--node--driver--lgq75-eth0" May 13 00:09:54.647762 containerd[1448]: 2025-05-13 00:09:54.594 [INFO][3031] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="cd4393072b413c0b057d3c8b2fd0eadfad028fb7959618452057ccaa9dd676c9" HandleID="k8s-pod-network.cd4393072b413c0b057d3c8b2fd0eadfad028fb7959618452057ccaa9dd676c9" Workload="10.0.0.19-k8s-csi--node--driver--lgq75-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x4000360ad0), Attrs:map[string]string{"namespace":"calico-system", "node":"10.0.0.19", "pod":"csi-node-driver-lgq75", "timestamp":"2025-05-13 00:09:54.578788476 +0000 UTC"}, Hostname:"10.0.0.19", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} May 13 00:09:54.647762 containerd[1448]: 2025-05-13 00:09:54.594 [INFO][3031] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 13 00:09:54.647762 containerd[1448]: 2025-05-13 00:09:54.594 [INFO][3031] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
May 13 00:09:54.647762 containerd[1448]: 2025-05-13 00:09:54.594 [INFO][3031] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host '10.0.0.19' May 13 00:09:54.647762 containerd[1448]: 2025-05-13 00:09:54.598 [INFO][3031] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.cd4393072b413c0b057d3c8b2fd0eadfad028fb7959618452057ccaa9dd676c9" host="10.0.0.19" May 13 00:09:54.647762 containerd[1448]: 2025-05-13 00:09:54.602 [INFO][3031] ipam/ipam.go 372: Looking up existing affinities for host host="10.0.0.19" May 13 00:09:54.647762 containerd[1448]: 2025-05-13 00:09:54.607 [INFO][3031] ipam/ipam.go 489: Trying affinity for 192.168.37.0/26 host="10.0.0.19" May 13 00:09:54.647762 containerd[1448]: 2025-05-13 00:09:54.609 [INFO][3031] ipam/ipam.go 155: Attempting to load block cidr=192.168.37.0/26 host="10.0.0.19" May 13 00:09:54.647762 containerd[1448]: 2025-05-13 00:09:54.612 [INFO][3031] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.37.0/26 host="10.0.0.19" May 13 00:09:54.647762 containerd[1448]: 2025-05-13 00:09:54.612 [INFO][3031] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.37.0/26 handle="k8s-pod-network.cd4393072b413c0b057d3c8b2fd0eadfad028fb7959618452057ccaa9dd676c9" host="10.0.0.19" May 13 00:09:54.647762 containerd[1448]: 2025-05-13 00:09:54.614 [INFO][3031] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.cd4393072b413c0b057d3c8b2fd0eadfad028fb7959618452057ccaa9dd676c9 May 13 00:09:54.647762 containerd[1448]: 2025-05-13 00:09:54.617 [INFO][3031] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.37.0/26 handle="k8s-pod-network.cd4393072b413c0b057d3c8b2fd0eadfad028fb7959618452057ccaa9dd676c9" host="10.0.0.19" May 13 00:09:54.647762 containerd[1448]: 2025-05-13 00:09:54.627 [INFO][3031] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.37.3/26] block=192.168.37.0/26 handle="k8s-pod-network.cd4393072b413c0b057d3c8b2fd0eadfad028fb7959618452057ccaa9dd676c9" host="10.0.0.19" May 13 00:09:54.647762 containerd[1448]: 2025-05-13 00:09:54.627 [INFO][3031] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.37.3/26] handle="k8s-pod-network.cd4393072b413c0b057d3c8b2fd0eadfad028fb7959618452057ccaa9dd676c9" host="10.0.0.19" May 13 00:09:54.647762 containerd[1448]: 2025-05-13 00:09:54.627 [INFO][3031] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
May 13 00:09:54.647762 containerd[1448]: 2025-05-13 00:09:54.627 [INFO][3031] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.37.3/26] IPv6=[] ContainerID="cd4393072b413c0b057d3c8b2fd0eadfad028fb7959618452057ccaa9dd676c9" HandleID="k8s-pod-network.cd4393072b413c0b057d3c8b2fd0eadfad028fb7959618452057ccaa9dd676c9" Workload="10.0.0.19-k8s-csi--node--driver--lgq75-eth0" May 13 00:09:54.649544 containerd[1448]: 2025-05-13 00:09:54.629 [INFO][3017] cni-plugin/k8s.go 386: Populated endpoint ContainerID="cd4393072b413c0b057d3c8b2fd0eadfad028fb7959618452057ccaa9dd676c9" Namespace="calico-system" Pod="csi-node-driver-lgq75" WorkloadEndpoint="10.0.0.19-k8s-csi--node--driver--lgq75-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"10.0.0.19-k8s-csi--node--driver--lgq75-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"4939c0f9-f198-4e87-9dc5-adbf021d03cf", ResourceVersion:"1071", Generation:0, CreationTimestamp:time.Date(2025, time.May, 13, 0, 9, 31, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"5bcd8f69", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"10.0.0.19", ContainerID:"", Pod:"csi-node-driver-lgq75", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.37.3/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali56f36fb9d6b", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} May 13 00:09:54.649544 containerd[1448]: 2025-05-13 00:09:54.629 [INFO][3017] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.37.3/32] ContainerID="cd4393072b413c0b057d3c8b2fd0eadfad028fb7959618452057ccaa9dd676c9" Namespace="calico-system" Pod="csi-node-driver-lgq75" WorkloadEndpoint="10.0.0.19-k8s-csi--node--driver--lgq75-eth0" May 13 00:09:54.649544 containerd[1448]: 2025-05-13 00:09:54.629 [INFO][3017] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali56f36fb9d6b ContainerID="cd4393072b413c0b057d3c8b2fd0eadfad028fb7959618452057ccaa9dd676c9" Namespace="calico-system" Pod="csi-node-driver-lgq75" WorkloadEndpoint="10.0.0.19-k8s-csi--node--driver--lgq75-eth0" May 13 00:09:54.649544 containerd[1448]: 2025-05-13 00:09:54.634 [INFO][3017] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="cd4393072b413c0b057d3c8b2fd0eadfad028fb7959618452057ccaa9dd676c9" Namespace="calico-system" Pod="csi-node-driver-lgq75" WorkloadEndpoint="10.0.0.19-k8s-csi--node--driver--lgq75-eth0" May 13 00:09:54.649544 containerd[1448]: 2025-05-13 00:09:54.634 [INFO][3017] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="cd4393072b413c0b057d3c8b2fd0eadfad028fb7959618452057ccaa9dd676c9" Namespace="calico-system" Pod="csi-node-driver-lgq75" WorkloadEndpoint="10.0.0.19-k8s-csi--node--driver--lgq75-eth0" 
endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"10.0.0.19-k8s-csi--node--driver--lgq75-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"4939c0f9-f198-4e87-9dc5-adbf021d03cf", ResourceVersion:"1071", Generation:0, CreationTimestamp:time.Date(2025, time.May, 13, 0, 9, 31, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"5bcd8f69", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"10.0.0.19", ContainerID:"cd4393072b413c0b057d3c8b2fd0eadfad028fb7959618452057ccaa9dd676c9", Pod:"csi-node-driver-lgq75", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.37.3/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali56f36fb9d6b", MAC:"c6:51:ff:38:42:07", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} May 13 00:09:54.649544 containerd[1448]: 2025-05-13 00:09:54.644 [INFO][3017] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="cd4393072b413c0b057d3c8b2fd0eadfad028fb7959618452057ccaa9dd676c9" Namespace="calico-system" Pod="csi-node-driver-lgq75" WorkloadEndpoint="10.0.0.19-k8s-csi--node--driver--lgq75-eth0" May 13 00:09:54.704574 containerd[1448]: time="2025-05-13T00:09:54.704130731Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 13 00:09:54.704574 containerd[1448]: time="2025-05-13T00:09:54.704211107Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 13 00:09:54.704574 containerd[1448]: time="2025-05-13T00:09:54.704227831Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 13 00:09:54.704574 containerd[1448]: time="2025-05-13T00:09:54.704314609Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 13 00:09:54.728371 systemd[1]: Started cri-containerd-cd4393072b413c0b057d3c8b2fd0eadfad028fb7959618452057ccaa9dd676c9.scope - libcontainer container cd4393072b413c0b057d3c8b2fd0eadfad028fb7959618452057ccaa9dd676c9. May 13 00:09:54.738979 systemd-resolved[1312]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address May 13 00:09:54.751094 containerd[1448]: time="2025-05-13T00:09:54.751046203Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-lgq75,Uid:4939c0f9-f198-4e87-9dc5-adbf021d03cf,Namespace:calico-system,Attempt:1,} returns sandbox id \"cd4393072b413c0b057d3c8b2fd0eadfad028fb7959618452057ccaa9dd676c9\"" May 13 00:09:55.130665 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount543976623.mount: Deactivated successfully. 
May 13 00:09:55.280597 kubelet[1743]: E0513 00:09:55.280551 1743 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 13 00:09:56.100995 systemd-networkd[1389]: cali56f36fb9d6b: Gained IPv6LL May 13 00:09:56.212920 containerd[1448]: time="2025-05-13T00:09:56.212641846Z" level=info msg="ImageCreate event name:\"registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 13 00:09:56.213629 containerd[1448]: time="2025-05-13T00:09:56.213575819Z" level=info msg="stop pulling image registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8: active requests=0, bytes read=87373625" May 13 00:09:56.214238 containerd[1448]: time="2025-05-13T00:09:56.214177410Z" level=info msg="ImageCreate event name:\"sha256:5a42a519e0a8cf95c3c5f18f767c58c8c8b072aaea0a26e5e47a6f206c7df685\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 13 00:09:56.219691 containerd[1448]: time="2025-05-13T00:09:56.218085413Z" level=info msg="ImageCreate event name:\"registry.k8s.io/sig-storage/nfs-provisioner@sha256:c825f3d5e28bde099bd7a3daace28772d412c9157ad47fa752a9ad0baafc118d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 13 00:09:56.219691 containerd[1448]: time="2025-05-13T00:09:56.219159252Z" level=info msg="Pulled image \"registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8\" with image id \"sha256:5a42a519e0a8cf95c3c5f18f767c58c8c8b072aaea0a26e5e47a6f206c7df685\", repo tag \"registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8\", repo digest \"registry.k8s.io/sig-storage/nfs-provisioner@sha256:c825f3d5e28bde099bd7a3daace28772d412c9157ad47fa752a9ad0baafc118d\", size \"87371201\" in 5.210392368s" May 13 00:09:56.219691 containerd[1448]: time="2025-05-13T00:09:56.219207180Z" level=info msg="PullImage \"registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8\" returns image reference \"sha256:5a42a519e0a8cf95c3c5f18f767c58c8c8b072aaea0a26e5e47a6f206c7df685\"" May 13 00:09:56.222755 containerd[1448]: time="2025-05-13T00:09:56.222694065Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.29.3\"" May 13 00:09:56.224020 containerd[1448]: time="2025-05-13T00:09:56.223981303Z" level=info msg="CreateContainer within sandbox \"db3ea1a1435680b1d8e2a61173d5c859ea2dc5721c256526d507a9b24b315921\" for container &ContainerMetadata{Name:nfs-server-provisioner,Attempt:0,}" May 13 00:09:56.236989 containerd[1448]: time="2025-05-13T00:09:56.236930418Z" level=info msg="CreateContainer within sandbox \"db3ea1a1435680b1d8e2a61173d5c859ea2dc5721c256526d507a9b24b315921\" for &ContainerMetadata{Name:nfs-server-provisioner,Attempt:0,} returns container id \"22958c858985d873e76859763a183cc6fae2253659da8755d8160e70595c7a86\"" May 13 00:09:56.237685 containerd[1448]: time="2025-05-13T00:09:56.237656633Z" level=info msg="StartContainer for \"22958c858985d873e76859763a183cc6fae2253659da8755d8160e70595c7a86\"" May 13 00:09:56.263400 systemd[1]: Started cri-containerd-22958c858985d873e76859763a183cc6fae2253659da8755d8160e70595c7a86.scope - libcontainer container 22958c858985d873e76859763a183cc6fae2253659da8755d8160e70595c7a86. 
May 13 00:09:56.281659 kubelet[1743]: E0513 00:09:56.281618 1743 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 13 00:09:56.283167 containerd[1448]: time="2025-05-13T00:09:56.283133883Z" level=info msg="StartContainer for \"22958c858985d873e76859763a183cc6fae2253659da8755d8160e70595c7a86\" returns successfully" May 13 00:09:56.486645 kubelet[1743]: I0513 00:09:56.486335 1743 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="default/nfs-server-provisioner-0" podStartSLOduration=1.272629068 podStartE2EDuration="6.486321342s" podCreationTimestamp="2025-05-13 00:09:50 +0000 UTC" firstStartedPulling="2025-05-13 00:09:51.008530184 +0000 UTC m=+21.152506626" lastFinishedPulling="2025-05-13 00:09:56.222222418 +0000 UTC m=+26.366198900" observedRunningTime="2025-05-13 00:09:56.485830091 +0000 UTC m=+26.629806533" watchObservedRunningTime="2025-05-13 00:09:56.486321342 +0000 UTC m=+26.630297744" May 13 00:09:57.204918 containerd[1448]: time="2025-05-13T00:09:57.204855235Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi:v3.29.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 13 00:09:57.205811 containerd[1448]: time="2025-05-13T00:09:57.205778635Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.29.3: active requests=0, bytes read=7474935" May 13 00:09:57.206903 containerd[1448]: time="2025-05-13T00:09:57.206867784Z" level=info msg="ImageCreate event name:\"sha256:15faf29e8b518d846c91c15785ff89e783d356ea0f2b22826f47a556ea32645b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 13 00:09:57.219255 containerd[1448]: time="2025-05-13T00:09:57.219207364Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi@sha256:72455a36febc7c56ec8881007f4805caed5764026a0694e4f86a2503209b2d31\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 13 00:09:57.220135 containerd[1448]: time="2025-05-13T00:09:57.219796546Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/csi:v3.29.3\" with image id \"sha256:15faf29e8b518d846c91c15785ff89e783d356ea0f2b22826f47a556ea32645b\", repo tag \"ghcr.io/flatcar/calico/csi:v3.29.3\", repo digest \"ghcr.io/flatcar/calico/csi@sha256:72455a36febc7c56ec8881007f4805caed5764026a0694e4f86a2503209b2d31\", size \"8844117\" in 997.038869ms" May 13 00:09:57.220135 containerd[1448]: time="2025-05-13T00:09:57.219832512Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.29.3\" returns image reference \"sha256:15faf29e8b518d846c91c15785ff89e783d356ea0f2b22826f47a556ea32645b\"" May 13 00:09:57.221882 containerd[1448]: time="2025-05-13T00:09:57.221661749Z" level=info msg="CreateContainer within sandbox \"cd4393072b413c0b057d3c8b2fd0eadfad028fb7959618452057ccaa9dd676c9\" for container &ContainerMetadata{Name:calico-csi,Attempt:0,}" May 13 00:09:57.233630 containerd[1448]: time="2025-05-13T00:09:57.233584936Z" level=info msg="CreateContainer within sandbox \"cd4393072b413c0b057d3c8b2fd0eadfad028fb7959618452057ccaa9dd676c9\" for &ContainerMetadata{Name:calico-csi,Attempt:0,} returns container id \"e83e03b74f7e97c09a1956d1bb0cc7c4e637f54d0716e508b49f7a6b62532807\"" May 13 00:09:57.234548 containerd[1448]: time="2025-05-13T00:09:57.234285258Z" level=info msg="StartContainer for \"e83e03b74f7e97c09a1956d1bb0cc7c4e637f54d0716e508b49f7a6b62532807\"" May 13 00:09:57.254043 systemd[1]: run-containerd-runc-k8s.io-e83e03b74f7e97c09a1956d1bb0cc7c4e637f54d0716e508b49f7a6b62532807-runc.sNcSNw.mount: Deactivated 
successfully. May 13 00:09:57.264380 systemd[1]: Started cri-containerd-e83e03b74f7e97c09a1956d1bb0cc7c4e637f54d0716e508b49f7a6b62532807.scope - libcontainer container e83e03b74f7e97c09a1956d1bb0cc7c4e637f54d0716e508b49f7a6b62532807. May 13 00:09:57.282269 kubelet[1743]: E0513 00:09:57.282178 1743 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 13 00:09:57.342804 containerd[1448]: time="2025-05-13T00:09:57.342305587Z" level=info msg="StartContainer for \"e83e03b74f7e97c09a1956d1bb0cc7c4e637f54d0716e508b49f7a6b62532807\" returns successfully" May 13 00:09:57.343644 containerd[1448]: time="2025-05-13T00:09:57.343576688Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.3\"" May 13 00:09:58.236517 containerd[1448]: time="2025-05-13T00:09:58.236337234Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 13 00:09:58.237295 containerd[1448]: time="2025-05-13T00:09:58.237204575Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.29.3: active requests=0, bytes read=13124299" May 13 00:09:58.238287 containerd[1448]: time="2025-05-13T00:09:58.238253945Z" level=info msg="ImageCreate event name:\"sha256:a91b1f00752edc175f270a01b33683fa80818734aa2274388785eaf3364315dc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 13 00:09:58.240596 containerd[1448]: time="2025-05-13T00:09:58.240547358Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar@sha256:3f15090a9bb45773d1fd019455ec3d3f3746f3287c35d8013e497b38d8237324\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 13 00:09:58.241320 containerd[1448]: time="2025-05-13T00:09:58.241279837Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.3\" with image id \"sha256:a91b1f00752edc175f270a01b33683fa80818734aa2274388785eaf3364315dc\", repo tag \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.3\", repo digest \"ghcr.io/flatcar/calico/node-driver-registrar@sha256:3f15090a9bb45773d1fd019455ec3d3f3746f3287c35d8013e497b38d8237324\", size \"14493433\" in 897.668744ms" May 13 00:09:58.241320 containerd[1448]: time="2025-05-13T00:09:58.241317883Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.3\" returns image reference \"sha256:a91b1f00752edc175f270a01b33683fa80818734aa2274388785eaf3364315dc\"" May 13 00:09:58.243230 containerd[1448]: time="2025-05-13T00:09:58.243201190Z" level=info msg="CreateContainer within sandbox \"cd4393072b413c0b057d3c8b2fd0eadfad028fb7959618452057ccaa9dd676c9\" for container &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,}" May 13 00:09:58.259820 containerd[1448]: time="2025-05-13T00:09:58.259732437Z" level=info msg="CreateContainer within sandbox \"cd4393072b413c0b057d3c8b2fd0eadfad028fb7959618452057ccaa9dd676c9\" for &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,} returns container id \"59d052cda60729702428ec9b2a0cc9b659791c76a2d47bfb0d3b9dd97f1f0f9b\"" May 13 00:09:58.260523 containerd[1448]: time="2025-05-13T00:09:58.260275205Z" level=info msg="StartContainer for \"59d052cda60729702428ec9b2a0cc9b659791c76a2d47bfb0d3b9dd97f1f0f9b\"" May 13 00:09:58.282773 kubelet[1743]: E0513 00:09:58.282533 1743 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 13 00:09:58.297355 
systemd[1]: Started cri-containerd-59d052cda60729702428ec9b2a0cc9b659791c76a2d47bfb0d3b9dd97f1f0f9b.scope - libcontainer container 59d052cda60729702428ec9b2a0cc9b659791c76a2d47bfb0d3b9dd97f1f0f9b. May 13 00:09:58.320478 containerd[1448]: time="2025-05-13T00:09:58.320426783Z" level=info msg="StartContainer for \"59d052cda60729702428ec9b2a0cc9b659791c76a2d47bfb0d3b9dd97f1f0f9b\" returns successfully" May 13 00:09:58.418084 kubelet[1743]: I0513 00:09:58.418035 1743 csi_plugin.go:100] kubernetes.io/csi: Trying to validate a new CSI Driver with name: csi.tigera.io endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock versions: 1.0.0 May 13 00:09:58.421645 kubelet[1743]: I0513 00:09:58.421621 1743 csi_plugin.go:113] kubernetes.io/csi: Register new plugin with name: csi.tigera.io at endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock May 13 00:09:58.496361 kubelet[1743]: I0513 00:09:58.496168 1743 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/csi-node-driver-lgq75" podStartSLOduration=24.00634484 podStartE2EDuration="27.496151788s" podCreationTimestamp="2025-05-13 00:09:31 +0000 UTC" firstStartedPulling="2025-05-13 00:09:54.752243495 +0000 UTC m=+24.896219937" lastFinishedPulling="2025-05-13 00:09:58.242050443 +0000 UTC m=+28.386026885" observedRunningTime="2025-05-13 00:09:58.495694273 +0000 UTC m=+28.639670715" watchObservedRunningTime="2025-05-13 00:09:58.496151788 +0000 UTC m=+28.640128230" May 13 00:09:59.283195 kubelet[1743]: E0513 00:09:59.283146 1743 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 13 00:10:00.284095 kubelet[1743]: E0513 00:10:00.284051 1743 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 13 00:10:01.284895 kubelet[1743]: E0513 00:10:01.284855 1743 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 13 00:10:02.285774 kubelet[1743]: E0513 00:10:02.285711 1743 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 13 00:10:03.059824 systemd[1]: Created slice kubepods-besteffort-pod87889253_a387_4981_92de_981d06c2c00e.slice - libcontainer container kubepods-besteffort-pod87889253_a387_4981_92de_981d06c2c00e.slice. May 13 00:10:03.263174 kubelet[1743]: I0513 00:10:03.263080 1743 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-6374da73-77bc-4cd4-b96e-54138b5353ea\" (UniqueName: \"kubernetes.io/nfs/87889253-a387-4981-92de-981d06c2c00e-pvc-6374da73-77bc-4cd4-b96e-54138b5353ea\") pod \"test-pod-1\" (UID: \"87889253-a387-4981-92de-981d06c2c00e\") " pod="default/test-pod-1" May 13 00:10:03.263174 kubelet[1743]: I0513 00:10:03.263134 1743 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qmhxz\" (UniqueName: \"kubernetes.io/projected/87889253-a387-4981-92de-981d06c2c00e-kube-api-access-qmhxz\") pod \"test-pod-1\" (UID: \"87889253-a387-4981-92de-981d06c2c00e\") " pod="default/test-pod-1" May 13 00:10:03.286897 kubelet[1743]: E0513 00:10:03.286835 1743 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 13 00:10:03.387209 kernel: FS-Cache: Loaded May 13 00:10:03.411642 kernel: RPC: Registered named UNIX socket transport module. 
May 13 00:10:03.411743 kernel: RPC: Registered udp transport module. May 13 00:10:03.411757 kernel: RPC: Registered tcp transport module. May 13 00:10:03.412813 kernel: RPC: Registered tcp-with-tls transport module. May 13 00:10:03.413447 kernel: RPC: Registered tcp NFSv4.1 backchannel transport module. May 13 00:10:03.593427 kernel: NFS: Registering the id_resolver key type May 13 00:10:03.593545 kernel: Key type id_resolver registered May 13 00:10:03.593584 kernel: Key type id_legacy registered May 13 00:10:03.621003 nfsidmap[3300]: nss_getpwnam: name 'root@nfs-server-provisioner.default.svc.cluster.local' does not map into domain 'localdomain' May 13 00:10:03.624432 nfsidmap[3303]: nss_name_to_gid: name 'root@nfs-server-provisioner.default.svc.cluster.local' does not map into domain 'localdomain' May 13 00:10:03.663260 containerd[1448]: time="2025-05-13T00:10:03.662834621Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:test-pod-1,Uid:87889253-a387-4981-92de-981d06c2c00e,Namespace:default,Attempt:0,}" May 13 00:10:03.781830 systemd-networkd[1389]: cali5ec59c6bf6e: Link UP May 13 00:10:03.782067 systemd-networkd[1389]: cali5ec59c6bf6e: Gained carrier May 13 00:10:03.792295 containerd[1448]: 2025-05-13 00:10:03.703 [INFO][3306] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {10.0.0.19-k8s-test--pod--1-eth0 default 87889253-a387-4981-92de-981d06c2c00e 1121 0 2025-05-13 00:09:50 +0000 UTC map[projectcalico.org/namespace:default projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:default] map[] [] [] []} {k8s 10.0.0.19 test-pod-1 eth0 default [] [] [kns.default ksa.default.default] cali5ec59c6bf6e [] []}} ContainerID="c3099b1e30b3b4a87b17a2a9753303fbfb34ee518e1f63f1b4915d1f04cdbbde" Namespace="default" Pod="test-pod-1" WorkloadEndpoint="10.0.0.19-k8s-test--pod--1-" May 13 00:10:03.792295 containerd[1448]: 2025-05-13 00:10:03.704 [INFO][3306] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="c3099b1e30b3b4a87b17a2a9753303fbfb34ee518e1f63f1b4915d1f04cdbbde" Namespace="default" Pod="test-pod-1" WorkloadEndpoint="10.0.0.19-k8s-test--pod--1-eth0" May 13 00:10:03.792295 containerd[1448]: 2025-05-13 00:10:03.730 [INFO][3320] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="c3099b1e30b3b4a87b17a2a9753303fbfb34ee518e1f63f1b4915d1f04cdbbde" HandleID="k8s-pod-network.c3099b1e30b3b4a87b17a2a9753303fbfb34ee518e1f63f1b4915d1f04cdbbde" Workload="10.0.0.19-k8s-test--pod--1-eth0" May 13 00:10:03.792295 containerd[1448]: 2025-05-13 00:10:03.743 [INFO][3320] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="c3099b1e30b3b4a87b17a2a9753303fbfb34ee518e1f63f1b4915d1f04cdbbde" HandleID="k8s-pod-network.c3099b1e30b3b4a87b17a2a9753303fbfb34ee518e1f63f1b4915d1f04cdbbde" Workload="10.0.0.19-k8s-test--pod--1-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40003094e0), Attrs:map[string]string{"namespace":"default", "node":"10.0.0.19", "pod":"test-pod-1", "timestamp":"2025-05-13 00:10:03.730812024 +0000 UTC"}, Hostname:"10.0.0.19", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} May 13 00:10:03.792295 containerd[1448]: 2025-05-13 00:10:03.743 [INFO][3320] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. 
May 13 00:10:03.792295 containerd[1448]: 2025-05-13 00:10:03.743 [INFO][3320] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 13 00:10:03.792295 containerd[1448]: 2025-05-13 00:10:03.743 [INFO][3320] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host '10.0.0.19' May 13 00:10:03.792295 containerd[1448]: 2025-05-13 00:10:03.745 [INFO][3320] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.c3099b1e30b3b4a87b17a2a9753303fbfb34ee518e1f63f1b4915d1f04cdbbde" host="10.0.0.19" May 13 00:10:03.792295 containerd[1448]: 2025-05-13 00:10:03.749 [INFO][3320] ipam/ipam.go 372: Looking up existing affinities for host host="10.0.0.19" May 13 00:10:03.792295 containerd[1448]: 2025-05-13 00:10:03.753 [INFO][3320] ipam/ipam.go 489: Trying affinity for 192.168.37.0/26 host="10.0.0.19" May 13 00:10:03.792295 containerd[1448]: 2025-05-13 00:10:03.758 [INFO][3320] ipam/ipam.go 155: Attempting to load block cidr=192.168.37.0/26 host="10.0.0.19" May 13 00:10:03.792295 containerd[1448]: 2025-05-13 00:10:03.760 [INFO][3320] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.37.0/26 host="10.0.0.19" May 13 00:10:03.792295 containerd[1448]: 2025-05-13 00:10:03.761 [INFO][3320] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.37.0/26 handle="k8s-pod-network.c3099b1e30b3b4a87b17a2a9753303fbfb34ee518e1f63f1b4915d1f04cdbbde" host="10.0.0.19" May 13 00:10:03.792295 containerd[1448]: 2025-05-13 00:10:03.762 [INFO][3320] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.c3099b1e30b3b4a87b17a2a9753303fbfb34ee518e1f63f1b4915d1f04cdbbde May 13 00:10:03.792295 containerd[1448]: 2025-05-13 00:10:03.771 [INFO][3320] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.37.0/26 handle="k8s-pod-network.c3099b1e30b3b4a87b17a2a9753303fbfb34ee518e1f63f1b4915d1f04cdbbde" host="10.0.0.19" May 13 00:10:03.792295 containerd[1448]: 2025-05-13 00:10:03.777 [INFO][3320] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.37.4/26] block=192.168.37.0/26 handle="k8s-pod-network.c3099b1e30b3b4a87b17a2a9753303fbfb34ee518e1f63f1b4915d1f04cdbbde" host="10.0.0.19" May 13 00:10:03.792295 containerd[1448]: 2025-05-13 00:10:03.777 [INFO][3320] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.37.4/26] handle="k8s-pod-network.c3099b1e30b3b4a87b17a2a9753303fbfb34ee518e1f63f1b4915d1f04cdbbde" host="10.0.0.19" May 13 00:10:03.792295 containerd[1448]: 2025-05-13 00:10:03.777 [INFO][3320] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
May 13 00:10:03.792295 containerd[1448]: 2025-05-13 00:10:03.777 [INFO][3320] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.37.4/26] IPv6=[] ContainerID="c3099b1e30b3b4a87b17a2a9753303fbfb34ee518e1f63f1b4915d1f04cdbbde" HandleID="k8s-pod-network.c3099b1e30b3b4a87b17a2a9753303fbfb34ee518e1f63f1b4915d1f04cdbbde" Workload="10.0.0.19-k8s-test--pod--1-eth0" May 13 00:10:03.792295 containerd[1448]: 2025-05-13 00:10:03.779 [INFO][3306] cni-plugin/k8s.go 386: Populated endpoint ContainerID="c3099b1e30b3b4a87b17a2a9753303fbfb34ee518e1f63f1b4915d1f04cdbbde" Namespace="default" Pod="test-pod-1" WorkloadEndpoint="10.0.0.19-k8s-test--pod--1-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"10.0.0.19-k8s-test--pod--1-eth0", GenerateName:"", Namespace:"default", SelfLink:"", UID:"87889253-a387-4981-92de-981d06c2c00e", ResourceVersion:"1121", Generation:0, CreationTimestamp:time.Date(2025, time.May, 13, 0, 9, 50, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"projectcalico.org/namespace":"default", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"default"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"10.0.0.19", ContainerID:"", Pod:"test-pod-1", Endpoint:"eth0", ServiceAccountName:"default", IPNetworks:[]string{"192.168.37.4/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.default", "ksa.default.default"}, InterfaceName:"cali5ec59c6bf6e", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} May 13 00:10:03.792295 containerd[1448]: 2025-05-13 00:10:03.779 [INFO][3306] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.37.4/32] ContainerID="c3099b1e30b3b4a87b17a2a9753303fbfb34ee518e1f63f1b4915d1f04cdbbde" Namespace="default" Pod="test-pod-1" WorkloadEndpoint="10.0.0.19-k8s-test--pod--1-eth0" May 13 00:10:03.792967 containerd[1448]: 2025-05-13 00:10:03.779 [INFO][3306] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali5ec59c6bf6e ContainerID="c3099b1e30b3b4a87b17a2a9753303fbfb34ee518e1f63f1b4915d1f04cdbbde" Namespace="default" Pod="test-pod-1" WorkloadEndpoint="10.0.0.19-k8s-test--pod--1-eth0" May 13 00:10:03.792967 containerd[1448]: 2025-05-13 00:10:03.782 [INFO][3306] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="c3099b1e30b3b4a87b17a2a9753303fbfb34ee518e1f63f1b4915d1f04cdbbde" Namespace="default" Pod="test-pod-1" WorkloadEndpoint="10.0.0.19-k8s-test--pod--1-eth0" May 13 00:10:03.792967 containerd[1448]: 2025-05-13 00:10:03.782 [INFO][3306] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="c3099b1e30b3b4a87b17a2a9753303fbfb34ee518e1f63f1b4915d1f04cdbbde" Namespace="default" Pod="test-pod-1" WorkloadEndpoint="10.0.0.19-k8s-test--pod--1-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"10.0.0.19-k8s-test--pod--1-eth0", GenerateName:"", Namespace:"default", SelfLink:"", UID:"87889253-a387-4981-92de-981d06c2c00e", ResourceVersion:"1121", Generation:0, CreationTimestamp:time.Date(2025, time.May, 13, 0, 9, 50, 0, time.Local), DeletionTimestamp:, 
DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"projectcalico.org/namespace":"default", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"default"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"10.0.0.19", ContainerID:"c3099b1e30b3b4a87b17a2a9753303fbfb34ee518e1f63f1b4915d1f04cdbbde", Pod:"test-pod-1", Endpoint:"eth0", ServiceAccountName:"default", IPNetworks:[]string{"192.168.37.4/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.default", "ksa.default.default"}, InterfaceName:"cali5ec59c6bf6e", MAC:"8e:a4:21:bd:7a:d8", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} May 13 00:10:03.792967 containerd[1448]: 2025-05-13 00:10:03.790 [INFO][3306] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="c3099b1e30b3b4a87b17a2a9753303fbfb34ee518e1f63f1b4915d1f04cdbbde" Namespace="default" Pod="test-pod-1" WorkloadEndpoint="10.0.0.19-k8s-test--pod--1-eth0" May 13 00:10:03.830898 containerd[1448]: time="2025-05-13T00:10:03.830642896Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 13 00:10:03.830898 containerd[1448]: time="2025-05-13T00:10:03.830722506Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 13 00:10:03.830898 containerd[1448]: time="2025-05-13T00:10:03.830734227Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 13 00:10:03.830898 containerd[1448]: time="2025-05-13T00:10:03.830825078Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 13 00:10:03.852367 systemd[1]: Started cri-containerd-c3099b1e30b3b4a87b17a2a9753303fbfb34ee518e1f63f1b4915d1f04cdbbde.scope - libcontainer container c3099b1e30b3b4a87b17a2a9753303fbfb34ee518e1f63f1b4915d1f04cdbbde. 
May 13 00:10:03.863080 systemd-resolved[1312]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address May 13 00:10:03.878344 containerd[1448]: time="2025-05-13T00:10:03.878300227Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:test-pod-1,Uid:87889253-a387-4981-92de-981d06c2c00e,Namespace:default,Attempt:0,} returns sandbox id \"c3099b1e30b3b4a87b17a2a9753303fbfb34ee518e1f63f1b4915d1f04cdbbde\"" May 13 00:10:03.880024 containerd[1448]: time="2025-05-13T00:10:03.880001187Z" level=info msg="PullImage \"ghcr.io/flatcar/nginx:latest\"" May 13 00:10:04.120310 containerd[1448]: time="2025-05-13T00:10:04.120192553Z" level=info msg="ImageUpdate event name:\"ghcr.io/flatcar/nginx:latest\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 13 00:10:04.120849 containerd[1448]: time="2025-05-13T00:10:04.120816742Z" level=info msg="stop pulling image ghcr.io/flatcar/nginx:latest: active requests=0, bytes read=61" May 13 00:10:04.124873 containerd[1448]: time="2025-05-13T00:10:04.124534993Z" level=info msg="Pulled image \"ghcr.io/flatcar/nginx:latest\" with image id \"sha256:e8b1cb61bd96acc3ff3c695318c9cc691213d532eee3731d038af92816fcb5f4\", repo tag \"ghcr.io/flatcar/nginx:latest\", repo digest \"ghcr.io/flatcar/nginx@sha256:beabce8f1782671ba500ddff99dd260fbf9c5ec85fb9c3162e35a3c40bafd023\", size \"69948737\" in 244.407471ms" May 13 00:10:04.124873 containerd[1448]: time="2025-05-13T00:10:04.124573317Z" level=info msg="PullImage \"ghcr.io/flatcar/nginx:latest\" returns image reference \"sha256:e8b1cb61bd96acc3ff3c695318c9cc691213d532eee3731d038af92816fcb5f4\"" May 13 00:10:04.128814 containerd[1448]: time="2025-05-13T00:10:04.128783381Z" level=info msg="CreateContainer within sandbox \"c3099b1e30b3b4a87b17a2a9753303fbfb34ee518e1f63f1b4915d1f04cdbbde\" for container &ContainerMetadata{Name:test,Attempt:0,}" May 13 00:10:04.152603 containerd[1448]: time="2025-05-13T00:10:04.152533363Z" level=info msg="CreateContainer within sandbox \"c3099b1e30b3b4a87b17a2a9753303fbfb34ee518e1f63f1b4915d1f04cdbbde\" for &ContainerMetadata{Name:test,Attempt:0,} returns container id \"89994873e6925149dfb7efac17dacce68457140492e861c0d8480d346cde2bac\"" May 13 00:10:04.153094 containerd[1448]: time="2025-05-13T00:10:04.153040059Z" level=info msg="StartContainer for \"89994873e6925149dfb7efac17dacce68457140492e861c0d8480d346cde2bac\"" May 13 00:10:04.185380 systemd[1]: Started cri-containerd-89994873e6925149dfb7efac17dacce68457140492e861c0d8480d346cde2bac.scope - libcontainer container 89994873e6925149dfb7efac17dacce68457140492e861c0d8480d346cde2bac. 
May 13 00:10:04.205723 containerd[1448]: time="2025-05-13T00:10:04.205609181Z" level=info msg="StartContainer for \"89994873e6925149dfb7efac17dacce68457140492e861c0d8480d346cde2bac\" returns successfully" May 13 00:10:04.287218 kubelet[1743]: E0513 00:10:04.287164 1743 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 13 00:10:04.507845 kubelet[1743]: I0513 00:10:04.507410 1743 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="default/test-pod-1" podStartSLOduration=14.259517831 podStartE2EDuration="14.507392248s" podCreationTimestamp="2025-05-13 00:09:50 +0000 UTC" firstStartedPulling="2025-05-13 00:10:03.879670588 +0000 UTC m=+34.023647030" lastFinishedPulling="2025-05-13 00:10:04.127545005 +0000 UTC m=+34.271521447" observedRunningTime="2025-05-13 00:10:04.507293437 +0000 UTC m=+34.651269879" watchObservedRunningTime="2025-05-13 00:10:04.507392248 +0000 UTC m=+34.651368690" May 13 00:10:05.252390 systemd-networkd[1389]: cali5ec59c6bf6e: Gained IPv6LL May 13 00:10:05.287887 kubelet[1743]: E0513 00:10:05.287841 1743 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 13 00:10:06.087370 update_engine[1427]: I20250513 00:10:06.087288 1427 update_attempter.cc:509] Updating boot flags... May 13 00:10:06.112229 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 40 scanned by (udev-worker) (3295) May 13 00:10:06.139824 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 40 scanned by (udev-worker) (3452) May 13 00:10:06.168240 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 40 scanned by (udev-worker) (3452) May 13 00:10:06.288438 kubelet[1743]: E0513 00:10:06.288349 1743 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 13 00:10:07.288572 kubelet[1743]: E0513 00:10:07.288518 1743 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 13 00:10:08.288713 kubelet[1743]: E0513 00:10:08.288618 1743 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"