May 13 00:03:12.927609 kernel: Booting Linux on physical CPU 0x0000000000 [0x413fd0c1] May 13 00:03:12.927630 kernel: Linux version 6.6.89-flatcar (build@pony-truck.infra.kinvolk.io) (aarch64-cros-linux-gnu-gcc (Gentoo Hardened 13.3.1_p20240614 p1) 13.3.1 20240614, GNU ld (Gentoo 2.42 p6) 2.42.0) #1 SMP PREEMPT Mon May 12 22:21:23 -00 2025 May 13 00:03:12.927640 kernel: KASLR enabled May 13 00:03:12.927646 kernel: efi: EFI v2.7 by EDK II May 13 00:03:12.927652 kernel: efi: SMBIOS 3.0=0xdced0000 MEMATTR=0xdbbbf018 ACPI 2.0=0xd9b43018 RNG=0xd9b43a18 MEMRESERVE=0xd9b40d98 May 13 00:03:12.927658 kernel: random: crng init done May 13 00:03:12.927665 kernel: secureboot: Secure boot disabled May 13 00:03:12.927671 kernel: ACPI: Early table checksum verification disabled May 13 00:03:12.927677 kernel: ACPI: RSDP 0x00000000D9B43018 000024 (v02 BOCHS ) May 13 00:03:12.927684 kernel: ACPI: XSDT 0x00000000D9B43F18 000064 (v01 BOCHS BXPC 00000001 01000013) May 13 00:03:12.927691 kernel: ACPI: FACP 0x00000000D9B43B18 000114 (v06 BOCHS BXPC 00000001 BXPC 00000001) May 13 00:03:12.927697 kernel: ACPI: DSDT 0x00000000D9B41018 0014A2 (v02 BOCHS BXPC 00000001 BXPC 00000001) May 13 00:03:12.927703 kernel: ACPI: APIC 0x00000000D9B43C98 0001A8 (v04 BOCHS BXPC 00000001 BXPC 00000001) May 13 00:03:12.927709 kernel: ACPI: PPTT 0x00000000D9B43098 00009C (v02 BOCHS BXPC 00000001 BXPC 00000001) May 13 00:03:12.927716 kernel: ACPI: GTDT 0x00000000D9B43818 000060 (v02 BOCHS BXPC 00000001 BXPC 00000001) May 13 00:03:12.927724 kernel: ACPI: MCFG 0x00000000D9B43A98 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001) May 13 00:03:12.927731 kernel: ACPI: SPCR 0x00000000D9B43918 000050 (v02 BOCHS BXPC 00000001 BXPC 00000001) May 13 00:03:12.927737 kernel: ACPI: DBG2 0x00000000D9B43998 000057 (v00 BOCHS BXPC 00000001 BXPC 00000001) May 13 00:03:12.927743 kernel: ACPI: IORT 0x00000000D9B43198 000080 (v03 BOCHS BXPC 00000001 BXPC 00000001) May 13 00:03:12.927750 kernel: ACPI: SPCR: console: pl011,mmio,0x9000000,9600 May 13 00:03:12.927756 kernel: NUMA: Failed to initialise from firmware May 13 00:03:12.927763 kernel: NUMA: Faking a node at [mem 0x0000000040000000-0x00000000dcffffff] May 13 00:03:12.927769 kernel: NUMA: NODE_DATA [mem 0xdc957800-0xdc95cfff] May 13 00:03:12.927775 kernel: Zone ranges: May 13 00:03:12.927781 kernel: DMA [mem 0x0000000040000000-0x00000000dcffffff] May 13 00:03:12.927789 kernel: DMA32 empty May 13 00:03:12.927795 kernel: Normal empty May 13 00:03:12.927801 kernel: Movable zone start for each node May 13 00:03:12.927808 kernel: Early memory node ranges May 13 00:03:12.927814 kernel: node 0: [mem 0x0000000040000000-0x00000000d976ffff] May 13 00:03:12.927821 kernel: node 0: [mem 0x00000000d9770000-0x00000000d9b3ffff] May 13 00:03:12.927827 kernel: node 0: [mem 0x00000000d9b40000-0x00000000dce1ffff] May 13 00:03:12.927833 kernel: node 0: [mem 0x00000000dce20000-0x00000000dceaffff] May 13 00:03:12.927840 kernel: node 0: [mem 0x00000000dceb0000-0x00000000dcebffff] May 13 00:03:12.927846 kernel: node 0: [mem 0x00000000dcec0000-0x00000000dcfdffff] May 13 00:03:12.927853 kernel: node 0: [mem 0x00000000dcfe0000-0x00000000dcffffff] May 13 00:03:12.927859 kernel: Initmem setup node 0 [mem 0x0000000040000000-0x00000000dcffffff] May 13 00:03:12.927867 kernel: On node 0, zone DMA: 12288 pages in unavailable ranges May 13 00:03:12.927873 kernel: psci: probing for conduit method from ACPI. May 13 00:03:12.927880 kernel: psci: PSCIv1.1 detected in firmware. 
May 13 00:03:12.927889 kernel: psci: Using standard PSCI v0.2 function IDs May 13 00:03:12.927896 kernel: psci: Trusted OS migration not required May 13 00:03:12.927903 kernel: psci: SMC Calling Convention v1.1 May 13 00:03:12.927912 kernel: smccc: KVM: hypervisor services detected (0x00000000 0x00000000 0x00000000 0x00000003) May 13 00:03:12.927919 kernel: percpu: Embedded 31 pages/cpu s86632 r8192 d32152 u126976 May 13 00:03:12.927925 kernel: pcpu-alloc: s86632 r8192 d32152 u126976 alloc=31*4096 May 13 00:03:12.927932 kernel: pcpu-alloc: [0] 0 [0] 1 [0] 2 [0] 3 May 13 00:03:12.927940 kernel: Detected PIPT I-cache on CPU0 May 13 00:03:12.927949 kernel: CPU features: detected: GIC system register CPU interface May 13 00:03:12.927957 kernel: CPU features: detected: Hardware dirty bit management May 13 00:03:12.927965 kernel: CPU features: detected: Spectre-v4 May 13 00:03:12.927972 kernel: CPU features: detected: Spectre-BHB May 13 00:03:12.927979 kernel: CPU features: kernel page table isolation forced ON by KASLR May 13 00:03:12.927988 kernel: CPU features: detected: Kernel page table isolation (KPTI) May 13 00:03:12.927995 kernel: CPU features: detected: ARM erratum 1418040 May 13 00:03:12.928002 kernel: CPU features: detected: SSBS not fully self-synchronizing May 13 00:03:12.928009 kernel: alternatives: applying boot alternatives May 13 00:03:12.928017 kernel: Kernel command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected acpi=force verity.usrhash=e3fb02dca379a9c7f05d94ae800dbbcafb80c81ea68c8486d0613b136c5c38d4 May 13 00:03:12.928024 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space. May 13 00:03:12.928038 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear) May 13 00:03:12.928045 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear) May 13 00:03:12.928051 kernel: Fallback order for Node 0: 0 May 13 00:03:12.928058 kernel: Built 1 zonelists, mobility grouping on. Total pages: 633024 May 13 00:03:12.928065 kernel: Policy zone: DMA May 13 00:03:12.928074 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off May 13 00:03:12.928082 kernel: software IO TLB: area num 4. May 13 00:03:12.928088 kernel: software IO TLB: mapped [mem 0x00000000d2e00000-0x00000000d6e00000] (64MB) May 13 00:03:12.928096 kernel: Memory: 2386256K/2572288K available (10240K kernel code, 2186K rwdata, 8104K rodata, 39744K init, 897K bss, 186032K reserved, 0K cma-reserved) May 13 00:03:12.928103 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1 May 13 00:03:12.928110 kernel: rcu: Preemptible hierarchical RCU implementation. May 13 00:03:12.928117 kernel: rcu: RCU event tracing is enabled. May 13 00:03:12.928124 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=4. May 13 00:03:12.928131 kernel: Trampoline variant of Tasks RCU enabled. May 13 00:03:12.928138 kernel: Tracing variant of Tasks RCU enabled. May 13 00:03:12.928153 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies. 
May 13 00:03:12.928160 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4 May 13 00:03:12.928181 kernel: NR_IRQS: 64, nr_irqs: 64, preallocated irqs: 0 May 13 00:03:12.928189 kernel: GICv3: 256 SPIs implemented May 13 00:03:12.928196 kernel: GICv3: 0 Extended SPIs implemented May 13 00:03:12.928203 kernel: Root IRQ handler: gic_handle_irq May 13 00:03:12.928210 kernel: GICv3: GICv3 features: 16 PPIs, DirectLPI May 13 00:03:12.928216 kernel: GICv3: CPU0: found redistributor 0 region 0:0x00000000080a0000 May 13 00:03:12.928223 kernel: ITS [mem 0x08080000-0x0809ffff] May 13 00:03:12.928230 kernel: ITS@0x0000000008080000: allocated 8192 Devices @400c0000 (indirect, esz 8, psz 64K, shr 1) May 13 00:03:12.928237 kernel: ITS@0x0000000008080000: allocated 8192 Interrupt Collections @400d0000 (flat, esz 8, psz 64K, shr 1) May 13 00:03:12.928244 kernel: GICv3: using LPI property table @0x00000000400f0000 May 13 00:03:12.928252 kernel: GICv3: CPU0: using allocated LPI pending table @0x0000000040100000 May 13 00:03:12.928260 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention. May 13 00:03:12.928267 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 May 13 00:03:12.928274 kernel: arch_timer: cp15 timer(s) running at 25.00MHz (virt). May 13 00:03:12.928281 kernel: clocksource: arch_sys_counter: mask: 0xffffffffffffff max_cycles: 0x5c40939b5, max_idle_ns: 440795202646 ns May 13 00:03:12.928288 kernel: sched_clock: 56 bits at 25MHz, resolution 40ns, wraps every 4398046511100ns May 13 00:03:12.928295 kernel: arm-pv: using stolen time PV May 13 00:03:12.928302 kernel: Console: colour dummy device 80x25 May 13 00:03:12.928309 kernel: ACPI: Core revision 20230628 May 13 00:03:12.928316 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 50.00 BogoMIPS (lpj=25000) May 13 00:03:12.928324 kernel: pid_max: default: 32768 minimum: 301 May 13 00:03:12.928332 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity May 13 00:03:12.928339 kernel: landlock: Up and running. May 13 00:03:12.928346 kernel: SELinux: Initializing. May 13 00:03:12.928353 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) May 13 00:03:12.928360 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) May 13 00:03:12.928367 kernel: ACPI PPTT: PPTT table found, but unable to locate core 3 (3) May 13 00:03:12.928375 kernel: RCU Tasks: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4. May 13 00:03:12.928382 kernel: RCU Tasks Trace: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4. May 13 00:03:12.928389 kernel: rcu: Hierarchical SRCU implementation. May 13 00:03:12.928397 kernel: rcu: Max phase no-delay instances is 400. May 13 00:03:12.928404 kernel: Platform MSI: ITS@0x8080000 domain created May 13 00:03:12.928411 kernel: PCI/MSI: ITS@0x8080000 domain created May 13 00:03:12.928418 kernel: Remapping and enabling EFI services. May 13 00:03:12.928425 kernel: smp: Bringing up secondary CPUs ... 
May 13 00:03:12.928432 kernel: Detected PIPT I-cache on CPU1 May 13 00:03:12.928439 kernel: GICv3: CPU1: found redistributor 1 region 0:0x00000000080c0000 May 13 00:03:12.928446 kernel: GICv3: CPU1: using allocated LPI pending table @0x0000000040110000 May 13 00:03:12.928453 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 May 13 00:03:12.928460 kernel: CPU1: Booted secondary processor 0x0000000001 [0x413fd0c1] May 13 00:03:12.928468 kernel: Detected PIPT I-cache on CPU2 May 13 00:03:12.928475 kernel: GICv3: CPU2: found redistributor 2 region 0:0x00000000080e0000 May 13 00:03:12.928488 kernel: GICv3: CPU2: using allocated LPI pending table @0x0000000040120000 May 13 00:03:12.928496 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 May 13 00:03:12.928503 kernel: CPU2: Booted secondary processor 0x0000000002 [0x413fd0c1] May 13 00:03:12.928510 kernel: Detected PIPT I-cache on CPU3 May 13 00:03:12.928518 kernel: GICv3: CPU3: found redistributor 3 region 0:0x0000000008100000 May 13 00:03:12.928525 kernel: GICv3: CPU3: using allocated LPI pending table @0x0000000040130000 May 13 00:03:12.928532 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 May 13 00:03:12.928540 kernel: CPU3: Booted secondary processor 0x0000000003 [0x413fd0c1] May 13 00:03:12.928548 kernel: smp: Brought up 1 node, 4 CPUs May 13 00:03:12.928556 kernel: SMP: Total of 4 processors activated. May 13 00:03:12.928563 kernel: CPU features: detected: 32-bit EL0 Support May 13 00:03:12.928570 kernel: CPU features: detected: Data cache clean to the PoU not required for I/D coherence May 13 00:03:12.928578 kernel: CPU features: detected: Common not Private translations May 13 00:03:12.928585 kernel: CPU features: detected: CRC32 instructions May 13 00:03:12.928592 kernel: CPU features: detected: Enhanced Virtualization Traps May 13 00:03:12.928601 kernel: CPU features: detected: RCpc load-acquire (LDAPR) May 13 00:03:12.928608 kernel: CPU features: detected: LSE atomic instructions May 13 00:03:12.928616 kernel: CPU features: detected: Privileged Access Never May 13 00:03:12.928623 kernel: CPU features: detected: RAS Extension Support May 13 00:03:12.928630 kernel: CPU features: detected: Speculative Store Bypassing Safe (SSBS) May 13 00:03:12.928637 kernel: CPU: All CPU(s) started at EL1 May 13 00:03:12.928645 kernel: alternatives: applying system-wide alternatives May 13 00:03:12.928652 kernel: devtmpfs: initialized May 13 00:03:12.928659 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns May 13 00:03:12.928668 kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear) May 13 00:03:12.928676 kernel: pinctrl core: initialized pinctrl subsystem May 13 00:03:12.928683 kernel: SMBIOS 3.0.0 present. 
May 13 00:03:12.928690 kernel: DMI: QEMU KVM Virtual Machine, BIOS unknown 02/02/2022 May 13 00:03:12.928698 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family May 13 00:03:12.928705 kernel: DMA: preallocated 512 KiB GFP_KERNEL pool for atomic allocations May 13 00:03:12.928713 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations May 13 00:03:12.928720 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations May 13 00:03:12.928727 kernel: audit: initializing netlink subsys (disabled) May 13 00:03:12.928736 kernel: audit: type=2000 audit(0.018:1): state=initialized audit_enabled=0 res=1 May 13 00:03:12.928743 kernel: thermal_sys: Registered thermal governor 'step_wise' May 13 00:03:12.928750 kernel: cpuidle: using governor menu May 13 00:03:12.928758 kernel: hw-breakpoint: found 6 breakpoint and 4 watchpoint registers. May 13 00:03:12.928765 kernel: ASID allocator initialised with 32768 entries May 13 00:03:12.928772 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5 May 13 00:03:12.928780 kernel: Serial: AMBA PL011 UART driver May 13 00:03:12.928787 kernel: Modules: 2G module region forced by RANDOMIZE_MODULE_REGION_FULL May 13 00:03:12.928794 kernel: Modules: 0 pages in range for non-PLT usage May 13 00:03:12.928803 kernel: Modules: 508944 pages in range for PLT usage May 13 00:03:12.928810 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages May 13 00:03:12.928818 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 1.00 GiB page May 13 00:03:12.928825 kernel: HugeTLB: registered 32.0 MiB page size, pre-allocated 0 pages May 13 00:03:12.928833 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 32.0 MiB page May 13 00:03:12.928840 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages May 13 00:03:12.928848 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 2.00 MiB page May 13 00:03:12.928855 kernel: HugeTLB: registered 64.0 KiB page size, pre-allocated 0 pages May 13 00:03:12.928863 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 64.0 KiB page May 13 00:03:12.928872 kernel: ACPI: Added _OSI(Module Device) May 13 00:03:12.928879 kernel: ACPI: Added _OSI(Processor Device) May 13 00:03:12.928886 kernel: ACPI: Added _OSI(3.0 _SCP Extensions) May 13 00:03:12.928894 kernel: ACPI: Added _OSI(Processor Aggregator Device) May 13 00:03:12.928901 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded May 13 00:03:12.928908 kernel: ACPI: Interpreter enabled May 13 00:03:12.928916 kernel: ACPI: Using GIC for interrupt routing May 13 00:03:12.928923 kernel: ACPI: MCFG table detected, 1 entries May 13 00:03:12.928930 kernel: ARMH0011:00: ttyAMA0 at MMIO 0x9000000 (irq = 12, base_baud = 0) is a SBSA May 13 00:03:12.928939 kernel: printk: console [ttyAMA0] enabled May 13 00:03:12.928946 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff]) May 13 00:03:12.929100 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3] May 13 00:03:12.929208 kernel: acpi PNP0A08:00: _OSC: platform does not support [LTR] May 13 00:03:12.929275 kernel: acpi PNP0A08:00: _OSC: OS now controls [PCIeHotplug PME AER PCIeCapability] May 13 00:03:12.929345 kernel: acpi PNP0A08:00: ECAM area [mem 0x4010000000-0x401fffffff] reserved by PNP0C02:00 May 13 00:03:12.929414 kernel: acpi PNP0A08:00: ECAM at [mem 0x4010000000-0x401fffffff] for [bus 00-ff] May 13 00:03:12.929426 kernel: ACPI: Remapped I/O 0x000000003eff0000 to [io 0x0000-0xffff window] May 13 00:03:12.929434 
kernel: PCI host bridge to bus 0000:00 May 13 00:03:12.929513 kernel: pci_bus 0000:00: root bus resource [mem 0x10000000-0x3efeffff window] May 13 00:03:12.929600 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0xffff window] May 13 00:03:12.929662 kernel: pci_bus 0000:00: root bus resource [mem 0x8000000000-0xffffffffff window] May 13 00:03:12.929720 kernel: pci_bus 0000:00: root bus resource [bus 00-ff] May 13 00:03:12.929803 kernel: pci 0000:00:00.0: [1b36:0008] type 00 class 0x060000 May 13 00:03:12.929889 kernel: pci 0000:00:01.0: [1af4:1005] type 00 class 0x00ff00 May 13 00:03:12.929957 kernel: pci 0000:00:01.0: reg 0x10: [io 0x0000-0x001f] May 13 00:03:12.930024 kernel: pci 0000:00:01.0: reg 0x14: [mem 0x10000000-0x10000fff] May 13 00:03:12.930101 kernel: pci 0000:00:01.0: reg 0x20: [mem 0x8000000000-0x8000003fff 64bit pref] May 13 00:03:12.930183 kernel: pci 0000:00:01.0: BAR 4: assigned [mem 0x8000000000-0x8000003fff 64bit pref] May 13 00:03:12.930251 kernel: pci 0000:00:01.0: BAR 1: assigned [mem 0x10000000-0x10000fff] May 13 00:03:12.930327 kernel: pci 0000:00:01.0: BAR 0: assigned [io 0x1000-0x101f] May 13 00:03:12.930394 kernel: pci_bus 0000:00: resource 4 [mem 0x10000000-0x3efeffff window] May 13 00:03:12.930453 kernel: pci_bus 0000:00: resource 5 [io 0x0000-0xffff window] May 13 00:03:12.930511 kernel: pci_bus 0000:00: resource 6 [mem 0x8000000000-0xffffffffff window] May 13 00:03:12.930521 kernel: ACPI: PCI: Interrupt link GSI0 configured for IRQ 35 May 13 00:03:12.930528 kernel: ACPI: PCI: Interrupt link GSI1 configured for IRQ 36 May 13 00:03:12.930536 kernel: ACPI: PCI: Interrupt link GSI2 configured for IRQ 37 May 13 00:03:12.930543 kernel: ACPI: PCI: Interrupt link GSI3 configured for IRQ 38 May 13 00:03:12.930553 kernel: iommu: Default domain type: Translated May 13 00:03:12.930560 kernel: iommu: DMA domain TLB invalidation policy: strict mode May 13 00:03:12.930567 kernel: efivars: Registered efivars operations May 13 00:03:12.930575 kernel: vgaarb: loaded May 13 00:03:12.930582 kernel: clocksource: Switched to clocksource arch_sys_counter May 13 00:03:12.930589 kernel: VFS: Disk quotas dquot_6.6.0 May 13 00:03:12.930597 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes) May 13 00:03:12.930604 kernel: pnp: PnP ACPI init May 13 00:03:12.930677 kernel: system 00:00: [mem 0x4010000000-0x401fffffff window] could not be reserved May 13 00:03:12.930689 kernel: pnp: PnP ACPI: found 1 devices May 13 00:03:12.930696 kernel: NET: Registered PF_INET protocol family May 13 00:03:12.930704 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear) May 13 00:03:12.930711 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear) May 13 00:03:12.930719 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear) May 13 00:03:12.930731 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear) May 13 00:03:12.930738 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear) May 13 00:03:12.930746 kernel: TCP: Hash tables configured (established 32768 bind 32768) May 13 00:03:12.930754 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear) May 13 00:03:12.930763 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear) May 13 00:03:12.930770 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family May 13 00:03:12.930778 kernel: PCI: CLS 0 bytes, default 64 May 13 00:03:12.930785 kernel: kvm [1]: HYP mode not available 
May 13 00:03:12.930793 kernel: Initialise system trusted keyrings May 13 00:03:12.930800 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0 May 13 00:03:12.930808 kernel: Key type asymmetric registered May 13 00:03:12.930815 kernel: Asymmetric key parser 'x509' registered May 13 00:03:12.930823 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 250) May 13 00:03:12.930832 kernel: io scheduler mq-deadline registered May 13 00:03:12.930839 kernel: io scheduler kyber registered May 13 00:03:12.930847 kernel: io scheduler bfq registered May 13 00:03:12.930854 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXSYBUS:00/PNP0C0C:00/input/input0 May 13 00:03:12.930862 kernel: ACPI: button: Power Button [PWRB] May 13 00:03:12.930870 kernel: ACPI: \_SB_.PCI0.GSI1: Enabled at IRQ 36 May 13 00:03:12.930938 kernel: virtio-pci 0000:00:01.0: enabling device (0005 -> 0007) May 13 00:03:12.930949 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled May 13 00:03:12.930957 kernel: thunder_xcv, ver 1.0 May 13 00:03:12.930966 kernel: thunder_bgx, ver 1.0 May 13 00:03:12.930974 kernel: nicpf, ver 1.0 May 13 00:03:12.930981 kernel: nicvf, ver 1.0 May 13 00:03:12.931063 kernel: rtc-efi rtc-efi.0: registered as rtc0 May 13 00:03:12.931132 kernel: rtc-efi rtc-efi.0: setting system clock to 2025-05-13T00:03:12 UTC (1747094592) May 13 00:03:12.931142 kernel: hid: raw HID events driver (C) Jiri Kosina May 13 00:03:12.931163 kernel: hw perfevents: enabled with armv8_pmuv3_0 PMU driver, 7 counters available May 13 00:03:12.931170 kernel: watchdog: Delayed init of the lockup detector failed: -19 May 13 00:03:12.931181 kernel: watchdog: Hard watchdog permanently disabled May 13 00:03:12.931188 kernel: NET: Registered PF_INET6 protocol family May 13 00:03:12.931195 kernel: Segment Routing with IPv6 May 13 00:03:12.931205 kernel: In-situ OAM (IOAM) with IPv6 May 13 00:03:12.931213 kernel: NET: Registered PF_PACKET protocol family May 13 00:03:12.931220 kernel: Key type dns_resolver registered May 13 00:03:12.931227 kernel: registered taskstats version 1 May 13 00:03:12.931235 kernel: Loading compiled-in X.509 certificates May 13 00:03:12.931242 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.89-flatcar: f172f0fb4eac06c214e4b9ce0f39d6c4075ccc9a' May 13 00:03:12.931251 kernel: Key type .fscrypt registered May 13 00:03:12.931258 kernel: Key type fscrypt-provisioning registered May 13 00:03:12.931266 kernel: ima: No TPM chip found, activating TPM-bypass! 
May 13 00:03:12.931273 kernel: ima: Allocated hash algorithm: sha1 May 13 00:03:12.931280 kernel: ima: No architecture policies found May 13 00:03:12.931287 kernel: alg: No test for fips(ansi_cprng) (fips_ansi_cprng) May 13 00:03:12.931295 kernel: clk: Disabling unused clocks May 13 00:03:12.931302 kernel: Freeing unused kernel memory: 39744K May 13 00:03:12.931309 kernel: Run /init as init process May 13 00:03:12.931318 kernel: with arguments: May 13 00:03:12.931326 kernel: /init May 13 00:03:12.931333 kernel: with environment: May 13 00:03:12.931340 kernel: HOME=/ May 13 00:03:12.931347 kernel: TERM=linux May 13 00:03:12.931354 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a May 13 00:03:12.931363 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) May 13 00:03:12.931373 systemd[1]: Detected virtualization kvm. May 13 00:03:12.931383 systemd[1]: Detected architecture arm64. May 13 00:03:12.931390 systemd[1]: Running in initrd. May 13 00:03:12.931398 systemd[1]: No hostname configured, using default hostname. May 13 00:03:12.931406 systemd[1]: Hostname set to . May 13 00:03:12.931414 systemd[1]: Initializing machine ID from VM UUID. May 13 00:03:12.931421 systemd[1]: Queued start job for default target initrd.target. May 13 00:03:12.931429 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. May 13 00:03:12.931437 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. May 13 00:03:12.931447 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM... May 13 00:03:12.931455 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... May 13 00:03:12.931463 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT... May 13 00:03:12.931471 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A... May 13 00:03:12.931480 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132... May 13 00:03:12.931488 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr... May 13 00:03:12.931498 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). May 13 00:03:12.931506 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. May 13 00:03:12.931514 systemd[1]: Reached target paths.target - Path Units. May 13 00:03:12.931522 systemd[1]: Reached target slices.target - Slice Units. May 13 00:03:12.931530 systemd[1]: Reached target swap.target - Swaps. May 13 00:03:12.931538 systemd[1]: Reached target timers.target - Timer Units. May 13 00:03:12.931546 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket. May 13 00:03:12.931554 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket. May 13 00:03:12.931562 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). May 13 00:03:12.931571 systemd[1]: Listening on systemd-journald.socket - Journal Socket. May 13 00:03:12.931579 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. 
May 13 00:03:12.931587 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. May 13 00:03:12.931595 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. May 13 00:03:12.931603 systemd[1]: Reached target sockets.target - Socket Units. May 13 00:03:12.931611 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup... May 13 00:03:12.931619 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... May 13 00:03:12.931627 systemd[1]: Finished network-cleanup.service - Network Cleanup. May 13 00:03:12.931636 systemd[1]: Starting systemd-fsck-usr.service... May 13 00:03:12.931644 systemd[1]: Starting systemd-journald.service - Journal Service... May 13 00:03:12.931652 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... May 13 00:03:12.931660 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... May 13 00:03:12.931668 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup. May 13 00:03:12.931676 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. May 13 00:03:12.931683 systemd[1]: Finished systemd-fsck-usr.service. May 13 00:03:12.931693 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... May 13 00:03:12.931702 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. May 13 00:03:12.931710 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... May 13 00:03:12.931735 systemd-journald[239]: Collecting audit messages is disabled. May 13 00:03:12.931757 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. May 13 00:03:12.931766 systemd-journald[239]: Journal started May 13 00:03:12.931785 systemd-journald[239]: Runtime Journal (/run/log/journal/35535143d89944e5a20bb386650e068f) is 5.9M, max 47.3M, 41.4M free. May 13 00:03:12.913454 systemd-modules-load[240]: Inserted module 'overlay' May 13 00:03:12.936689 systemd[1]: Started systemd-journald.service - Journal Service. May 13 00:03:12.936733 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. May 13 00:03:12.940303 systemd-modules-load[240]: Inserted module 'br_netfilter' May 13 00:03:12.941625 kernel: Bridge firewalling registered May 13 00:03:12.941045 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... May 13 00:03:12.943429 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... May 13 00:03:12.945009 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. May 13 00:03:12.949764 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... May 13 00:03:12.951170 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. May 13 00:03:12.956380 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... May 13 00:03:12.959171 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. May 13 00:03:12.960664 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. May 13 00:03:12.971395 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. 
May 13 00:03:12.978169 dracut-cmdline[268]: dracut-dracut-053 May 13 00:03:12.981921 dracut-cmdline[268]: Using kernel command line parameters: rd.driver.pre=btrfs BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected acpi=force verity.usrhash=e3fb02dca379a9c7f05d94ae800dbbcafb80c81ea68c8486d0613b136c5c38d4 May 13 00:03:12.981369 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... May 13 00:03:13.011787 systemd-resolved[282]: Positive Trust Anchors: May 13 00:03:13.011862 systemd-resolved[282]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d May 13 00:03:13.011893 systemd-resolved[282]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test May 13 00:03:13.017430 systemd-resolved[282]: Defaulting to hostname 'linux'. May 13 00:03:13.018611 systemd[1]: Started systemd-resolved.service - Network Name Resolution. May 13 00:03:13.020406 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. May 13 00:03:13.065183 kernel: SCSI subsystem initialized May 13 00:03:13.071264 kernel: Loading iSCSI transport class v2.0-870. May 13 00:03:13.080226 kernel: iscsi: registered transport (tcp) May 13 00:03:13.093408 kernel: iscsi: registered transport (qla4xxx) May 13 00:03:13.093492 kernel: QLogic iSCSI HBA Driver May 13 00:03:13.143182 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. May 13 00:03:13.156334 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... May 13 00:03:13.175984 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. May 13 00:03:13.177217 kernel: device-mapper: uevent: version 1.0.3 May 13 00:03:13.177231 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com May 13 00:03:13.228190 kernel: raid6: neonx8 gen() 15145 MB/s May 13 00:03:13.245240 kernel: raid6: neonx4 gen() 12972 MB/s May 13 00:03:13.262169 kernel: raid6: neonx2 gen() 11686 MB/s May 13 00:03:13.279178 kernel: raid6: neonx1 gen() 9877 MB/s May 13 00:03:13.296167 kernel: raid6: int64x8 gen() 6836 MB/s May 13 00:03:13.313167 kernel: raid6: int64x4 gen() 7289 MB/s May 13 00:03:13.330163 kernel: raid6: int64x2 gen() 6099 MB/s May 13 00:03:13.347163 kernel: raid6: int64x1 gen() 5043 MB/s May 13 00:03:13.347177 kernel: raid6: using algorithm neonx8 gen() 15145 MB/s May 13 00:03:13.364171 kernel: raid6: .... 
xor() 11605 MB/s, rmw enabled May 13 00:03:13.364186 kernel: raid6: using neon recovery algorithm May 13 00:03:13.369367 kernel: xor: measuring software checksum speed May 13 00:03:13.369389 kernel: 8regs : 19826 MB/sec May 13 00:03:13.370460 kernel: 32regs : 19660 MB/sec May 13 00:03:13.370473 kernel: arm64_neon : 26005 MB/sec May 13 00:03:13.370482 kernel: xor: using function: arm64_neon (26005 MB/sec) May 13 00:03:13.422200 kernel: Btrfs loaded, zoned=no, fsverity=no May 13 00:03:13.432997 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. May 13 00:03:13.445341 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... May 13 00:03:13.457847 systemd-udevd[459]: Using default interface naming scheme 'v255'. May 13 00:03:13.461081 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. May 13 00:03:13.467309 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... May 13 00:03:13.481131 dracut-pre-trigger[467]: rd.md=0: removing MD RAID activation May 13 00:03:13.510019 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. May 13 00:03:13.520336 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... May 13 00:03:13.564212 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. May 13 00:03:13.574589 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... May 13 00:03:13.587054 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. May 13 00:03:13.588864 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. May 13 00:03:13.590374 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. May 13 00:03:13.592276 systemd[1]: Reached target remote-fs.target - Remote File Systems. May 13 00:03:13.601389 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... May 13 00:03:13.613850 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. May 13 00:03:13.618665 kernel: virtio_blk virtio1: 1/0/0 default/read/poll queues May 13 00:03:13.625112 kernel: virtio_blk virtio1: [vda] 19775488 512-byte logical blocks (10.1 GB/9.43 GiB) May 13 00:03:13.623132 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. May 13 00:03:13.623261 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. May 13 00:03:13.626532 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... May 13 00:03:13.635234 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. May 13 00:03:13.635256 kernel: GPT:9289727 != 19775487 May 13 00:03:13.635272 kernel: GPT:Alternate GPT header not at the end of the disk. May 13 00:03:13.635282 kernel: GPT:9289727 != 19775487 May 13 00:03:13.635293 kernel: GPT: Use GNU Parted to correct GPT errors. May 13 00:03:13.635302 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 May 13 00:03:13.627707 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. May 13 00:03:13.627907 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. May 13 00:03:13.629556 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... May 13 00:03:13.639424 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... May 13 00:03:13.652544 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. 
May 13 00:03:13.657154 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/vda6 scanned by (udev-worker) (513) May 13 00:03:13.657180 kernel: BTRFS: device fsid 8bc7e2dd-1c9f-4f38-9a4f-4a4a9806cb3a devid 1 transid 42 /dev/vda3 scanned by (udev-worker) (517) May 13 00:03:13.662261 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM. May 13 00:03:13.669178 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT. May 13 00:03:13.672854 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132. May 13 00:03:13.673844 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A. May 13 00:03:13.679945 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. May 13 00:03:13.693339 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... May 13 00:03:13.695010 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... May 13 00:03:13.699250 disk-uuid[549]: Primary Header is updated. May 13 00:03:13.699250 disk-uuid[549]: Secondary Entries is updated. May 13 00:03:13.699250 disk-uuid[549]: Secondary Header is updated. May 13 00:03:13.703173 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 May 13 00:03:13.720222 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. May 13 00:03:14.717558 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 May 13 00:03:14.718223 disk-uuid[550]: The operation has completed successfully. May 13 00:03:14.741254 systemd[1]: disk-uuid.service: Deactivated successfully. May 13 00:03:14.741378 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. May 13 00:03:14.762311 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr... May 13 00:03:14.766045 sh[570]: Success May 13 00:03:14.780968 kernel: device-mapper: verity: sha256 using implementation "sha256-ce" May 13 00:03:14.808127 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr. May 13 00:03:14.823645 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr... May 13 00:03:14.825715 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr. May 13 00:03:14.834858 kernel: BTRFS info (device dm-0): first mount of filesystem 8bc7e2dd-1c9f-4f38-9a4f-4a4a9806cb3a May 13 00:03:14.834893 kernel: BTRFS info (device dm-0): using crc32c (crc32c-generic) checksum algorithm May 13 00:03:14.834904 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead May 13 00:03:14.836311 kernel: BTRFS info (device dm-0): disabling log replay at mount time May 13 00:03:14.836329 kernel: BTRFS info (device dm-0): using free space tree May 13 00:03:14.840289 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr. May 13 00:03:14.842493 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. May 13 00:03:14.851286 systemd[1]: Starting ignition-setup.service - Ignition (setup)... May 13 00:03:14.852633 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... 
May 13 00:03:14.859303 kernel: BTRFS info (device vda6): first mount of filesystem f5ecb074-2ad7-499c-bb35-c3ab71cda02a May 13 00:03:14.859345 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm May 13 00:03:14.859356 kernel: BTRFS info (device vda6): using free space tree May 13 00:03:14.862183 kernel: BTRFS info (device vda6): auto enabling async discard May 13 00:03:14.869394 systemd[1]: mnt-oem.mount: Deactivated successfully. May 13 00:03:14.870676 kernel: BTRFS info (device vda6): last unmount of filesystem f5ecb074-2ad7-499c-bb35-c3ab71cda02a May 13 00:03:14.875852 systemd[1]: Finished ignition-setup.service - Ignition (setup). May 13 00:03:14.883601 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... May 13 00:03:14.948339 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. May 13 00:03:14.965322 systemd[1]: Starting systemd-networkd.service - Network Configuration... May 13 00:03:14.986886 systemd-networkd[762]: lo: Link UP May 13 00:03:14.986898 systemd-networkd[762]: lo: Gained carrier May 13 00:03:14.988071 systemd-networkd[762]: Enumeration completed May 13 00:03:14.989215 systemd-networkd[762]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. May 13 00:03:14.989218 systemd-networkd[762]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. May 13 00:03:14.990663 systemd-networkd[762]: eth0: Link UP May 13 00:03:14.990685 systemd-networkd[762]: eth0: Gained carrier May 13 00:03:14.990692 systemd-networkd[762]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. May 13 00:03:14.996085 ignition[659]: Ignition 2.20.0 May 13 00:03:14.992090 systemd[1]: Started systemd-networkd.service - Network Configuration. May 13 00:03:14.996091 ignition[659]: Stage: fetch-offline May 13 00:03:14.993422 systemd[1]: Reached target network.target - Network. May 13 00:03:14.996122 ignition[659]: no configs at "/usr/lib/ignition/base.d" May 13 00:03:14.996130 ignition[659]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" May 13 00:03:14.996293 ignition[659]: parsed url from cmdline: "" May 13 00:03:14.996296 ignition[659]: no config URL provided May 13 00:03:14.996301 ignition[659]: reading system config file "/usr/lib/ignition/user.ign" May 13 00:03:14.996308 ignition[659]: no config at "/usr/lib/ignition/user.ign" May 13 00:03:14.996334 ignition[659]: op(1): [started] loading QEMU firmware config module May 13 00:03:14.996338 ignition[659]: op(1): executing: "modprobe" "qemu_fw_cfg" May 13 00:03:15.004072 ignition[659]: op(1): [finished] loading QEMU firmware config module May 13 00:03:15.011008 ignition[659]: parsing config with SHA512: 5c5d7777cae3da8d0acfa90a0cf4b9fdb8b4f2e8a7419a2d3cf27efe0e34aaafcee6459df24f9075c895d81bb88acad0ded871e4df10061ef7cab38860e1e8fb May 13 00:03:15.013879 systemd-networkd[762]: eth0: DHCPv4 address 10.0.0.131/16, gateway 10.0.0.1 acquired from 10.0.0.1 May 13 00:03:15.014457 unknown[659]: fetched base config from "system" May 13 00:03:15.014740 ignition[659]: fetch-offline: fetch-offline passed May 13 00:03:15.014464 unknown[659]: fetched user config from "qemu" May 13 00:03:15.014806 ignition[659]: Ignition finished successfully May 13 00:03:15.018185 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). 
May 13 00:03:15.019492 systemd[1]: ignition-fetch.service - Ignition (fetch) was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json). May 13 00:03:15.024362 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... May 13 00:03:15.034831 ignition[769]: Ignition 2.20.0 May 13 00:03:15.034843 ignition[769]: Stage: kargs May 13 00:03:15.035003 ignition[769]: no configs at "/usr/lib/ignition/base.d" May 13 00:03:15.035012 ignition[769]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" May 13 00:03:15.035789 ignition[769]: kargs: kargs passed May 13 00:03:15.035835 ignition[769]: Ignition finished successfully May 13 00:03:15.038998 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). May 13 00:03:15.052336 systemd[1]: Starting ignition-disks.service - Ignition (disks)... May 13 00:03:15.061854 ignition[778]: Ignition 2.20.0 May 13 00:03:15.061864 ignition[778]: Stage: disks May 13 00:03:15.062034 ignition[778]: no configs at "/usr/lib/ignition/base.d" May 13 00:03:15.062043 ignition[778]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" May 13 00:03:15.062824 ignition[778]: disks: disks passed May 13 00:03:15.064504 systemd[1]: Finished ignition-disks.service - Ignition (disks). May 13 00:03:15.062869 ignition[778]: Ignition finished successfully May 13 00:03:15.066285 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. May 13 00:03:15.067623 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. May 13 00:03:15.069283 systemd[1]: Reached target local-fs.target - Local File Systems. May 13 00:03:15.070643 systemd[1]: Reached target sysinit.target - System Initialization. May 13 00:03:15.072263 systemd[1]: Reached target basic.target - Basic System. May 13 00:03:15.084316 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... May 13 00:03:15.095541 systemd-fsck[790]: ROOT: clean, 14/553520 files, 52654/553472 blocks May 13 00:03:15.098947 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. May 13 00:03:15.101225 systemd[1]: Mounting sysroot.mount - /sysroot... May 13 00:03:15.149180 kernel: EXT4-fs (vda9): mounted filesystem 267e1a87-2243-4e28-a518-ba9876b017ec r/w with ordered data mode. Quota mode: none. May 13 00:03:15.149454 systemd[1]: Mounted sysroot.mount - /sysroot. May 13 00:03:15.150574 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. May 13 00:03:15.165256 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... May 13 00:03:15.167597 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... May 13 00:03:15.168511 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met. May 13 00:03:15.168554 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). May 13 00:03:15.168575 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. May 13 00:03:15.174732 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. May 13 00:03:15.177062 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... 
May 13 00:03:15.182677 kernel: BTRFS: device label OEM devid 1 transid 15 /dev/vda6 scanned by mount (798) May 13 00:03:15.182706 kernel: BTRFS info (device vda6): first mount of filesystem f5ecb074-2ad7-499c-bb35-c3ab71cda02a May 13 00:03:15.182745 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm May 13 00:03:15.182758 kernel: BTRFS info (device vda6): using free space tree May 13 00:03:15.182767 kernel: BTRFS info (device vda6): auto enabling async discard May 13 00:03:15.193176 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. May 13 00:03:15.246611 initrd-setup-root[822]: cut: /sysroot/etc/passwd: No such file or directory May 13 00:03:15.250495 initrd-setup-root[829]: cut: /sysroot/etc/group: No such file or directory May 13 00:03:15.253911 initrd-setup-root[836]: cut: /sysroot/etc/shadow: No such file or directory May 13 00:03:15.257509 initrd-setup-root[843]: cut: /sysroot/etc/gshadow: No such file or directory May 13 00:03:15.331660 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. May 13 00:03:15.340257 systemd[1]: Starting ignition-mount.service - Ignition (mount)... May 13 00:03:15.341744 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... May 13 00:03:15.348215 kernel: BTRFS info (device vda6): last unmount of filesystem f5ecb074-2ad7-499c-bb35-c3ab71cda02a May 13 00:03:15.362478 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. May 13 00:03:15.366143 ignition[912]: INFO : Ignition 2.20.0 May 13 00:03:15.366143 ignition[912]: INFO : Stage: mount May 13 00:03:15.367849 ignition[912]: INFO : no configs at "/usr/lib/ignition/base.d" May 13 00:03:15.367849 ignition[912]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" May 13 00:03:15.367849 ignition[912]: INFO : mount: mount passed May 13 00:03:15.367849 ignition[912]: INFO : Ignition finished successfully May 13 00:03:15.368389 systemd[1]: Finished ignition-mount.service - Ignition (mount). May 13 00:03:15.374284 systemd[1]: Starting ignition-files.service - Ignition (files)... May 13 00:03:15.834212 systemd[1]: sysroot-oem.mount: Deactivated successfully. May 13 00:03:15.843328 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... May 13 00:03:15.849953 kernel: BTRFS: device label OEM devid 1 transid 16 /dev/vda6 scanned by mount (925) May 13 00:03:15.849996 kernel: BTRFS info (device vda6): first mount of filesystem f5ecb074-2ad7-499c-bb35-c3ab71cda02a May 13 00:03:15.850008 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm May 13 00:03:15.851202 kernel: BTRFS info (device vda6): using free space tree May 13 00:03:15.853163 kernel: BTRFS info (device vda6): auto enabling async discard May 13 00:03:15.854285 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. 
May 13 00:03:15.870731 ignition[942]: INFO : Ignition 2.20.0 May 13 00:03:15.870731 ignition[942]: INFO : Stage: files May 13 00:03:15.872462 ignition[942]: INFO : no configs at "/usr/lib/ignition/base.d" May 13 00:03:15.872462 ignition[942]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" May 13 00:03:15.872462 ignition[942]: DEBUG : files: compiled without relabeling support, skipping May 13 00:03:15.875880 ignition[942]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" May 13 00:03:15.875880 ignition[942]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" May 13 00:03:15.878919 ignition[942]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" May 13 00:03:15.880292 ignition[942]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" May 13 00:03:15.880292 ignition[942]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" May 13 00:03:15.879525 unknown[942]: wrote ssh authorized keys file for user: core May 13 00:03:15.884070 ignition[942]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/home/core/install.sh" May 13 00:03:15.884070 ignition[942]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/home/core/install.sh" May 13 00:03:15.884070 ignition[942]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/etc/flatcar/update.conf" May 13 00:03:15.884070 ignition[942]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/etc/flatcar/update.conf" May 13 00:03:15.884070 ignition[942]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.30.1-arm64.raw" May 13 00:03:15.884070 ignition[942]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.30.1-arm64.raw" May 13 00:03:15.884070 ignition[942]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.30.1-arm64.raw" May 13 00:03:15.884070 ignition[942]: INFO : files: createFilesystemsFiles: createFiles: op(6): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.30.1-arm64.raw: attempt #1 May 13 00:03:16.229252 ignition[942]: INFO : files: createFilesystemsFiles: createFiles: op(6): GET result: OK May 13 00:03:16.597283 systemd-networkd[762]: eth0: Gained IPv6LL May 13 00:03:16.625847 ignition[942]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.30.1-arm64.raw" May 13 00:03:16.625847 ignition[942]: INFO : files: op(7): [started] processing unit "coreos-metadata.service" May 13 00:03:16.629738 ignition[942]: INFO : files: op(7): op(8): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" May 13 00:03:16.632291 ignition[942]: INFO : files: op(7): op(8): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" May 13 00:03:16.632291 ignition[942]: INFO : files: op(7): [finished] processing unit "coreos-metadata.service" May 13 00:03:16.632291 ignition[942]: INFO : files: op(9): [started] setting preset to disabled for 
"coreos-metadata.service" May 13 00:03:16.672252 ignition[942]: INFO : files: op(9): op(a): [started] removing enablement symlink(s) for "coreos-metadata.service" May 13 00:03:16.677714 ignition[942]: INFO : files: op(9): op(a): [finished] removing enablement symlink(s) for "coreos-metadata.service" May 13 00:03:16.677714 ignition[942]: INFO : files: op(9): [finished] setting preset to disabled for "coreos-metadata.service" May 13 00:03:16.677714 ignition[942]: INFO : files: createResultFile: createFiles: op(b): [started] writing file "/sysroot/etc/.ignition-result.json" May 13 00:03:16.677714 ignition[942]: INFO : files: createResultFile: createFiles: op(b): [finished] writing file "/sysroot/etc/.ignition-result.json" May 13 00:03:16.677714 ignition[942]: INFO : files: files passed May 13 00:03:16.677714 ignition[942]: INFO : Ignition finished successfully May 13 00:03:16.678942 systemd[1]: Finished ignition-files.service - Ignition (files). May 13 00:03:16.690346 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... May 13 00:03:16.692368 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... May 13 00:03:16.695318 systemd[1]: ignition-quench.service: Deactivated successfully. May 13 00:03:16.695424 systemd[1]: Finished ignition-quench.service - Ignition (record completion). May 13 00:03:16.699235 initrd-setup-root-after-ignition[971]: grep: /sysroot/oem/oem-release: No such file or directory May 13 00:03:16.702086 initrd-setup-root-after-ignition[973]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory May 13 00:03:16.702086 initrd-setup-root-after-ignition[973]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory May 13 00:03:16.704365 initrd-setup-root-after-ignition[977]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory May 13 00:03:16.703652 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. May 13 00:03:16.705936 systemd[1]: Reached target ignition-complete.target - Ignition Complete. May 13 00:03:16.724378 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... May 13 00:03:16.745972 systemd[1]: initrd-parse-etc.service: Deactivated successfully. May 13 00:03:16.746119 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. May 13 00:03:16.747900 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. May 13 00:03:16.749322 systemd[1]: Reached target initrd.target - Initrd Default Target. May 13 00:03:16.750722 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. May 13 00:03:16.751556 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... May 13 00:03:16.766710 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. May 13 00:03:16.768864 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... May 13 00:03:16.780161 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. May 13 00:03:16.781084 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. May 13 00:03:16.782718 systemd[1]: Stopped target timers.target - Timer Units. May 13 00:03:16.784005 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. May 13 00:03:16.784131 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. 
May 13 00:03:16.786192 systemd[1]: Stopped target initrd.target - Initrd Default Target. May 13 00:03:16.787683 systemd[1]: Stopped target basic.target - Basic System. May 13 00:03:16.788872 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. May 13 00:03:16.790165 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. May 13 00:03:16.791662 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. May 13 00:03:16.793157 systemd[1]: Stopped target remote-fs.target - Remote File Systems. May 13 00:03:16.794615 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. May 13 00:03:16.796152 systemd[1]: Stopped target sysinit.target - System Initialization. May 13 00:03:16.797768 systemd[1]: Stopped target local-fs.target - Local File Systems. May 13 00:03:16.799079 systemd[1]: Stopped target swap.target - Swaps. May 13 00:03:16.800279 systemd[1]: dracut-pre-mount.service: Deactivated successfully. May 13 00:03:16.800405 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. May 13 00:03:16.802286 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. May 13 00:03:16.803784 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). May 13 00:03:16.805247 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. May 13 00:03:16.806201 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. May 13 00:03:16.807555 systemd[1]: dracut-initqueue.service: Deactivated successfully. May 13 00:03:16.807666 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. May 13 00:03:16.809802 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. May 13 00:03:16.809908 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). May 13 00:03:16.811457 systemd[1]: Stopped target paths.target - Path Units. May 13 00:03:16.812638 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. May 13 00:03:16.817183 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. May 13 00:03:16.818174 systemd[1]: Stopped target slices.target - Slice Units. May 13 00:03:16.819838 systemd[1]: Stopped target sockets.target - Socket Units. May 13 00:03:16.821042 systemd[1]: iscsid.socket: Deactivated successfully. May 13 00:03:16.821135 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. May 13 00:03:16.822300 systemd[1]: iscsiuio.socket: Deactivated successfully. May 13 00:03:16.822376 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. May 13 00:03:16.823526 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. May 13 00:03:16.823637 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. May 13 00:03:16.825000 systemd[1]: ignition-files.service: Deactivated successfully. May 13 00:03:16.825101 systemd[1]: Stopped ignition-files.service - Ignition (files). May 13 00:03:16.836336 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... May 13 00:03:16.837723 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... May 13 00:03:16.838412 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. May 13 00:03:16.838528 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. May 13 00:03:16.840109 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. 
May 13 00:03:16.840219 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. May 13 00:03:16.845206 systemd[1]: initrd-cleanup.service: Deactivated successfully. May 13 00:03:16.845297 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. May 13 00:03:16.847883 ignition[997]: INFO : Ignition 2.20.0 May 13 00:03:16.847883 ignition[997]: INFO : Stage: umount May 13 00:03:16.849958 ignition[997]: INFO : no configs at "/usr/lib/ignition/base.d" May 13 00:03:16.849958 ignition[997]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" May 13 00:03:16.849958 ignition[997]: INFO : umount: umount passed May 13 00:03:16.849958 ignition[997]: INFO : Ignition finished successfully May 13 00:03:16.850428 systemd[1]: ignition-mount.service: Deactivated successfully. May 13 00:03:16.851249 systemd[1]: Stopped ignition-mount.service - Ignition (mount). May 13 00:03:16.852494 systemd[1]: Stopped target network.target - Network. May 13 00:03:16.853566 systemd[1]: ignition-disks.service: Deactivated successfully. May 13 00:03:16.853738 systemd[1]: Stopped ignition-disks.service - Ignition (disks). May 13 00:03:16.855213 systemd[1]: ignition-kargs.service: Deactivated successfully. May 13 00:03:16.855260 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). May 13 00:03:16.857007 systemd[1]: ignition-setup.service: Deactivated successfully. May 13 00:03:16.857066 systemd[1]: Stopped ignition-setup.service - Ignition (setup). May 13 00:03:16.862088 systemd[1]: ignition-setup-pre.service: Deactivated successfully. May 13 00:03:16.862144 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. May 13 00:03:16.863710 systemd[1]: Stopping systemd-networkd.service - Network Configuration... May 13 00:03:16.865979 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... May 13 00:03:16.870278 systemd[1]: sysroot-boot.mount: Deactivated successfully. May 13 00:03:16.872565 systemd[1]: systemd-resolved.service: Deactivated successfully. May 13 00:03:16.874199 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. May 13 00:03:16.876555 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. May 13 00:03:16.876615 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. May 13 00:03:16.881211 systemd-networkd[762]: eth0: DHCPv6 lease lost May 13 00:03:16.882626 systemd[1]: systemd-networkd.service: Deactivated successfully. May 13 00:03:16.882748 systemd[1]: Stopped systemd-networkd.service - Network Configuration. May 13 00:03:16.884379 systemd[1]: systemd-networkd.socket: Deactivated successfully. May 13 00:03:16.884411 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. May 13 00:03:16.898323 systemd[1]: Stopping network-cleanup.service - Network Cleanup... May 13 00:03:16.899001 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. May 13 00:03:16.899075 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. May 13 00:03:16.900785 systemd[1]: systemd-sysctl.service: Deactivated successfully. May 13 00:03:16.900829 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. May 13 00:03:16.902341 systemd[1]: systemd-modules-load.service: Deactivated successfully. May 13 00:03:16.902387 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. 
May 13 00:03:16.905277 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... May 13 00:03:16.913958 systemd[1]: systemd-udevd.service: Deactivated successfully. May 13 00:03:16.914124 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. May 13 00:03:16.918489 systemd[1]: sysroot-boot.service: Deactivated successfully. May 13 00:03:16.918646 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. May 13 00:03:16.920088 systemd[1]: network-cleanup.service: Deactivated successfully. May 13 00:03:16.920187 systemd[1]: Stopped network-cleanup.service - Network Cleanup. May 13 00:03:16.923537 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. May 13 00:03:16.923623 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. May 13 00:03:16.925060 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. May 13 00:03:16.925092 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. May 13 00:03:16.926422 systemd[1]: dracut-pre-udev.service: Deactivated successfully. May 13 00:03:16.926467 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. May 13 00:03:16.928458 systemd[1]: dracut-cmdline.service: Deactivated successfully. May 13 00:03:16.928500 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. May 13 00:03:16.930549 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. May 13 00:03:16.930598 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. May 13 00:03:16.932987 systemd[1]: initrd-setup-root.service: Deactivated successfully. May 13 00:03:16.933045 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. May 13 00:03:16.949350 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... May 13 00:03:16.950250 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. May 13 00:03:16.950314 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. May 13 00:03:16.951915 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully. May 13 00:03:16.951955 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. May 13 00:03:16.953613 systemd[1]: kmod-static-nodes.service: Deactivated successfully. May 13 00:03:16.953663 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. May 13 00:03:16.955266 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. May 13 00:03:16.955307 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. May 13 00:03:16.957102 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. May 13 00:03:16.958529 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. May 13 00:03:16.960345 systemd[1]: Reached target initrd-switch-root.target - Switch Root. May 13 00:03:16.962877 systemd[1]: Starting initrd-switch-root.service - Switch Root... May 13 00:03:16.971989 systemd[1]: Switching root. May 13 00:03:17.002030 systemd-journald[239]: Journal stopped May 13 00:03:17.713938 systemd-journald[239]: Received SIGTERM from PID 1 (systemd). 
May 13 00:03:17.713989 kernel: SELinux: policy capability network_peer_controls=1 May 13 00:03:17.714002 kernel: SELinux: policy capability open_perms=1 May 13 00:03:17.714024 kernel: SELinux: policy capability extended_socket_class=1 May 13 00:03:17.714036 kernel: SELinux: policy capability always_check_network=0 May 13 00:03:17.714046 kernel: SELinux: policy capability cgroup_seclabel=1 May 13 00:03:17.714057 kernel: SELinux: policy capability nnp_nosuid_transition=1 May 13 00:03:17.714070 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 May 13 00:03:17.714080 kernel: SELinux: policy capability ioctl_skip_cloexec=0 May 13 00:03:17.714089 kernel: audit: type=1403 audit(1747094597.153:2): auid=4294967295 ses=4294967295 lsm=selinux res=1 May 13 00:03:17.714101 systemd[1]: Successfully loaded SELinux policy in 33.630ms. May 13 00:03:17.714121 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 10.042ms. May 13 00:03:17.714134 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) May 13 00:03:17.714172 systemd[1]: Detected virtualization kvm. May 13 00:03:17.714184 systemd[1]: Detected architecture arm64. May 13 00:03:17.714195 systemd[1]: Detected first boot. May 13 00:03:17.714207 systemd[1]: Initializing machine ID from VM UUID. May 13 00:03:17.714219 zram_generator::config[1043]: No configuration found. May 13 00:03:17.714229 systemd[1]: Populated /etc with preset unit settings. May 13 00:03:17.714240 systemd[1]: initrd-switch-root.service: Deactivated successfully. May 13 00:03:17.714250 systemd[1]: Stopped initrd-switch-root.service - Switch Root. May 13 00:03:17.714262 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. May 13 00:03:17.714273 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. May 13 00:03:17.714283 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. May 13 00:03:17.714293 systemd[1]: Created slice system-getty.slice - Slice /system/getty. May 13 00:03:17.714303 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. May 13 00:03:17.714314 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. May 13 00:03:17.714325 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. May 13 00:03:17.714335 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. May 13 00:03:17.714346 systemd[1]: Created slice user.slice - User and Session Slice. May 13 00:03:17.714357 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. May 13 00:03:17.714367 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. May 13 00:03:17.714378 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. May 13 00:03:17.714388 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. May 13 00:03:17.714400 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. 
May 13 00:03:17.714410 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... May 13 00:03:17.714420 systemd[1]: Expecting device dev-ttyAMA0.device - /dev/ttyAMA0... May 13 00:03:17.714435 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). May 13 00:03:17.714447 systemd[1]: Stopped target initrd-switch-root.target - Switch Root. May 13 00:03:17.714457 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems. May 13 00:03:17.714467 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System. May 13 00:03:17.714478 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. May 13 00:03:17.714492 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. May 13 00:03:17.714502 systemd[1]: Reached target remote-fs.target - Remote File Systems. May 13 00:03:17.714512 systemd[1]: Reached target slices.target - Slice Units. May 13 00:03:17.714522 systemd[1]: Reached target swap.target - Swaps. May 13 00:03:17.714534 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. May 13 00:03:17.714545 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. May 13 00:03:17.714555 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. May 13 00:03:17.714565 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. May 13 00:03:17.714575 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. May 13 00:03:17.714586 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. May 13 00:03:17.714596 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... May 13 00:03:17.714606 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... May 13 00:03:17.714616 systemd[1]: Mounting media.mount - External Media Directory... May 13 00:03:17.714629 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... May 13 00:03:17.714639 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... May 13 00:03:17.714650 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... May 13 00:03:17.714660 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). May 13 00:03:17.714670 systemd[1]: Reached target machines.target - Containers. May 13 00:03:17.714681 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... May 13 00:03:17.714691 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. May 13 00:03:17.714701 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... May 13 00:03:17.714712 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... May 13 00:03:17.714723 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... May 13 00:03:17.714733 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... May 13 00:03:17.714743 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... May 13 00:03:17.714754 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... May 13 00:03:17.714764 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... 
May 13 00:03:17.714774 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). May 13 00:03:17.714784 systemd[1]: systemd-fsck-root.service: Deactivated successfully. May 13 00:03:17.714795 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device. May 13 00:03:17.714807 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. May 13 00:03:17.714817 systemd[1]: Stopped systemd-fsck-usr.service. May 13 00:03:17.714827 kernel: loop: module loaded May 13 00:03:17.714837 systemd[1]: Starting systemd-journald.service - Journal Service... May 13 00:03:17.714847 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... May 13 00:03:17.714857 kernel: fuse: init (API version 7.39) May 13 00:03:17.714867 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... May 13 00:03:17.714877 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... May 13 00:03:17.714888 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... May 13 00:03:17.714899 systemd[1]: verity-setup.service: Deactivated successfully. May 13 00:03:17.714910 kernel: ACPI: bus type drm_connector registered May 13 00:03:17.714919 systemd[1]: Stopped verity-setup.service. May 13 00:03:17.714929 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. May 13 00:03:17.714939 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. May 13 00:03:17.714949 systemd[1]: Mounted media.mount - External Media Directory. May 13 00:03:17.714959 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. May 13 00:03:17.714971 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. May 13 00:03:17.714981 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. May 13 00:03:17.714991 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. May 13 00:03:17.715002 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. May 13 00:03:17.715017 systemd[1]: modprobe@configfs.service: Deactivated successfully. May 13 00:03:17.715048 systemd-journald[1107]: Collecting audit messages is disabled. May 13 00:03:17.715070 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. May 13 00:03:17.715081 systemd-journald[1107]: Journal started May 13 00:03:17.715101 systemd-journald[1107]: Runtime Journal (/run/log/journal/35535143d89944e5a20bb386650e068f) is 5.9M, max 47.3M, 41.4M free. May 13 00:03:17.510332 systemd[1]: Queued start job for default target multi-user.target. May 13 00:03:17.528497 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6. May 13 00:03:17.528864 systemd[1]: systemd-journald.service: Deactivated successfully. May 13 00:03:17.717738 systemd[1]: Started systemd-journald.service - Journal Service. May 13 00:03:17.718456 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. May 13 00:03:17.718591 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. May 13 00:03:17.719898 systemd[1]: modprobe@drm.service: Deactivated successfully. May 13 00:03:17.720032 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. May 13 00:03:17.721322 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. May 13 00:03:17.721445 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. 
May 13 00:03:17.722859 systemd[1]: modprobe@fuse.service: Deactivated successfully. May 13 00:03:17.722987 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. May 13 00:03:17.724401 systemd[1]: modprobe@loop.service: Deactivated successfully. May 13 00:03:17.726219 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. May 13 00:03:17.727487 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. May 13 00:03:17.728836 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. May 13 00:03:17.730288 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. May 13 00:03:17.741257 systemd[1]: Reached target network-pre.target - Preparation for Network. May 13 00:03:17.756263 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... May 13 00:03:17.758254 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... May 13 00:03:17.759317 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). May 13 00:03:17.759359 systemd[1]: Reached target local-fs.target - Local File Systems. May 13 00:03:17.761142 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink). May 13 00:03:17.763208 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... May 13 00:03:17.765251 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... May 13 00:03:17.766288 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. May 13 00:03:17.767952 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... May 13 00:03:17.770393 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... May 13 00:03:17.771585 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). May 13 00:03:17.772604 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... May 13 00:03:17.773751 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. May 13 00:03:17.775132 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... May 13 00:03:17.780240 systemd-journald[1107]: Time spent on flushing to /var/log/journal/35535143d89944e5a20bb386650e068f is 18.097ms for 840 entries. May 13 00:03:17.780240 systemd-journald[1107]: System Journal (/var/log/journal/35535143d89944e5a20bb386650e068f) is 8.0M, max 195.6M, 187.6M free. May 13 00:03:17.821435 systemd-journald[1107]: Received client request to flush runtime journal. May 13 00:03:17.821515 kernel: loop0: detected capacity change from 0 to 113536 May 13 00:03:17.821538 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher May 13 00:03:17.781374 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... May 13 00:03:17.783392 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... May 13 00:03:17.786697 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. May 13 00:03:17.788809 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. 
May 13 00:03:17.789877 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. May 13 00:03:17.791946 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. May 13 00:03:17.793223 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. May 13 00:03:17.796138 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. May 13 00:03:17.808645 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk... May 13 00:03:17.814201 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization... May 13 00:03:17.820604 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. May 13 00:03:17.823646 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. May 13 00:03:17.825476 systemd-tmpfiles[1155]: ACLs are not supported, ignoring. May 13 00:03:17.825489 systemd-tmpfiles[1155]: ACLs are not supported, ignoring. May 13 00:03:17.831276 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. May 13 00:03:17.842426 systemd[1]: Starting systemd-sysusers.service - Create System Users... May 13 00:03:17.843991 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. May 13 00:03:17.844735 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk. May 13 00:03:17.847228 udevadm[1165]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation-early.service, lvm2-activation.service not to pull it in. May 13 00:03:17.848189 kernel: loop1: detected capacity change from 0 to 194096 May 13 00:03:17.866943 systemd[1]: Finished systemd-sysusers.service - Create System Users. May 13 00:03:17.881173 kernel: loop2: detected capacity change from 0 to 116808 May 13 00:03:17.883372 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... May 13 00:03:17.898316 systemd-tmpfiles[1178]: ACLs are not supported, ignoring. May 13 00:03:17.898622 systemd-tmpfiles[1178]: ACLs are not supported, ignoring. May 13 00:03:17.904186 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. May 13 00:03:17.909184 kernel: loop3: detected capacity change from 0 to 113536 May 13 00:03:17.913329 kernel: loop4: detected capacity change from 0 to 194096 May 13 00:03:17.918170 kernel: loop5: detected capacity change from 0 to 116808 May 13 00:03:17.920865 (sd-merge)[1183]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes'. May 13 00:03:17.921269 (sd-merge)[1183]: Merged extensions into '/usr'. May 13 00:03:17.925592 systemd[1]: Reloading requested from client PID 1154 ('systemd-sysext') (unit systemd-sysext.service)... May 13 00:03:17.925613 systemd[1]: Reloading... May 13 00:03:17.983288 zram_generator::config[1209]: No configuration found. May 13 00:03:18.039611 ldconfig[1149]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. May 13 00:03:18.074556 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. May 13 00:03:18.109594 systemd[1]: Reloading finished in 183 ms. May 13 00:03:18.157187 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. 
May 13 00:03:18.158424 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. May 13 00:03:18.176405 systemd[1]: Starting ensure-sysext.service... May 13 00:03:18.178690 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... May 13 00:03:18.185159 systemd[1]: Reloading requested from client PID 1243 ('systemctl') (unit ensure-sysext.service)... May 13 00:03:18.185172 systemd[1]: Reloading... May 13 00:03:18.195828 systemd-tmpfiles[1244]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. May 13 00:03:18.196096 systemd-tmpfiles[1244]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. May 13 00:03:18.196736 systemd-tmpfiles[1244]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. May 13 00:03:18.196956 systemd-tmpfiles[1244]: ACLs are not supported, ignoring. May 13 00:03:18.197013 systemd-tmpfiles[1244]: ACLs are not supported, ignoring. May 13 00:03:18.199258 systemd-tmpfiles[1244]: Detected autofs mount point /boot during canonicalization of boot. May 13 00:03:18.199271 systemd-tmpfiles[1244]: Skipping /boot May 13 00:03:18.205610 systemd-tmpfiles[1244]: Detected autofs mount point /boot during canonicalization of boot. May 13 00:03:18.205626 systemd-tmpfiles[1244]: Skipping /boot May 13 00:03:18.235178 zram_generator::config[1272]: No configuration found. May 13 00:03:18.314876 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. May 13 00:03:18.349453 systemd[1]: Reloading finished in 164 ms. May 13 00:03:18.365998 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. May 13 00:03:18.378594 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. May 13 00:03:18.385701 systemd[1]: Starting audit-rules.service - Load Audit Rules... May 13 00:03:18.387960 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... May 13 00:03:18.390004 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... May 13 00:03:18.394431 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... May 13 00:03:18.401419 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... May 13 00:03:18.403726 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... May 13 00:03:18.406925 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. May 13 00:03:18.410686 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... May 13 00:03:18.412905 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... May 13 00:03:18.416493 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... May 13 00:03:18.417465 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. May 13 00:03:18.419652 systemd[1]: Starting systemd-userdbd.service - User Database Manager... May 13 00:03:18.427234 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. May 13 00:03:18.428634 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. 
May 13 00:03:18.428758 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. May 13 00:03:18.437684 systemd[1]: modprobe@loop.service: Deactivated successfully. May 13 00:03:18.437833 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. May 13 00:03:18.439740 systemd-udevd[1312]: Using default interface naming scheme 'v255'. May 13 00:03:18.440415 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. May 13 00:03:18.440599 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. May 13 00:03:18.447787 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. May 13 00:03:18.462412 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... May 13 00:03:18.465515 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... May 13 00:03:18.473486 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... May 13 00:03:18.475292 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. May 13 00:03:18.476569 systemd[1]: Starting systemd-update-done.service - Update is Completed... May 13 00:03:18.478243 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. May 13 00:03:18.480446 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. May 13 00:03:18.482750 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. May 13 00:03:18.484100 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. May 13 00:03:18.484224 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. May 13 00:03:18.486847 systemd[1]: Started systemd-userdbd.service - User Database Manager. May 13 00:03:18.488446 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. May 13 00:03:18.488581 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. May 13 00:03:18.488834 augenrules[1360]: No rules May 13 00:03:18.490762 systemd[1]: audit-rules.service: Deactivated successfully. May 13 00:03:18.490965 systemd[1]: Finished audit-rules.service - Load Audit Rules. May 13 00:03:18.492228 systemd[1]: modprobe@loop.service: Deactivated successfully. May 13 00:03:18.492348 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. May 13 00:03:18.493541 systemd[1]: Finished systemd-update-done.service - Update is Completed. May 13 00:03:18.510907 systemd[1]: Finished ensure-sysext.service. May 13 00:03:18.526408 systemd[1]: Starting audit-rules.service - Load Audit Rules... May 13 00:03:18.527343 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. May 13 00:03:18.532256 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... May 13 00:03:18.535322 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... May 13 00:03:18.536165 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 42 scanned by (udev-worker) (1346) May 13 00:03:18.537567 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... May 13 00:03:18.540829 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... May 13 00:03:18.541919 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. 
May 13 00:03:18.543454 systemd[1]: Starting systemd-networkd.service - Network Configuration... May 13 00:03:18.547125 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization... May 13 00:03:18.547933 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). May 13 00:03:18.548437 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. May 13 00:03:18.548585 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. May 13 00:03:18.550967 systemd[1]: Condition check resulted in dev-ttyAMA0.device - /dev/ttyAMA0 being skipped. May 13 00:03:18.556463 systemd[1]: modprobe@drm.service: Deactivated successfully. May 13 00:03:18.556614 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. May 13 00:03:18.561398 augenrules[1383]: /sbin/augenrules: No change May 13 00:03:18.566582 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. May 13 00:03:18.566728 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. May 13 00:03:18.569521 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). May 13 00:03:18.570908 systemd[1]: modprobe@loop.service: Deactivated successfully. May 13 00:03:18.571067 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. May 13 00:03:18.571841 augenrules[1412]: No rules May 13 00:03:18.573655 systemd-resolved[1310]: Positive Trust Anchors: May 13 00:03:18.573727 systemd-resolved[1310]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d May 13 00:03:18.573758 systemd-resolved[1310]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test May 13 00:03:18.573839 systemd[1]: audit-rules.service: Deactivated successfully. May 13 00:03:18.575288 systemd[1]: Finished audit-rules.service - Load Audit Rules. May 13 00:03:18.578111 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. May 13 00:03:18.582753 systemd-resolved[1310]: Defaulting to hostname 'linux'. May 13 00:03:18.584427 systemd[1]: Started systemd-resolved.service - Network Name Resolution. May 13 00:03:18.585918 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. May 13 00:03:18.602432 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. May 13 00:03:18.615328 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... May 13 00:03:18.624856 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization. May 13 00:03:18.625942 systemd[1]: Reached target time-set.target - System Time Set. 
May 13 00:03:18.629319 systemd-networkd[1394]: lo: Link UP May 13 00:03:18.629327 systemd-networkd[1394]: lo: Gained carrier May 13 00:03:18.630046 systemd-networkd[1394]: Enumeration completed May 13 00:03:18.630136 systemd[1]: Started systemd-networkd.service - Network Configuration. May 13 00:03:18.631210 systemd[1]: Reached target network.target - Network. May 13 00:03:18.632563 systemd-networkd[1394]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. May 13 00:03:18.632575 systemd-networkd[1394]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. May 13 00:03:18.633213 systemd-networkd[1394]: eth0: Link UP May 13 00:03:18.633222 systemd-networkd[1394]: eth0: Gained carrier May 13 00:03:18.633235 systemd-networkd[1394]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. May 13 00:03:18.639381 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... May 13 00:03:18.641015 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. May 13 00:03:18.645203 systemd-networkd[1394]: eth0: DHCPv4 address 10.0.0.131/16, gateway 10.0.0.1 acquired from 10.0.0.1 May 13 00:03:18.646158 systemd-timesyncd[1396]: Network configuration changed, trying to establish connection. May 13 00:03:18.648817 systemd-timesyncd[1396]: Contacted time server 10.0.0.1:123 (10.0.0.1). May 13 00:03:18.648868 systemd-timesyncd[1396]: Initial clock synchronization to Tue 2025-05-13 00:03:18.523220 UTC. May 13 00:03:18.683491 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... May 13 00:03:18.692473 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization. May 13 00:03:18.694918 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes... May 13 00:03:18.726280 lvm[1433]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. May 13 00:03:18.728282 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. May 13 00:03:18.761304 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes. May 13 00:03:18.762880 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. May 13 00:03:18.763774 systemd[1]: Reached target sysinit.target - System Initialization. May 13 00:03:18.764628 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. May 13 00:03:18.765527 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. May 13 00:03:18.766585 systemd[1]: Started logrotate.timer - Daily rotation of log files. May 13 00:03:18.767468 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. May 13 00:03:18.768397 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. May 13 00:03:18.769255 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). May 13 00:03:18.769287 systemd[1]: Reached target paths.target - Path Units. May 13 00:03:18.769897 systemd[1]: Reached target timers.target - Timer Units. May 13 00:03:18.771455 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. 
May 13 00:03:18.773917 systemd[1]: Starting docker.socket - Docker Socket for the API... May 13 00:03:18.784923 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. May 13 00:03:18.787125 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes... May 13 00:03:18.788754 systemd[1]: Listening on docker.socket - Docker Socket for the API. May 13 00:03:18.789952 systemd[1]: Reached target sockets.target - Socket Units. May 13 00:03:18.790891 systemd[1]: Reached target basic.target - Basic System. May 13 00:03:18.791832 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. May 13 00:03:18.791864 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. May 13 00:03:18.792753 systemd[1]: Starting containerd.service - containerd container runtime... May 13 00:03:18.797170 lvm[1440]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. May 13 00:03:18.794715 systemd[1]: Starting dbus.service - D-Bus System Message Bus... May 13 00:03:18.797321 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... May 13 00:03:18.801310 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... May 13 00:03:18.803253 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). May 13 00:03:18.806407 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... May 13 00:03:18.810315 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... May 13 00:03:18.814726 jq[1443]: false May 13 00:03:18.814389 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... May 13 00:03:18.817757 systemd[1]: Starting systemd-logind.service - User Login Management... May 13 00:03:18.819463 extend-filesystems[1444]: Found loop3 May 13 00:03:18.819463 extend-filesystems[1444]: Found loop4 May 13 00:03:18.819463 extend-filesystems[1444]: Found loop5 May 13 00:03:18.819463 extend-filesystems[1444]: Found vda May 13 00:03:18.819463 extend-filesystems[1444]: Found vda1 May 13 00:03:18.819463 extend-filesystems[1444]: Found vda2 May 13 00:03:18.819463 extend-filesystems[1444]: Found vda3 May 13 00:03:18.819463 extend-filesystems[1444]: Found usr May 13 00:03:18.819463 extend-filesystems[1444]: Found vda4 May 13 00:03:18.819463 extend-filesystems[1444]: Found vda6 May 13 00:03:18.819463 extend-filesystems[1444]: Found vda7 May 13 00:03:18.819463 extend-filesystems[1444]: Found vda9 May 13 00:03:18.819463 extend-filesystems[1444]: Checking size of /dev/vda9 May 13 00:03:18.819309 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). May 13 00:03:18.819692 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. May 13 00:03:18.820305 systemd[1]: Starting update-engine.service - Update Engine... May 13 00:03:18.823016 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... May 13 00:03:18.834520 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes. May 13 00:03:18.845171 dbus-daemon[1442]: [system] SELinux support is enabled May 13 00:03:18.842611 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. 
May 13 00:03:18.842791 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. May 13 00:03:18.843060 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. May 13 00:03:18.843207 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. May 13 00:03:18.845986 systemd[1]: Started dbus.service - D-Bus System Message Bus. May 13 00:03:18.856209 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 42 scanned by (udev-worker) (1346) May 13 00:03:18.856270 extend-filesystems[1444]: Resized partition /dev/vda9 May 13 00:03:18.860045 jq[1453]: true May 13 00:03:18.861209 extend-filesystems[1468]: resize2fs 1.47.1 (20-May-2024) May 13 00:03:18.866066 kernel: EXT4-fs (vda9): resizing filesystem from 553472 to 1864699 blocks May 13 00:03:18.862446 (ntainerd)[1467]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR May 13 00:03:18.869841 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). May 13 00:03:18.870629 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. May 13 00:03:18.874308 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). May 13 00:03:18.874329 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. May 13 00:03:18.878790 systemd[1]: motdgen.service: Deactivated successfully. May 13 00:03:18.880286 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. May 13 00:03:18.887166 kernel: EXT4-fs (vda9): resized filesystem to 1864699 May 13 00:03:18.897476 update_engine[1452]: I20250513 00:03:18.887432 1452 main.cc:92] Flatcar Update Engine starting May 13 00:03:18.897476 update_engine[1452]: I20250513 00:03:18.893779 1452 update_check_scheduler.cc:74] Next update check in 5m51s May 13 00:03:18.893741 systemd[1]: Started update-engine.service - Update Engine. May 13 00:03:18.896421 systemd[1]: Started locksmithd.service - Cluster reboot manager. May 13 00:03:18.897898 extend-filesystems[1468]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required May 13 00:03:18.897898 extend-filesystems[1468]: old_desc_blocks = 1, new_desc_blocks = 1 May 13 00:03:18.897898 extend-filesystems[1468]: The filesystem on /dev/vda9 is now 1864699 (4k) blocks long. May 13 00:03:18.908691 extend-filesystems[1444]: Resized filesystem in /dev/vda9 May 13 00:03:18.909431 jq[1473]: true May 13 00:03:18.899283 systemd[1]: extend-filesystems.service: Deactivated successfully. May 13 00:03:18.899460 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. May 13 00:03:18.899818 systemd-logind[1449]: Watching system buttons on /dev/input/event0 (Power Button) May 13 00:03:18.901499 systemd-logind[1449]: New seat seat0. May 13 00:03:18.904615 systemd[1]: Started systemd-logind.service - User Login Management. May 13 00:03:18.950234 bash[1494]: Updated "/home/core/.ssh/authorized_keys" May 13 00:03:18.951833 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. May 13 00:03:18.953999 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met. 
May 13 00:03:18.962053 locksmithd[1476]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" May 13 00:03:19.038687 containerd[1467]: time="2025-05-13T00:03:19.038611725Z" level=info msg="starting containerd" revision=9b2ad7760328148397346d10c7b2004271249db4 version=v1.7.23 May 13 00:03:19.065126 containerd[1467]: time="2025-05-13T00:03:19.065075060Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 May 13 00:03:19.066496 containerd[1467]: time="2025-05-13T00:03:19.066448839Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.89-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 May 13 00:03:19.066496 containerd[1467]: time="2025-05-13T00:03:19.066487969Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 May 13 00:03:19.066554 containerd[1467]: time="2025-05-13T00:03:19.066504677Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 May 13 00:03:19.066692 containerd[1467]: time="2025-05-13T00:03:19.066667073Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1 May 13 00:03:19.066716 containerd[1467]: time="2025-05-13T00:03:19.066693504Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1 May 13 00:03:19.066760 containerd[1467]: time="2025-05-13T00:03:19.066745692Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1 May 13 00:03:19.066781 containerd[1467]: time="2025-05-13T00:03:19.066760534Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 May 13 00:03:19.066983 containerd[1467]: time="2025-05-13T00:03:19.066958211Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 May 13 00:03:19.066983 containerd[1467]: time="2025-05-13T00:03:19.066977459Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 May 13 00:03:19.067025 containerd[1467]: time="2025-05-13T00:03:19.066991032Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1 May 13 00:03:19.067025 containerd[1467]: time="2025-05-13T00:03:19.067000517Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 May 13 00:03:19.067078 containerd[1467]: time="2025-05-13T00:03:19.067064769Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 May 13 00:03:19.067299 containerd[1467]: time="2025-05-13T00:03:19.067281337Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 May 13 00:03:19.067392 containerd[1467]: time="2025-05-13T00:03:19.067376861Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." 
error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 May 13 00:03:19.067418 containerd[1467]: time="2025-05-13T00:03:19.067393649Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 May 13 00:03:19.067486 containerd[1467]: time="2025-05-13T00:03:19.067463496Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1 May 13 00:03:19.067536 containerd[1467]: time="2025-05-13T00:03:19.067523700Z" level=info msg="metadata content store policy set" policy=shared May 13 00:03:19.070800 containerd[1467]: time="2025-05-13T00:03:19.070737849Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 May 13 00:03:19.070800 containerd[1467]: time="2025-05-13T00:03:19.070788052Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 May 13 00:03:19.070873 containerd[1467]: time="2025-05-13T00:03:19.070802894Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1 May 13 00:03:19.070873 containerd[1467]: time="2025-05-13T00:03:19.070817539Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1 May 13 00:03:19.070873 containerd[1467]: time="2025-05-13T00:03:19.070830357Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 May 13 00:03:19.070976 containerd[1467]: time="2025-05-13T00:03:19.070959020Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 May 13 00:03:19.071235 containerd[1467]: time="2025-05-13T00:03:19.071218846Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 May 13 00:03:19.071332 containerd[1467]: time="2025-05-13T00:03:19.071315283Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2 May 13 00:03:19.071363 containerd[1467]: time="2025-05-13T00:03:19.071336357Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1 May 13 00:03:19.071363 containerd[1467]: time="2025-05-13T00:03:19.071351755Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1 May 13 00:03:19.071397 containerd[1467]: time="2025-05-13T00:03:19.071365129Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 May 13 00:03:19.071397 containerd[1467]: time="2025-05-13T00:03:19.071378067Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 May 13 00:03:19.071397 containerd[1467]: time="2025-05-13T00:03:19.071389378Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 May 13 00:03:19.071455 containerd[1467]: time="2025-05-13T00:03:19.071403347Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 May 13 00:03:19.071455 containerd[1467]: time="2025-05-13T00:03:19.071418547Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." 
type=io.containerd.service.v1 May 13 00:03:19.071455 containerd[1467]: time="2025-05-13T00:03:19.071430572Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 May 13 00:03:19.071455 containerd[1467]: time="2025-05-13T00:03:19.071450931Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 May 13 00:03:19.071529 containerd[1467]: time="2025-05-13T00:03:19.071463273Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 May 13 00:03:19.071529 containerd[1467]: time="2025-05-13T00:03:19.071483632Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 May 13 00:03:19.071529 containerd[1467]: time="2025-05-13T00:03:19.071497205Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 May 13 00:03:19.071529 containerd[1467]: time="2025-05-13T00:03:19.071509825Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 May 13 00:03:19.071529 containerd[1467]: time="2025-05-13T00:03:19.071523200Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 May 13 00:03:19.071615 containerd[1467]: time="2025-05-13T00:03:19.071534867Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 May 13 00:03:19.071615 containerd[1467]: time="2025-05-13T00:03:19.071548321Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 May 13 00:03:19.071615 containerd[1467]: time="2025-05-13T00:03:19.071560941Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 May 13 00:03:19.071615 containerd[1467]: time="2025-05-13T00:03:19.071573442Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 May 13 00:03:19.071615 containerd[1467]: time="2025-05-13T00:03:19.071591817Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1 May 13 00:03:19.071615 containerd[1467]: time="2025-05-13T00:03:19.071605787Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1 May 13 00:03:19.071711 containerd[1467]: time="2025-05-13T00:03:19.071618288Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 May 13 00:03:19.071711 containerd[1467]: time="2025-05-13T00:03:19.071629797Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1 May 13 00:03:19.071711 containerd[1467]: time="2025-05-13T00:03:19.071641147Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 May 13 00:03:19.071711 containerd[1467]: time="2025-05-13T00:03:19.071654283Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1 May 13 00:03:19.071711 containerd[1467]: time="2025-05-13T00:03:19.071673729Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1 May 13 00:03:19.071711 containerd[1467]: time="2025-05-13T00:03:19.071686072Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." 
type=io.containerd.grpc.v1 May 13 00:03:19.071711 containerd[1467]: time="2025-05-13T00:03:19.071696351Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 May 13 00:03:19.071880 containerd[1467]: time="2025-05-13T00:03:19.071868033Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 May 13 00:03:19.071900 containerd[1467]: time="2025-05-13T00:03:19.071886368Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1 May 13 00:03:19.071900 containerd[1467]: time="2025-05-13T00:03:19.071897322Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 May 13 00:03:19.071938 containerd[1467]: time="2025-05-13T00:03:19.071909267Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1 May 13 00:03:19.071938 containerd[1467]: time="2025-05-13T00:03:19.071918593Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 May 13 00:03:19.071938 containerd[1467]: time="2025-05-13T00:03:19.071930857Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1 May 13 00:03:19.071988 containerd[1467]: time="2025-05-13T00:03:19.071940183Z" level=info msg="NRI interface is disabled by configuration." May 13 00:03:19.071988 containerd[1467]: time="2025-05-13T00:03:19.071957049Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1 May 13 00:03:19.072335 containerd[1467]: time="2025-05-13T00:03:19.072291049Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false 
X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" May 13 00:03:19.072441 containerd[1467]: time="2025-05-13T00:03:19.072342165Z" level=info msg="Connect containerd service" May 13 00:03:19.072441 containerd[1467]: time="2025-05-13T00:03:19.072373398Z" level=info msg="using legacy CRI server" May 13 00:03:19.072441 containerd[1467]: time="2025-05-13T00:03:19.072380263Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" May 13 00:03:19.072624 containerd[1467]: time="2025-05-13T00:03:19.072609094Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" May 13 00:03:19.074990 containerd[1467]: time="2025-05-13T00:03:19.074953598Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" May 13 00:03:19.075500 containerd[1467]: time="2025-05-13T00:03:19.075237831Z" level=info msg="Start subscribing containerd event" May 13 00:03:19.075500 containerd[1467]: time="2025-05-13T00:03:19.075291526Z" level=info msg="Start recovering state" May 13 00:03:19.075500 containerd[1467]: time="2025-05-13T00:03:19.075354588Z" level=info msg="Start event monitor" May 13 00:03:19.075500 containerd[1467]: time="2025-05-13T00:03:19.075366097Z" level=info msg="Start snapshots syncer" May 13 00:03:19.075500 containerd[1467]: time="2025-05-13T00:03:19.075375582Z" level=info msg="Start cni network conf syncer for default" May 13 00:03:19.075500 containerd[1467]: time="2025-05-13T00:03:19.075384392Z" level=info msg="Start streaming server" May 13 00:03:19.075630 containerd[1467]: time="2025-05-13T00:03:19.075521349Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc May 13 00:03:19.075630 containerd[1467]: time="2025-05-13T00:03:19.075562147Z" level=info msg=serving... address=/run/containerd/containerd.sock May 13 00:03:19.075759 containerd[1467]: time="2025-05-13T00:03:19.075745854Z" level=info msg="containerd successfully booted in 0.037989s" May 13 00:03:19.075804 systemd[1]: Started containerd.service - containerd container runtime. May 13 00:03:20.372415 sshd_keygen[1464]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 May 13 00:03:20.392181 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. May 13 00:03:20.398396 systemd[1]: Starting issuegen.service - Generate /run/issue... May 13 00:03:20.403922 systemd[1]: issuegen.service: Deactivated successfully. May 13 00:03:20.404141 systemd[1]: Finished issuegen.service - Generate /run/issue. May 13 00:03:20.407462 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... 
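For orientation only: the CRI plugin configuration that containerd dumped above (overlayfs snapshotter, runc runtime with SystemdCgroup=true, sandbox image registry.k8s.io/pause:3.8, CNI binaries under /opt/cni/bin and CNI config under /etc/cni/net.d) corresponds roughly to a containerd 1.7 config.toml along the following lines. This is a sketch reconstructed from the logged values, not the exact file shipped on this host.

    # Hypothetical /etc/containerd/config.toml fragment matching the values logged above
    version = 2
    [plugins."io.containerd.grpc.v1.cri"]
      sandbox_image = "registry.k8s.io/pause:3.8"
      [plugins."io.containerd.grpc.v1.cri".containerd]
        snapshotter = "overlayfs"
        default_runtime_name = "runc"
        [plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc]
          runtime_type = "io.containerd.runc.v2"
          [plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc.options]
            SystemdCgroup = true
      [plugins."io.containerd.grpc.v1.cri".cni]
        bin_dir = "/opt/cni/bin"
        conf_dir = "/etc/cni/net.d"

The "failed to load cni during init" error above is consistent with this: /etc/cni/net.d is still empty at this point and is only populated later, once the Calico pods seen further down start running.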
May 13 00:03:20.420854 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. May 13 00:03:20.423297 systemd[1]: Started getty@tty1.service - Getty on tty1. May 13 00:03:20.425052 systemd[1]: Started serial-getty@ttyAMA0.service - Serial Getty on ttyAMA0. May 13 00:03:20.426223 systemd[1]: Reached target getty.target - Login Prompts. May 13 00:03:20.565233 systemd-networkd[1394]: eth0: Gained IPv6LL May 13 00:03:20.567647 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. May 13 00:03:20.569100 systemd[1]: Reached target network-online.target - Network is Online. May 13 00:03:20.586466 systemd[1]: Starting coreos-metadata.service - QEMU metadata agent... May 13 00:03:20.588525 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... May 13 00:03:20.590326 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... May 13 00:03:20.605554 systemd[1]: coreos-metadata.service: Deactivated successfully. May 13 00:03:20.605729 systemd[1]: Finished coreos-metadata.service - QEMU metadata agent. May 13 00:03:20.607189 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. May 13 00:03:20.608467 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. May 13 00:03:21.073489 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. May 13 00:03:21.074888 systemd[1]: Reached target multi-user.target - Multi-User System. May 13 00:03:21.077813 (kubelet)[1548]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS May 13 00:03:21.079245 systemd[1]: Startup finished in 590ms (kernel) + 4.430s (initrd) + 3.971s (userspace) = 8.993s. May 13 00:03:21.554135 kubelet[1548]: E0513 00:03:21.553984 1548 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" May 13 00:03:21.556382 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE May 13 00:03:21.556528 systemd[1]: kubelet.service: Failed with result 'exit-code'. May 13 00:03:25.548220 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. May 13 00:03:25.549487 systemd[1]: Started sshd@0-10.0.0.131:22-10.0.0.1:58656.service - OpenSSH per-connection server daemon (10.0.0.1:58656). May 13 00:03:25.612244 sshd[1562]: Accepted publickey for core from 10.0.0.1 port 58656 ssh2: RSA SHA256:SZ6XZJGCUk0YkyeRdV4YqMMB8lYnDHpF+o6Fl5qyEaU May 13 00:03:25.614412 sshd-session[1562]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 13 00:03:25.623376 systemd-logind[1449]: New session 1 of user core. May 13 00:03:25.624413 systemd[1]: Created slice user-500.slice - User Slice of UID 500. May 13 00:03:25.631436 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... May 13 00:03:25.640764 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. May 13 00:03:25.643092 systemd[1]: Starting user@500.service - User Manager for UID 500... 
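The kubelet exit above ("failed to load kubelet config file, path: /var/lib/kubelet/config.yaml ... no such file or directory") is expected on a first boot, before kubeadm or an equivalent provisioner has written that file; the service is restarted successfully later in this log. As a rough illustration only, a minimal KubeletConfiguration of the kind that normally lives at that path could look like the sketch below. Every value here is an assumption chosen to be consistent with what the kubelet reports later in this log (systemd cgroup driver, containerd socket, static pod path, default hard-eviction thresholds), not the file this node actually received.

    # Hypothetical minimal /var/lib/kubelet/config.yaml (illustrative values only)
    apiVersion: kubelet.config.k8s.io/v1beta1
    kind: KubeletConfiguration
    cgroupDriver: systemd                    # matches SystemdCgroup=true on the runc runtime
    containerRuntimeEndpoint: unix:///run/containerd/containerd.sock
    staticPodPath: /etc/kubernetes/manifests # the path the kubelet later tries to read
    evictionHard:                            # same thresholds the container manager logs below
      memory.available: "100Mi"
      nodefs.available: "10%"
      nodefs.inodesFree: "5%"
      imagefs.available: "15%"

The deprecation warnings printed when the kubelet restarts later (--container-runtime-endpoint, --volume-plugin-dir) point at the same mechanism: those flags are meant to move into this config file.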
May 13 00:03:25.649850 (systemd)[1566]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) May 13 00:03:25.735763 systemd[1566]: Queued start job for default target default.target. May 13 00:03:25.745029 systemd[1566]: Created slice app.slice - User Application Slice. May 13 00:03:25.745056 systemd[1566]: Reached target paths.target - Paths. May 13 00:03:25.745069 systemd[1566]: Reached target timers.target - Timers. May 13 00:03:25.746255 systemd[1566]: Starting dbus.socket - D-Bus User Message Bus Socket... May 13 00:03:25.755954 systemd[1566]: Listening on dbus.socket - D-Bus User Message Bus Socket. May 13 00:03:25.756015 systemd[1566]: Reached target sockets.target - Sockets. May 13 00:03:25.756027 systemd[1566]: Reached target basic.target - Basic System. May 13 00:03:25.756062 systemd[1566]: Reached target default.target - Main User Target. May 13 00:03:25.756087 systemd[1566]: Startup finished in 100ms. May 13 00:03:25.756549 systemd[1]: Started user@500.service - User Manager for UID 500. May 13 00:03:25.757990 systemd[1]: Started session-1.scope - Session 1 of User core. May 13 00:03:25.816571 systemd[1]: Started sshd@1-10.0.0.131:22-10.0.0.1:58670.service - OpenSSH per-connection server daemon (10.0.0.1:58670). May 13 00:03:25.854963 sshd[1577]: Accepted publickey for core from 10.0.0.1 port 58670 ssh2: RSA SHA256:SZ6XZJGCUk0YkyeRdV4YqMMB8lYnDHpF+o6Fl5qyEaU May 13 00:03:25.856129 sshd-session[1577]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 13 00:03:25.859794 systemd-logind[1449]: New session 2 of user core. May 13 00:03:25.869331 systemd[1]: Started session-2.scope - Session 2 of User core. May 13 00:03:25.919059 sshd[1579]: Connection closed by 10.0.0.1 port 58670 May 13 00:03:25.919364 sshd-session[1577]: pam_unix(sshd:session): session closed for user core May 13 00:03:25.932549 systemd[1]: sshd@1-10.0.0.131:22-10.0.0.1:58670.service: Deactivated successfully. May 13 00:03:25.933806 systemd[1]: session-2.scope: Deactivated successfully. May 13 00:03:25.936971 systemd-logind[1449]: Session 2 logged out. Waiting for processes to exit. May 13 00:03:25.937978 systemd[1]: Started sshd@2-10.0.0.131:22-10.0.0.1:58684.service - OpenSSH per-connection server daemon (10.0.0.1:58684). May 13 00:03:25.939542 systemd-logind[1449]: Removed session 2. May 13 00:03:25.976420 sshd[1584]: Accepted publickey for core from 10.0.0.1 port 58684 ssh2: RSA SHA256:SZ6XZJGCUk0YkyeRdV4YqMMB8lYnDHpF+o6Fl5qyEaU May 13 00:03:25.977611 sshd-session[1584]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 13 00:03:25.981968 systemd-logind[1449]: New session 3 of user core. May 13 00:03:25.995333 systemd[1]: Started session-3.scope - Session 3 of User core. May 13 00:03:26.042178 sshd[1586]: Connection closed by 10.0.0.1 port 58684 May 13 00:03:26.042548 sshd-session[1584]: pam_unix(sshd:session): session closed for user core May 13 00:03:26.055393 systemd[1]: sshd@2-10.0.0.131:22-10.0.0.1:58684.service: Deactivated successfully. May 13 00:03:26.056662 systemd[1]: session-3.scope: Deactivated successfully. May 13 00:03:26.060174 systemd-logind[1449]: Session 3 logged out. Waiting for processes to exit. May 13 00:03:26.060367 systemd[1]: Started sshd@3-10.0.0.131:22-10.0.0.1:58688.service - OpenSSH per-connection server daemon (10.0.0.1:58688). May 13 00:03:26.061913 systemd-logind[1449]: Removed session 3. 
May 13 00:03:26.099102 sshd[1591]: Accepted publickey for core from 10.0.0.1 port 58688 ssh2: RSA SHA256:SZ6XZJGCUk0YkyeRdV4YqMMB8lYnDHpF+o6Fl5qyEaU May 13 00:03:26.100427 sshd-session[1591]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 13 00:03:26.104182 systemd-logind[1449]: New session 4 of user core. May 13 00:03:26.122324 systemd[1]: Started session-4.scope - Session 4 of User core. May 13 00:03:26.174087 sshd[1593]: Connection closed by 10.0.0.1 port 58688 May 13 00:03:26.174536 sshd-session[1591]: pam_unix(sshd:session): session closed for user core May 13 00:03:26.186173 systemd[1]: sshd@3-10.0.0.131:22-10.0.0.1:58688.service: Deactivated successfully. May 13 00:03:26.187353 systemd[1]: session-4.scope: Deactivated successfully. May 13 00:03:26.187853 systemd-logind[1449]: Session 4 logged out. Waiting for processes to exit. May 13 00:03:26.189293 systemd[1]: Started sshd@4-10.0.0.131:22-10.0.0.1:58702.service - OpenSSH per-connection server daemon (10.0.0.1:58702). May 13 00:03:26.189958 systemd-logind[1449]: Removed session 4. May 13 00:03:26.226336 sshd[1598]: Accepted publickey for core from 10.0.0.1 port 58702 ssh2: RSA SHA256:SZ6XZJGCUk0YkyeRdV4YqMMB8lYnDHpF+o6Fl5qyEaU May 13 00:03:26.227378 sshd-session[1598]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 13 00:03:26.231081 systemd-logind[1449]: New session 5 of user core. May 13 00:03:26.238318 systemd[1]: Started session-5.scope - Session 5 of User core. May 13 00:03:26.300005 sudo[1601]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 May 13 00:03:26.300301 sudo[1601]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) May 13 00:03:26.317738 sudo[1601]: pam_unix(sudo:session): session closed for user root May 13 00:03:26.320788 sshd[1600]: Connection closed by 10.0.0.1 port 58702 May 13 00:03:26.321314 sshd-session[1598]: pam_unix(sshd:session): session closed for user core May 13 00:03:26.329445 systemd[1]: sshd@4-10.0.0.131:22-10.0.0.1:58702.service: Deactivated successfully. May 13 00:03:26.330730 systemd[1]: session-5.scope: Deactivated successfully. May 13 00:03:26.333131 systemd-logind[1449]: Session 5 logged out. Waiting for processes to exit. May 13 00:03:26.334243 systemd[1]: Started sshd@5-10.0.0.131:22-10.0.0.1:58714.service - OpenSSH per-connection server daemon (10.0.0.1:58714). May 13 00:03:26.334866 systemd-logind[1449]: Removed session 5. May 13 00:03:26.374220 sshd[1606]: Accepted publickey for core from 10.0.0.1 port 58714 ssh2: RSA SHA256:SZ6XZJGCUk0YkyeRdV4YqMMB8lYnDHpF+o6Fl5qyEaU May 13 00:03:26.374576 sshd-session[1606]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 13 00:03:26.378331 systemd-logind[1449]: New session 6 of user core. May 13 00:03:26.389295 systemd[1]: Started session-6.scope - Session 6 of User core. 
May 13 00:03:26.440130 sudo[1610]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules May 13 00:03:26.440396 sudo[1610]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) May 13 00:03:26.443158 sudo[1610]: pam_unix(sudo:session): session closed for user root May 13 00:03:26.447171 sudo[1609]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules May 13 00:03:26.447630 sudo[1609]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) May 13 00:03:26.469425 systemd[1]: Starting audit-rules.service - Load Audit Rules... May 13 00:03:26.490627 augenrules[1632]: No rules May 13 00:03:26.491792 systemd[1]: audit-rules.service: Deactivated successfully. May 13 00:03:26.491955 systemd[1]: Finished audit-rules.service - Load Audit Rules. May 13 00:03:26.493266 sudo[1609]: pam_unix(sudo:session): session closed for user root May 13 00:03:26.494340 sshd[1608]: Connection closed by 10.0.0.1 port 58714 May 13 00:03:26.494672 sshd-session[1606]: pam_unix(sshd:session): session closed for user core May 13 00:03:26.508330 systemd[1]: sshd@5-10.0.0.131:22-10.0.0.1:58714.service: Deactivated successfully. May 13 00:03:26.509541 systemd[1]: session-6.scope: Deactivated successfully. May 13 00:03:26.510798 systemd-logind[1449]: Session 6 logged out. Waiting for processes to exit. May 13 00:03:26.514109 systemd[1]: Started sshd@6-10.0.0.131:22-10.0.0.1:58724.service - OpenSSH per-connection server daemon (10.0.0.1:58724). May 13 00:03:26.514928 systemd-logind[1449]: Removed session 6. May 13 00:03:26.552198 sshd[1640]: Accepted publickey for core from 10.0.0.1 port 58724 ssh2: RSA SHA256:SZ6XZJGCUk0YkyeRdV4YqMMB8lYnDHpF+o6Fl5qyEaU May 13 00:03:26.553290 sshd-session[1640]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 13 00:03:26.556961 systemd-logind[1449]: New session 7 of user core. May 13 00:03:26.571393 systemd[1]: Started session-7.scope - Session 7 of User core. May 13 00:03:26.622229 sudo[1643]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh May 13 00:03:26.622483 sudo[1643]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) May 13 00:03:26.641421 systemd[1]: Starting coreos-metadata.service - QEMU metadata agent... May 13 00:03:26.657019 systemd[1]: coreos-metadata.service: Deactivated successfully. May 13 00:03:26.658223 systemd[1]: Finished coreos-metadata.service - QEMU metadata agent. May 13 00:03:27.170312 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. May 13 00:03:27.182401 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... May 13 00:03:27.197570 systemd[1]: Reloading requested from client PID 1691 ('systemctl') (unit session-7.scope)... May 13 00:03:27.197585 systemd[1]: Reloading... May 13 00:03:27.272194 zram_generator::config[1729]: No configuration found. May 13 00:03:27.442736 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. May 13 00:03:27.497263 systemd[1]: Reloading finished in 299 ms. May 13 00:03:27.533031 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM May 13 00:03:27.533095 systemd[1]: kubelet.service: Failed with result 'signal'. 
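The daemon reload above also warns that docker.socket's ListenStream= still references the legacy /var/run directory and that systemd rewrote it to /run/docker.sock on the fly. If one wanted to make that permanent, a drop-in along these lines is the usual shape of the fix; this is a generic sketch, not a file present on this host.

    # Hypothetical /etc/systemd/system/docker.socket.d/10-run-path.conf
    [Socket]
    ListenStream=
    ListenStream=/run/docker.sock

The empty ListenStream= line clears the inherited list before the corrected path is added, which is how list-valued systemd settings are overridden from a drop-in.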
May 13 00:03:27.533401 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. May 13 00:03:27.535738 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... May 13 00:03:27.635873 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. May 13 00:03:27.641528 (kubelet)[1775]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS May 13 00:03:27.680899 kubelet[1775]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. May 13 00:03:27.680899 kubelet[1775]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. May 13 00:03:27.680899 kubelet[1775]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. May 13 00:03:27.681758 kubelet[1775]: I0513 00:03:27.681711 1775 server.go:205] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" May 13 00:03:29.010899 kubelet[1775]: I0513 00:03:29.010854 1775 server.go:484] "Kubelet version" kubeletVersion="v1.30.1" May 13 00:03:29.010899 kubelet[1775]: I0513 00:03:29.010885 1775 server.go:486] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" May 13 00:03:29.011344 kubelet[1775]: I0513 00:03:29.011085 1775 server.go:927] "Client rotation is on, will bootstrap in background" May 13 00:03:29.036915 kubelet[1775]: I0513 00:03:29.036886 1775 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" May 13 00:03:29.050163 kubelet[1775]: I0513 00:03:29.050113 1775 server.go:742] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" May 13 00:03:29.052179 kubelet[1775]: I0513 00:03:29.051878 1775 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] May 13 00:03:29.052499 kubelet[1775]: I0513 00:03:29.051932 1775 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"10.0.0.131","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null} May 13 00:03:29.052586 kubelet[1775]: I0513 00:03:29.052562 1775 topology_manager.go:138] "Creating topology manager with none policy" May 13 00:03:29.052586 kubelet[1775]: I0513 00:03:29.052575 1775 container_manager_linux.go:301] "Creating device plugin manager" May 13 00:03:29.052864 kubelet[1775]: I0513 00:03:29.052850 1775 state_mem.go:36] "Initialized new in-memory state store" May 13 00:03:29.053782 kubelet[1775]: I0513 00:03:29.053762 1775 kubelet.go:400] "Attempting to sync node with API server" May 13 00:03:29.053782 kubelet[1775]: I0513 00:03:29.053784 1775 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests" May 13 00:03:29.054498 kubelet[1775]: I0513 00:03:29.054042 1775 kubelet.go:312] "Adding apiserver pod source" May 13 00:03:29.054498 kubelet[1775]: I0513 00:03:29.054236 1775 apiserver.go:42] "Waiting for node sync before watching apiserver pods" May 13 00:03:29.054498 kubelet[1775]: E0513 00:03:29.054451 1775 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 13 00:03:29.054498 kubelet[1775]: E0513 00:03:29.054495 1775 file.go:98] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 13 00:03:29.055624 kubelet[1775]: I0513 00:03:29.055596 1775 kuberuntime_manager.go:261] "Container runtime initialized" containerRuntime="containerd" version="v1.7.23" apiVersion="v1" May 13 00:03:29.056322 kubelet[1775]: I0513 00:03:29.056306 1775 kubelet.go:815] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" May 13 00:03:29.056572 kubelet[1775]: W0513 00:03:29.056559 1775 probe.go:272] Flexvolume plugin 
directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. May 13 00:03:29.057589 kubelet[1775]: I0513 00:03:29.057566 1775 server.go:1264] "Started kubelet" May 13 00:03:29.058993 kubelet[1775]: I0513 00:03:29.058799 1775 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 May 13 00:03:29.058993 kubelet[1775]: I0513 00:03:29.058957 1775 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 May 13 00:03:29.059996 kubelet[1775]: I0513 00:03:29.059242 1775 server.go:227] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" May 13 00:03:29.060228 kubelet[1775]: I0513 00:03:29.060203 1775 server.go:455] "Adding debug handlers to kubelet server" May 13 00:03:29.060362 kubelet[1775]: I0513 00:03:29.060339 1775 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" May 13 00:03:29.063774 kubelet[1775]: E0513 00:03:29.062825 1775 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"10.0.0.131\" not found" May 13 00:03:29.063774 kubelet[1775]: I0513 00:03:29.063088 1775 volume_manager.go:291] "Starting Kubelet Volume Manager" May 13 00:03:29.063774 kubelet[1775]: I0513 00:03:29.063225 1775 desired_state_of_world_populator.go:149] "Desired state populator starts to run" May 13 00:03:29.067007 kubelet[1775]: I0513 00:03:29.066974 1775 reconciler.go:26] "Reconciler: start to sync state" May 13 00:03:29.069848 kubelet[1775]: E0513 00:03:29.069820 1775 kubelet.go:1467] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" May 13 00:03:29.071248 kubelet[1775]: I0513 00:03:29.071223 1775 factory.go:221] Registration of the containerd container factory successfully May 13 00:03:29.071914 kubelet[1775]: I0513 00:03:29.071438 1775 factory.go:221] Registration of the systemd container factory successfully May 13 00:03:29.071914 kubelet[1775]: W0513 00:03:29.071448 1775 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:anonymous" cannot list resource "services" in API group "" at the cluster scope May 13 00:03:29.071914 kubelet[1775]: E0513 00:03:29.071472 1775 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:anonymous" cannot list resource "services" in API group "" at the cluster scope May 13 00:03:29.071914 kubelet[1775]: W0513 00:03:29.071506 1775 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes "10.0.0.131" is forbidden: User "system:anonymous" cannot list resource "nodes" in API group "" at the cluster scope May 13 00:03:29.071914 kubelet[1775]: E0513 00:03:29.071515 1775 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes "10.0.0.131" is forbidden: User "system:anonymous" cannot list resource "nodes" in API group "" at the cluster scope May 13 00:03:29.071914 kubelet[1775]: I0513 00:03:29.071537 1775 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory May 13 00:03:29.071914 kubelet[1775]: W0513 00:03:29.071418 1775 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is 
forbidden: User "system:anonymous" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope May 13 00:03:29.071914 kubelet[1775]: E0513 00:03:29.071627 1775 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:anonymous" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope May 13 00:03:29.072158 kubelet[1775]: E0513 00:03:29.071327 1775 controller.go:145] "Failed to ensure lease exists, will retry" err="leases.coordination.k8s.io \"10.0.0.131\" is forbidden: User \"system:anonymous\" cannot get resource \"leases\" in API group \"coordination.k8s.io\" in the namespace \"kube-node-lease\"" interval="200ms" May 13 00:03:29.075077 kubelet[1775]: E0513 00:03:29.071923 1775 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{10.0.0.131.183eed4a59dfd027 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:10.0.0.131,UID:10.0.0.131,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:10.0.0.131,},FirstTimestamp:2025-05-13 00:03:29.057533991 +0000 UTC m=+1.412463286,LastTimestamp:2025-05-13 00:03:29.057533991 +0000 UTC m=+1.412463286,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:10.0.0.131,}" May 13 00:03:29.082668 kubelet[1775]: I0513 00:03:29.082606 1775 cpu_manager.go:214] "Starting CPU manager" policy="none" May 13 00:03:29.082668 kubelet[1775]: I0513 00:03:29.082639 1775 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" May 13 00:03:29.082668 kubelet[1775]: I0513 00:03:29.082659 1775 state_mem.go:36] "Initialized new in-memory state store" May 13 00:03:29.164902 kubelet[1775]: I0513 00:03:29.164870 1775 kubelet_node_status.go:73] "Attempting to register node" node="10.0.0.131" May 13 00:03:29.242673 kubelet[1775]: I0513 00:03:29.242628 1775 kubelet_node_status.go:76] "Successfully registered node" node="10.0.0.131" May 13 00:03:29.248807 kubelet[1775]: I0513 00:03:29.248759 1775 policy_none.go:49] "None policy: Start" May 13 00:03:29.250583 kubelet[1775]: I0513 00:03:29.250542 1775 memory_manager.go:170] "Starting memorymanager" policy="None" May 13 00:03:29.250583 kubelet[1775]: I0513 00:03:29.250573 1775 state_mem.go:35] "Initializing new in-memory state store" May 13 00:03:29.256388 kubelet[1775]: E0513 00:03:29.256352 1775 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"10.0.0.131\" not found" May 13 00:03:29.256896 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. May 13 00:03:29.267895 sudo[1643]: pam_unix(sudo:session): session closed for user root May 13 00:03:29.270021 sshd[1642]: Connection closed by 10.0.0.1 port 58724 May 13 00:03:29.270065 sshd-session[1640]: pam_unix(sshd:session): session closed for user core May 13 00:03:29.272437 kubelet[1775]: I0513 00:03:29.272118 1775 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" May 13 00:03:29.273402 kubelet[1775]: I0513 00:03:29.273378 1775 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" May 13 00:03:29.273898 kubelet[1775]: I0513 00:03:29.273511 1775 status_manager.go:217] "Starting to sync pod status with apiserver" May 13 00:03:29.273898 kubelet[1775]: I0513 00:03:29.273531 1775 kubelet.go:2337] "Starting kubelet main sync loop" May 13 00:03:29.273898 kubelet[1775]: E0513 00:03:29.273574 1775 kubelet.go:2361] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" May 13 00:03:29.277747 systemd[1]: sshd@6-10.0.0.131:22-10.0.0.1:58724.service: Deactivated successfully. May 13 00:03:29.280798 systemd[1]: session-7.scope: Deactivated successfully. May 13 00:03:29.282425 systemd-logind[1449]: Session 7 logged out. Waiting for processes to exit. May 13 00:03:29.283757 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. May 13 00:03:29.285403 systemd-logind[1449]: Removed session 7. May 13 00:03:29.288833 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. May 13 00:03:29.297025 kubelet[1775]: I0513 00:03:29.296979 1775 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" May 13 00:03:29.297261 kubelet[1775]: I0513 00:03:29.297213 1775 container_log_manager.go:186] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" May 13 00:03:29.297651 kubelet[1775]: I0513 00:03:29.297443 1775 plugin_manager.go:118] "Starting Kubelet Plugin Manager" May 13 00:03:29.299734 kubelet[1775]: E0513 00:03:29.299710 1775 eviction_manager.go:282] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"10.0.0.131\" not found" May 13 00:03:29.356912 kubelet[1775]: E0513 00:03:29.356867 1775 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"10.0.0.131\" not found" May 13 00:03:29.457577 kubelet[1775]: E0513 00:03:29.457523 1775 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"10.0.0.131\" not found" May 13 00:03:29.558348 kubelet[1775]: E0513 00:03:29.558254 1775 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"10.0.0.131\" not found" May 13 00:03:29.658756 kubelet[1775]: E0513 00:03:29.658720 1775 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"10.0.0.131\" not found" May 13 00:03:29.759266 kubelet[1775]: E0513 00:03:29.759234 1775 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"10.0.0.131\" not found" May 13 00:03:29.859848 kubelet[1775]: E0513 00:03:29.859761 1775 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"10.0.0.131\" not found" May 13 00:03:29.960291 kubelet[1775]: E0513 00:03:29.960255 1775 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"10.0.0.131\" not found" May 13 00:03:30.012757 kubelet[1775]: I0513 00:03:30.012633 1775 transport.go:147] "Certificate rotation detected, shutting down client connections to start using new credentials" May 13 00:03:30.013058 kubelet[1775]: W0513 00:03:30.012818 1775 reflector.go:470] k8s.io/client-go/informers/factory.go:160: watch of *v1.RuntimeClass ended with: very short watch: k8s.io/client-go/informers/factory.go:160: Unexpected watch close - watch lasted less than a second and no items received May 13 00:03:30.054993 kubelet[1775]: E0513 00:03:30.054947 1775 
file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 13 00:03:30.061090 kubelet[1775]: E0513 00:03:30.061041 1775 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"10.0.0.131\" not found" May 13 00:03:30.161523 kubelet[1775]: E0513 00:03:30.161395 1775 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"10.0.0.131\" not found" May 13 00:03:30.261901 kubelet[1775]: E0513 00:03:30.261831 1775 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"10.0.0.131\" not found" May 13 00:03:30.363083 kubelet[1775]: I0513 00:03:30.363046 1775 kuberuntime_manager.go:1523] "Updating runtime config through cri with podcidr" CIDR="192.168.1.0/24" May 13 00:03:30.363594 containerd[1467]: time="2025-05-13T00:03:30.363517845Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." May 13 00:03:30.364308 kubelet[1775]: I0513 00:03:30.364050 1775 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.1.0/24" May 13 00:03:31.055357 kubelet[1775]: I0513 00:03:31.055207 1775 apiserver.go:52] "Watching apiserver" May 13 00:03:31.055357 kubelet[1775]: E0513 00:03:31.055300 1775 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 13 00:03:31.067238 kubelet[1775]: I0513 00:03:31.067200 1775 topology_manager.go:215] "Topology Admit Handler" podUID="c04fcf87-72e2-49a0-8cae-26b4b50144cf" podNamespace="kube-system" podName="kube-proxy-75k9b" May 13 00:03:31.067291 kubelet[1775]: I0513 00:03:31.067281 1775 topology_manager.go:215] "Topology Admit Handler" podUID="0ce01cfb-4f20-409c-bf5f-489638a49c07" podNamespace="calico-system" podName="calico-node-xpxjn" May 13 00:03:31.067383 kubelet[1775]: I0513 00:03:31.067356 1775 topology_manager.go:215] "Topology Admit Handler" podUID="b8cf1bac-d87a-4a81-a025-a39d077da472" podNamespace="calico-system" podName="csi-node-driver-b6gpp" May 13 00:03:31.067870 kubelet[1775]: E0513 00:03:31.067700 1775 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-b6gpp" podUID="b8cf1bac-d87a-4a81-a025-a39d077da472" May 13 00:03:31.076778 systemd[1]: Created slice kubepods-besteffort-pod0ce01cfb_4f20_409c_bf5f_489638a49c07.slice - libcontainer container kubepods-besteffort-pod0ce01cfb_4f20_409c_bf5f_489638a49c07.slice. May 13 00:03:31.089332 systemd[1]: Created slice kubepods-besteffort-podc04fcf87_72e2_49a0_8cae_26b4b50144cf.slice - libcontainer container kubepods-besteffort-podc04fcf87_72e2_49a0_8cae_26b4b50144cf.slice. 
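At this point the node has registered, the kubelet has pushed the pod CIDR 192.168.1.0/24 down to containerd, and containerd is explicitly waiting for another component (here, calico-node) to drop a CNI config into /etc/cni/net.d. A couple of hedged verification commands one might run from a machine with cluster credentials; only the node name is taken from the log, everything else is generic kubectl usage.

    # Hypothetical checks; assumes a working kubeconfig for this cluster
    kubectl get node 10.0.0.131 -o jsonpath='{.spec.podCIDR}{"\n"}'
    kubectl get pods -A -o wide --field-selector spec.nodeName=10.0.0.131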
May 13 00:03:31.163887 kubelet[1775]: I0513 00:03:31.163823 1775 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world" May 13 00:03:31.178137 kubelet[1775]: I0513 00:03:31.178105 1775 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2fbcc\" (UniqueName: \"kubernetes.io/projected/c04fcf87-72e2-49a0-8cae-26b4b50144cf-kube-api-access-2fbcc\") pod \"kube-proxy-75k9b\" (UID: \"c04fcf87-72e2-49a0-8cae-26b4b50144cf\") " pod="kube-system/kube-proxy-75k9b" May 13 00:03:31.178137 kubelet[1775]: I0513 00:03:31.178139 1775 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/0ce01cfb-4f20-409c-bf5f-489638a49c07-var-run-calico\") pod \"calico-node-xpxjn\" (UID: \"0ce01cfb-4f20-409c-bf5f-489638a49c07\") " pod="calico-system/calico-node-xpxjn" May 13 00:03:31.178291 kubelet[1775]: I0513 00:03:31.178189 1775 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/0ce01cfb-4f20-409c-bf5f-489638a49c07-var-lib-calico\") pod \"calico-node-xpxjn\" (UID: \"0ce01cfb-4f20-409c-bf5f-489638a49c07\") " pod="calico-system/calico-node-xpxjn" May 13 00:03:31.178291 kubelet[1775]: I0513 00:03:31.178208 1775 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/0ce01cfb-4f20-409c-bf5f-489638a49c07-cni-log-dir\") pod \"calico-node-xpxjn\" (UID: \"0ce01cfb-4f20-409c-bf5f-489638a49c07\") " pod="calico-system/calico-node-xpxjn" May 13 00:03:31.178291 kubelet[1775]: I0513 00:03:31.178224 1775 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/b8cf1bac-d87a-4a81-a025-a39d077da472-varrun\") pod \"csi-node-driver-b6gpp\" (UID: \"b8cf1bac-d87a-4a81-a025-a39d077da472\") " pod="calico-system/csi-node-driver-b6gpp" May 13 00:03:31.178291 kubelet[1775]: I0513 00:03:31.178238 1775 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/b8cf1bac-d87a-4a81-a025-a39d077da472-kubelet-dir\") pod \"csi-node-driver-b6gpp\" (UID: \"b8cf1bac-d87a-4a81-a025-a39d077da472\") " pod="calico-system/csi-node-driver-b6gpp" May 13 00:03:31.178291 kubelet[1775]: I0513 00:03:31.178254 1775 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/b8cf1bac-d87a-4a81-a025-a39d077da472-registration-dir\") pod \"csi-node-driver-b6gpp\" (UID: \"b8cf1bac-d87a-4a81-a025-a39d077da472\") " pod="calico-system/csi-node-driver-b6gpp" May 13 00:03:31.178395 kubelet[1775]: I0513 00:03:31.178269 1775 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/0ce01cfb-4f20-409c-bf5f-489638a49c07-xtables-lock\") pod \"calico-node-xpxjn\" (UID: \"0ce01cfb-4f20-409c-bf5f-489638a49c07\") " pod="calico-system/calico-node-xpxjn" May 13 00:03:31.178395 kubelet[1775]: I0513 00:03:31.178284 1775 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/0ce01cfb-4f20-409c-bf5f-489638a49c07-tigera-ca-bundle\") pod 
\"calico-node-xpxjn\" (UID: \"0ce01cfb-4f20-409c-bf5f-489638a49c07\") " pod="calico-system/calico-node-xpxjn" May 13 00:03:31.178395 kubelet[1775]: I0513 00:03:31.178299 1775 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/0ce01cfb-4f20-409c-bf5f-489638a49c07-cni-bin-dir\") pod \"calico-node-xpxjn\" (UID: \"0ce01cfb-4f20-409c-bf5f-489638a49c07\") " pod="calico-system/calico-node-xpxjn" May 13 00:03:31.178395 kubelet[1775]: I0513 00:03:31.178313 1775 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/b8cf1bac-d87a-4a81-a025-a39d077da472-socket-dir\") pod \"csi-node-driver-b6gpp\" (UID: \"b8cf1bac-d87a-4a81-a025-a39d077da472\") " pod="calico-system/csi-node-driver-b6gpp" May 13 00:03:31.178395 kubelet[1775]: I0513 00:03:31.178329 1775 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/c04fcf87-72e2-49a0-8cae-26b4b50144cf-kube-proxy\") pod \"kube-proxy-75k9b\" (UID: \"c04fcf87-72e2-49a0-8cae-26b4b50144cf\") " pod="kube-system/kube-proxy-75k9b" May 13 00:03:31.178492 kubelet[1775]: I0513 00:03:31.178351 1775 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/c04fcf87-72e2-49a0-8cae-26b4b50144cf-xtables-lock\") pod \"kube-proxy-75k9b\" (UID: \"c04fcf87-72e2-49a0-8cae-26b4b50144cf\") " pod="kube-system/kube-proxy-75k9b" May 13 00:03:31.178492 kubelet[1775]: I0513 00:03:31.178366 1775 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/0ce01cfb-4f20-409c-bf5f-489638a49c07-lib-modules\") pod \"calico-node-xpxjn\" (UID: \"0ce01cfb-4f20-409c-bf5f-489638a49c07\") " pod="calico-system/calico-node-xpxjn" May 13 00:03:31.178492 kubelet[1775]: I0513 00:03:31.178380 1775 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/0ce01cfb-4f20-409c-bf5f-489638a49c07-cni-net-dir\") pod \"calico-node-xpxjn\" (UID: \"0ce01cfb-4f20-409c-bf5f-489638a49c07\") " pod="calico-system/calico-node-xpxjn" May 13 00:03:31.178492 kubelet[1775]: I0513 00:03:31.178394 1775 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/c04fcf87-72e2-49a0-8cae-26b4b50144cf-lib-modules\") pod \"kube-proxy-75k9b\" (UID: \"c04fcf87-72e2-49a0-8cae-26b4b50144cf\") " pod="kube-system/kube-proxy-75k9b" May 13 00:03:31.178492 kubelet[1775]: I0513 00:03:31.178407 1775 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/0ce01cfb-4f20-409c-bf5f-489638a49c07-policysync\") pod \"calico-node-xpxjn\" (UID: \"0ce01cfb-4f20-409c-bf5f-489638a49c07\") " pod="calico-system/calico-node-xpxjn" May 13 00:03:31.178587 kubelet[1775]: I0513 00:03:31.178423 1775 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/0ce01cfb-4f20-409c-bf5f-489638a49c07-node-certs\") pod \"calico-node-xpxjn\" (UID: \"0ce01cfb-4f20-409c-bf5f-489638a49c07\") " pod="calico-system/calico-node-xpxjn" May 13 00:03:31.178587 
kubelet[1775]: I0513 00:03:31.178438 1775 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/0ce01cfb-4f20-409c-bf5f-489638a49c07-flexvol-driver-host\") pod \"calico-node-xpxjn\" (UID: \"0ce01cfb-4f20-409c-bf5f-489638a49c07\") " pod="calico-system/calico-node-xpxjn" May 13 00:03:31.178587 kubelet[1775]: I0513 00:03:31.178453 1775 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ww75v\" (UniqueName: \"kubernetes.io/projected/0ce01cfb-4f20-409c-bf5f-489638a49c07-kube-api-access-ww75v\") pod \"calico-node-xpxjn\" (UID: \"0ce01cfb-4f20-409c-bf5f-489638a49c07\") " pod="calico-system/calico-node-xpxjn" May 13 00:03:31.178587 kubelet[1775]: I0513 00:03:31.178467 1775 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4jgcx\" (UniqueName: \"kubernetes.io/projected/b8cf1bac-d87a-4a81-a025-a39d077da472-kube-api-access-4jgcx\") pod \"csi-node-driver-b6gpp\" (UID: \"b8cf1bac-d87a-4a81-a025-a39d077da472\") " pod="calico-system/csi-node-driver-b6gpp" May 13 00:03:31.281680 kubelet[1775]: E0513 00:03:31.281566 1775 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 00:03:31.281680 kubelet[1775]: W0513 00:03:31.281592 1775 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 00:03:31.281680 kubelet[1775]: E0513 00:03:31.281610 1775 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 13 00:03:31.282227 kubelet[1775]: E0513 00:03:31.282111 1775 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 00:03:31.282227 kubelet[1775]: W0513 00:03:31.282130 1775 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 00:03:31.282227 kubelet[1775]: E0513 00:03:31.282163 1775 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 13 00:03:31.282358 kubelet[1775]: E0513 00:03:31.282345 1775 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 00:03:31.282358 kubelet[1775]: W0513 00:03:31.282353 1775 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 00:03:31.282401 kubelet[1775]: E0513 00:03:31.282363 1775 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 13 00:03:31.284015 kubelet[1775]: E0513 00:03:31.282490 1775 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 00:03:31.284015 kubelet[1775]: W0513 00:03:31.282506 1775 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 00:03:31.284015 kubelet[1775]: E0513 00:03:31.282644 1775 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 00:03:31.284015 kubelet[1775]: W0513 00:03:31.282660 1775 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 00:03:31.284015 kubelet[1775]: E0513 00:03:31.282817 1775 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 00:03:31.284015 kubelet[1775]: W0513 00:03:31.282826 1775 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 00:03:31.284015 kubelet[1775]: E0513 00:03:31.282951 1775 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 00:03:31.284015 kubelet[1775]: W0513 00:03:31.282961 1775 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 00:03:31.284015 kubelet[1775]: E0513 00:03:31.283072 1775 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 00:03:31.284015 kubelet[1775]: W0513 00:03:31.283079 1775 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 00:03:31.284015 kubelet[1775]: E0513 00:03:31.283226 1775 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 00:03:31.284318 kubelet[1775]: W0513 00:03:31.283233 1775 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 00:03:31.284318 kubelet[1775]: E0513 00:03:31.283260 1775 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 13 00:03:31.284318 kubelet[1775]: E0513 00:03:31.283291 1775 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 13 00:03:31.284318 kubelet[1775]: E0513 00:03:31.283312 1775 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 13 00:03:31.284318 kubelet[1775]: E0513 00:03:31.283477 1775 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 13 00:03:31.284318 kubelet[1775]: E0513 00:03:31.283507 1775 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 13 00:03:31.284318 kubelet[1775]: E0513 00:03:31.283523 1775 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 13 00:03:31.284574 kubelet[1775]: E0513 00:03:31.284537 1775 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 00:03:31.284574 kubelet[1775]: W0513 00:03:31.284560 1775 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 00:03:31.284629 kubelet[1775]: E0513 00:03:31.284612 1775 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 13 00:03:31.284728 kubelet[1775]: E0513 00:03:31.284718 1775 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 00:03:31.284728 kubelet[1775]: W0513 00:03:31.284728 1775 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 00:03:31.284786 kubelet[1775]: E0513 00:03:31.284747 1775 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 13 00:03:31.285092 kubelet[1775]: E0513 00:03:31.285076 1775 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 00:03:31.285119 kubelet[1775]: W0513 00:03:31.285092 1775 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 00:03:31.285156 kubelet[1775]: E0513 00:03:31.285134 1775 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 13 00:03:31.285281 kubelet[1775]: E0513 00:03:31.285266 1775 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 00:03:31.285281 kubelet[1775]: W0513 00:03:31.285279 1775 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 00:03:31.285367 kubelet[1775]: E0513 00:03:31.285353 1775 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 13 00:03:31.285437 kubelet[1775]: E0513 00:03:31.285428 1775 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 00:03:31.285437 kubelet[1775]: W0513 00:03:31.285436 1775 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 00:03:31.285532 kubelet[1775]: E0513 00:03:31.285501 1775 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 13 00:03:31.285608 kubelet[1775]: E0513 00:03:31.285595 1775 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 00:03:31.285608 kubelet[1775]: W0513 00:03:31.285607 1775 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 00:03:31.285894 kubelet[1775]: E0513 00:03:31.285678 1775 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 13 00:03:31.285894 kubelet[1775]: E0513 00:03:31.285745 1775 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 00:03:31.285894 kubelet[1775]: W0513 00:03:31.285752 1775 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 00:03:31.285894 kubelet[1775]: E0513 00:03:31.285773 1775 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 13 00:03:31.285894 kubelet[1775]: E0513 00:03:31.285890 1775 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 00:03:31.285894 kubelet[1775]: W0513 00:03:31.285898 1775 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 00:03:31.286046 kubelet[1775]: E0513 00:03:31.285963 1775 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 13 00:03:31.286089 kubelet[1775]: E0513 00:03:31.286073 1775 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 00:03:31.286089 kubelet[1775]: W0513 00:03:31.286084 1775 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 00:03:31.286171 kubelet[1775]: E0513 00:03:31.286156 1775 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 13 00:03:31.286223 kubelet[1775]: E0513 00:03:31.286211 1775 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 00:03:31.286223 kubelet[1775]: W0513 00:03:31.286220 1775 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 00:03:31.286315 kubelet[1775]: E0513 00:03:31.286289 1775 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 13 00:03:31.286372 kubelet[1775]: E0513 00:03:31.286359 1775 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 00:03:31.286372 kubelet[1775]: W0513 00:03:31.286370 1775 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 00:03:31.286452 kubelet[1775]: E0513 00:03:31.286434 1775 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 13 00:03:31.286499 kubelet[1775]: E0513 00:03:31.286486 1775 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 00:03:31.286499 kubelet[1775]: W0513 00:03:31.286497 1775 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 00:03:31.286584 kubelet[1775]: E0513 00:03:31.286519 1775 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 13 00:03:31.286627 kubelet[1775]: E0513 00:03:31.286616 1775 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 00:03:31.286627 kubelet[1775]: W0513 00:03:31.286624 1775 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 00:03:31.286694 kubelet[1775]: E0513 00:03:31.286681 1775 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 13 00:03:31.286752 kubelet[1775]: E0513 00:03:31.286743 1775 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 00:03:31.286752 kubelet[1775]: W0513 00:03:31.286751 1775 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 00:03:31.286837 kubelet[1775]: E0513 00:03:31.286824 1775 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 13 00:03:31.286900 kubelet[1775]: E0513 00:03:31.286890 1775 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 00:03:31.286900 kubelet[1775]: W0513 00:03:31.286899 1775 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 00:03:31.286983 kubelet[1775]: E0513 00:03:31.286962 1775 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 13 00:03:31.287041 kubelet[1775]: E0513 00:03:31.287028 1775 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 00:03:31.287041 kubelet[1775]: W0513 00:03:31.287039 1775 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 00:03:31.287126 kubelet[1775]: E0513 00:03:31.287063 1775 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 13 00:03:31.287226 kubelet[1775]: E0513 00:03:31.287214 1775 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 00:03:31.287226 kubelet[1775]: W0513 00:03:31.287225 1775 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 00:03:31.287287 kubelet[1775]: E0513 00:03:31.287237 1775 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 13 00:03:31.287446 kubelet[1775]: E0513 00:03:31.287434 1775 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 00:03:31.287446 kubelet[1775]: W0513 00:03:31.287445 1775 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 00:03:31.287524 kubelet[1775]: E0513 00:03:31.287512 1775 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 13 00:03:31.287578 kubelet[1775]: E0513 00:03:31.287569 1775 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 00:03:31.287606 kubelet[1775]: W0513 00:03:31.287578 1775 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 00:03:31.287675 kubelet[1775]: E0513 00:03:31.287642 1775 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 13 00:03:31.287710 kubelet[1775]: E0513 00:03:31.287704 1775 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 00:03:31.287737 kubelet[1775]: W0513 00:03:31.287712 1775 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 00:03:31.287808 kubelet[1775]: E0513 00:03:31.287780 1775 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 13 00:03:31.287858 kubelet[1775]: E0513 00:03:31.287844 1775 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 00:03:31.287858 kubelet[1775]: W0513 00:03:31.287855 1775 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 00:03:31.287933 kubelet[1775]: E0513 00:03:31.287920 1775 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 13 00:03:31.287996 kubelet[1775]: E0513 00:03:31.287987 1775 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 00:03:31.288020 kubelet[1775]: W0513 00:03:31.287996 1775 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 00:03:31.288091 kubelet[1775]: E0513 00:03:31.288055 1775 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 13 00:03:31.288136 kubelet[1775]: E0513 00:03:31.288123 1775 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 00:03:31.288136 kubelet[1775]: W0513 00:03:31.288133 1775 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 00:03:31.288225 kubelet[1775]: E0513 00:03:31.288210 1775 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 13 00:03:31.288789 kubelet[1775]: E0513 00:03:31.288489 1775 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 00:03:31.288789 kubelet[1775]: W0513 00:03:31.288504 1775 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 00:03:31.288789 kubelet[1775]: E0513 00:03:31.288656 1775 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 00:03:31.288789 kubelet[1775]: W0513 00:03:31.288664 1775 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 00:03:31.288908 kubelet[1775]: E0513 00:03:31.288811 1775 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 00:03:31.288908 kubelet[1775]: W0513 00:03:31.288820 1775 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 00:03:31.288979 kubelet[1775]: E0513 00:03:31.288958 1775 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 00:03:31.288979 kubelet[1775]: W0513 00:03:31.288971 1775 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 00:03:31.289125 kubelet[1775]: E0513 00:03:31.289106 1775 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 00:03:31.289125 kubelet[1775]: W0513 00:03:31.289117 1775 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 00:03:31.289204 kubelet[1775]: E0513 00:03:31.289178 1775 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 13 00:03:31.289226 kubelet[1775]: E0513 00:03:31.289193 1775 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 13 00:03:31.289226 kubelet[1775]: E0513 00:03:31.289211 1775 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 13 00:03:31.289262 kubelet[1775]: E0513 00:03:31.289206 1775 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 13 00:03:31.289262 kubelet[1775]: E0513 00:03:31.289255 1775 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 13 00:03:31.289323 kubelet[1775]: E0513 00:03:31.289304 1775 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 00:03:31.289355 kubelet[1775]: W0513 00:03:31.289333 1775 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 00:03:31.289355 kubelet[1775]: E0513 00:03:31.289352 1775 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 13 00:03:31.289582 kubelet[1775]: E0513 00:03:31.289557 1775 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 00:03:31.289582 kubelet[1775]: W0513 00:03:31.289570 1775 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 00:03:31.289582 kubelet[1775]: E0513 00:03:31.289580 1775 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 13 00:03:31.293850 kubelet[1775]: E0513 00:03:31.293824 1775 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 00:03:31.293850 kubelet[1775]: W0513 00:03:31.293843 1775 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 00:03:31.293945 kubelet[1775]: E0513 00:03:31.293860 1775 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 13 00:03:31.295764 kubelet[1775]: E0513 00:03:31.294028 1775 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 00:03:31.295764 kubelet[1775]: W0513 00:03:31.294045 1775 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 00:03:31.295764 kubelet[1775]: E0513 00:03:31.294056 1775 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 13 00:03:31.296434 kubelet[1775]: E0513 00:03:31.296337 1775 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 00:03:31.296434 kubelet[1775]: W0513 00:03:31.296380 1775 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 00:03:31.296434 kubelet[1775]: E0513 00:03:31.296394 1775 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 13 00:03:31.299294 kubelet[1775]: E0513 00:03:31.299238 1775 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 00:03:31.299294 kubelet[1775]: W0513 00:03:31.299253 1775 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 00:03:31.299294 kubelet[1775]: E0513 00:03:31.299266 1775 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 13 00:03:31.388345 kubelet[1775]: E0513 00:03:31.388190 1775 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 13 00:03:31.388884 containerd[1467]: time="2025-05-13T00:03:31.388850741Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-xpxjn,Uid:0ce01cfb-4f20-409c-bf5f-489638a49c07,Namespace:calico-system,Attempt:0,}" May 13 00:03:31.391760 kubelet[1775]: E0513 00:03:31.391674 1775 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 13 00:03:31.392601 containerd[1467]: time="2025-05-13T00:03:31.392443782Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-75k9b,Uid:c04fcf87-72e2-49a0-8cae-26b4b50144cf,Namespace:kube-system,Attempt:0,}" May 13 00:03:31.923660 containerd[1467]: time="2025-05-13T00:03:31.923592644Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" May 13 00:03:31.925647 containerd[1467]: time="2025-05-13T00:03:31.925589159Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=269175" May 13 00:03:31.926401 containerd[1467]: time="2025-05-13T00:03:31.926373313Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" May 13 00:03:31.927249 containerd[1467]: time="2025-05-13T00:03:31.927224146Z" level=info msg="ImageCreate event name:\"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" May 13 00:03:31.927858 containerd[1467]: time="2025-05-13T00:03:31.927660761Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" May 13 00:03:31.930742 containerd[1467]: time="2025-05-13T00:03:31.930704635Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" May 13 00:03:31.931899 containerd[1467]: time="2025-05-13T00:03:31.931618359Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest 
\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 539.100645ms" May 13 00:03:31.937137 containerd[1467]: time="2025-05-13T00:03:31.936915297Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 547.984487ms" May 13 00:03:32.033381 containerd[1467]: time="2025-05-13T00:03:32.033280359Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 13 00:03:32.033381 containerd[1467]: time="2025-05-13T00:03:32.033344142Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 13 00:03:32.034762 containerd[1467]: time="2025-05-13T00:03:32.033361404Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 13 00:03:32.034762 containerd[1467]: time="2025-05-13T00:03:32.033435791Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 13 00:03:32.034869 containerd[1467]: time="2025-05-13T00:03:32.033527200Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 13 00:03:32.034869 containerd[1467]: time="2025-05-13T00:03:32.033593773Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 13 00:03:32.034869 containerd[1467]: time="2025-05-13T00:03:32.033604058Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 13 00:03:32.034869 containerd[1467]: time="2025-05-13T00:03:32.033711254Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 13 00:03:32.058431 kubelet[1775]: E0513 00:03:32.058390 1775 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 13 00:03:32.134328 systemd[1]: Started cri-containerd-848ac953acfbc6e81197a27b68c122906cbabfd5d1f223211d1aee6816424be2.scope - libcontainer container 848ac953acfbc6e81197a27b68c122906cbabfd5d1f223211d1aee6816424be2. May 13 00:03:32.135355 systemd[1]: Started cri-containerd-ab8a798b8c2ba216875068e65dce23b48fd2426e1794c82db7b7a4c0089cff47.scope - libcontainer container ab8a798b8c2ba216875068e65dce23b48fd2426e1794c82db7b7a4c0089cff47. 
May 13 00:03:32.169106 containerd[1467]: time="2025-05-13T00:03:32.165777730Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-xpxjn,Uid:0ce01cfb-4f20-409c-bf5f-489638a49c07,Namespace:calico-system,Attempt:0,} returns sandbox id \"ab8a798b8c2ba216875068e65dce23b48fd2426e1794c82db7b7a4c0089cff47\"" May 13 00:03:32.169106 containerd[1467]: time="2025-05-13T00:03:32.168354924Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.3\"" May 13 00:03:32.169263 kubelet[1775]: E0513 00:03:32.166874 1775 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 13 00:03:32.170702 containerd[1467]: time="2025-05-13T00:03:32.170625920Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-75k9b,Uid:c04fcf87-72e2-49a0-8cae-26b4b50144cf,Namespace:kube-system,Attempt:0,} returns sandbox id \"848ac953acfbc6e81197a27b68c122906cbabfd5d1f223211d1aee6816424be2\"" May 13 00:03:32.171314 kubelet[1775]: E0513 00:03:32.171292 1775 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 13 00:03:32.294124 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2766067248.mount: Deactivated successfully. May 13 00:03:33.059086 kubelet[1775]: E0513 00:03:33.059044 1775 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 13 00:03:33.093890 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3926600852.mount: Deactivated successfully. May 13 00:03:33.150927 containerd[1467]: time="2025-05-13T00:03:33.150872495Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 13 00:03:33.151988 containerd[1467]: time="2025-05-13T00:03:33.151792960Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.3: active requests=0, bytes read=6492223" May 13 00:03:33.152899 containerd[1467]: time="2025-05-13T00:03:33.152709120Z" level=info msg="ImageCreate event name:\"sha256:dd8e710a588cc6f5834c4d84f7e12458efae593d3dfe527ca9e757c89239ecb8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 13 00:03:33.155246 containerd[1467]: time="2025-05-13T00:03:33.155055599Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:eeaa2bb4f9b1aa61adde43ce6dea95eee89291f96963548e108d9a2dfbc5edd1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 13 00:03:33.155869 containerd[1467]: time="2025-05-13T00:03:33.155754770Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.3\" with image id \"sha256:dd8e710a588cc6f5834c4d84f7e12458efae593d3dfe527ca9e757c89239ecb8\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.3\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:eeaa2bb4f9b1aa61adde43ce6dea95eee89291f96963548e108d9a2dfbc5edd1\", size \"6492045\" in 987.366357ms" May 13 00:03:33.155869 containerd[1467]: time="2025-05-13T00:03:33.155792091Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.3\" returns image reference \"sha256:dd8e710a588cc6f5834c4d84f7e12458efae593d3dfe527ca9e757c89239ecb8\"" May 13 00:03:33.157254 containerd[1467]: time="2025-05-13T00:03:33.157228631Z" level=info msg="PullImage 
\"registry.k8s.io/kube-proxy:v1.30.12\"" May 13 00:03:33.158655 containerd[1467]: time="2025-05-13T00:03:33.158523224Z" level=info msg="CreateContainer within sandbox \"ab8a798b8c2ba216875068e65dce23b48fd2426e1794c82db7b7a4c0089cff47\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}" May 13 00:03:33.174974 containerd[1467]: time="2025-05-13T00:03:33.174934184Z" level=info msg="CreateContainer within sandbox \"ab8a798b8c2ba216875068e65dce23b48fd2426e1794c82db7b7a4c0089cff47\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"e59f8d09c38ba11339c6f0d5eb315443c5b6dd55b94f13f9da7c4e623a4b64c9\"" May 13 00:03:33.175860 containerd[1467]: time="2025-05-13T00:03:33.175831523Z" level=info msg="StartContainer for \"e59f8d09c38ba11339c6f0d5eb315443c5b6dd55b94f13f9da7c4e623a4b64c9\"" May 13 00:03:33.201305 systemd[1]: Started cri-containerd-e59f8d09c38ba11339c6f0d5eb315443c5b6dd55b94f13f9da7c4e623a4b64c9.scope - libcontainer container e59f8d09c38ba11339c6f0d5eb315443c5b6dd55b94f13f9da7c4e623a4b64c9. May 13 00:03:33.224536 containerd[1467]: time="2025-05-13T00:03:33.224495738Z" level=info msg="StartContainer for \"e59f8d09c38ba11339c6f0d5eb315443c5b6dd55b94f13f9da7c4e623a4b64c9\" returns successfully" May 13 00:03:33.244765 systemd[1]: cri-containerd-e59f8d09c38ba11339c6f0d5eb315443c5b6dd55b94f13f9da7c4e623a4b64c9.scope: Deactivated successfully. May 13 00:03:33.275088 kubelet[1775]: E0513 00:03:33.274710 1775 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-b6gpp" podUID="b8cf1bac-d87a-4a81-a025-a39d077da472" May 13 00:03:33.284299 kubelet[1775]: E0513 00:03:33.284270 1775 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 13 00:03:33.293884 containerd[1467]: time="2025-05-13T00:03:33.293816577Z" level=info msg="shim disconnected" id=e59f8d09c38ba11339c6f0d5eb315443c5b6dd55b94f13f9da7c4e623a4b64c9 namespace=k8s.io May 13 00:03:33.293884 containerd[1467]: time="2025-05-13T00:03:33.293872998Z" level=warning msg="cleaning up after shim disconnected" id=e59f8d09c38ba11339c6f0d5eb315443c5b6dd55b94f13f9da7c4e623a4b64c9 namespace=k8s.io May 13 00:03:33.293884 containerd[1467]: time="2025-05-13T00:03:33.293881929Z" level=info msg="cleaning up dead shim" namespace=k8s.io May 13 00:03:34.060092 kubelet[1775]: E0513 00:03:34.060032 1775 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 13 00:03:34.152564 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3219793341.mount: Deactivated successfully. 
May 13 00:03:34.286949 kubelet[1775]: E0513 00:03:34.286916 1775 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 13 00:03:34.355477 containerd[1467]: time="2025-05-13T00:03:34.355355810Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.30.12\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 13 00:03:34.355968 containerd[1467]: time="2025-05-13T00:03:34.355922995Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.30.12: active requests=0, bytes read=25775707" May 13 00:03:34.356973 containerd[1467]: time="2025-05-13T00:03:34.356922528Z" level=info msg="ImageCreate event name:\"sha256:b4250a9efcae16f8d20358e204a159844e2b7e854edad08aee8791774acbdaed\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 13 00:03:34.359343 containerd[1467]: time="2025-05-13T00:03:34.359304610Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:ea8c7d5392acf6b0c11ebba78301e1a6c2dc6abcd7544102ed578e49d1c82f15\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 13 00:03:34.360048 containerd[1467]: time="2025-05-13T00:03:34.360011617Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.30.12\" with image id \"sha256:b4250a9efcae16f8d20358e204a159844e2b7e854edad08aee8791774acbdaed\", repo tag \"registry.k8s.io/kube-proxy:v1.30.12\", repo digest \"registry.k8s.io/kube-proxy@sha256:ea8c7d5392acf6b0c11ebba78301e1a6c2dc6abcd7544102ed578e49d1c82f15\", size \"25774724\" in 1.202753399s" May 13 00:03:34.360048 containerd[1467]: time="2025-05-13T00:03:34.360041648Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.30.12\" returns image reference \"sha256:b4250a9efcae16f8d20358e204a159844e2b7e854edad08aee8791774acbdaed\"" May 13 00:03:34.361968 containerd[1467]: time="2025-05-13T00:03:34.361726573Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.29.3\"" May 13 00:03:34.362467 containerd[1467]: time="2025-05-13T00:03:34.362440679Z" level=info msg="CreateContainer within sandbox \"848ac953acfbc6e81197a27b68c122906cbabfd5d1f223211d1aee6816424be2\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" May 13 00:03:34.381949 containerd[1467]: time="2025-05-13T00:03:34.381900369Z" level=info msg="CreateContainer within sandbox \"848ac953acfbc6e81197a27b68c122906cbabfd5d1f223211d1aee6816424be2\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"d1d2b593bad35d8080dc594d78d90e7d969c57427b9f60e7fbae35c386549fbe\"" May 13 00:03:34.382410 containerd[1467]: time="2025-05-13T00:03:34.382376307Z" level=info msg="StartContainer for \"d1d2b593bad35d8080dc594d78d90e7d969c57427b9f60e7fbae35c386549fbe\"" May 13 00:03:34.409306 systemd[1]: Started cri-containerd-d1d2b593bad35d8080dc594d78d90e7d969c57427b9f60e7fbae35c386549fbe.scope - libcontainer container d1d2b593bad35d8080dc594d78d90e7d969c57427b9f60e7fbae35c386549fbe. 
May 13 00:03:34.432639 containerd[1467]: time="2025-05-13T00:03:34.432577933Z" level=info msg="StartContainer for \"d1d2b593bad35d8080dc594d78d90e7d969c57427b9f60e7fbae35c386549fbe\" returns successfully" May 13 00:03:35.060568 kubelet[1775]: E0513 00:03:35.060530 1775 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 13 00:03:35.275164 kubelet[1775]: E0513 00:03:35.275087 1775 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-b6gpp" podUID="b8cf1bac-d87a-4a81-a025-a39d077da472" May 13 00:03:35.293362 kubelet[1775]: E0513 00:03:35.293078 1775 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 13 00:03:35.303329 kubelet[1775]: I0513 00:03:35.303165 1775 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-75k9b" podStartSLOduration=4.113917216 podStartE2EDuration="6.303130277s" podCreationTimestamp="2025-05-13 00:03:29 +0000 UTC" firstStartedPulling="2025-05-13 00:03:32.17180475 +0000 UTC m=+4.526734045" lastFinishedPulling="2025-05-13 00:03:34.361017811 +0000 UTC m=+6.715947106" observedRunningTime="2025-05-13 00:03:35.302803074 +0000 UTC m=+7.657732368" watchObservedRunningTime="2025-05-13 00:03:35.303130277 +0000 UTC m=+7.658059572" May 13 00:03:36.060792 kubelet[1775]: E0513 00:03:36.060740 1775 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 13 00:03:36.294974 kubelet[1775]: E0513 00:03:36.294938 1775 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 13 00:03:36.351554 containerd[1467]: time="2025-05-13T00:03:36.351415129Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.29.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 13 00:03:36.356558 containerd[1467]: time="2025-05-13T00:03:36.356498622Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.29.3: active requests=0, bytes read=91256270" May 13 00:03:36.358615 containerd[1467]: time="2025-05-13T00:03:36.358548760Z" level=info msg="ImageCreate event name:\"sha256:add6372545fb406bb017769f222d84c50549ce13e3b19f1fbaee3d8a4aaef627\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 13 00:03:36.362697 containerd[1467]: time="2025-05-13T00:03:36.361473202Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:4505ec8f976470994b6a94295a4dabac0cb98375db050e959a22603e00ada90b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 13 00:03:36.362697 containerd[1467]: time="2025-05-13T00:03:36.362295802Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.29.3\" with image id \"sha256:add6372545fb406bb017769f222d84c50549ce13e3b19f1fbaee3d8a4aaef627\", repo tag \"ghcr.io/flatcar/calico/cni:v3.29.3\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:4505ec8f976470994b6a94295a4dabac0cb98375db050e959a22603e00ada90b\", size \"92625452\" in 2.000475148s" May 13 00:03:36.362697 containerd[1467]: time="2025-05-13T00:03:36.362321495Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.29.3\" returns 
image reference \"sha256:add6372545fb406bb017769f222d84c50549ce13e3b19f1fbaee3d8a4aaef627\"" May 13 00:03:36.365053 containerd[1467]: time="2025-05-13T00:03:36.365016579Z" level=info msg="CreateContainer within sandbox \"ab8a798b8c2ba216875068e65dce23b48fd2426e1794c82db7b7a4c0089cff47\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" May 13 00:03:36.378779 containerd[1467]: time="2025-05-13T00:03:36.378733048Z" level=info msg="CreateContainer within sandbox \"ab8a798b8c2ba216875068e65dce23b48fd2426e1794c82db7b7a4c0089cff47\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"05989c8faddbf6b2fd14938deb79c3e374cb26291cf5558929bb744107f7854d\"" May 13 00:03:36.379461 containerd[1467]: time="2025-05-13T00:03:36.379423794Z" level=info msg="StartContainer for \"05989c8faddbf6b2fd14938deb79c3e374cb26291cf5558929bb744107f7854d\"" May 13 00:03:36.410570 systemd[1]: Started cri-containerd-05989c8faddbf6b2fd14938deb79c3e374cb26291cf5558929bb744107f7854d.scope - libcontainer container 05989c8faddbf6b2fd14938deb79c3e374cb26291cf5558929bb744107f7854d. May 13 00:03:36.440113 containerd[1467]: time="2025-05-13T00:03:36.438864337Z" level=info msg="StartContainer for \"05989c8faddbf6b2fd14938deb79c3e374cb26291cf5558929bb744107f7854d\" returns successfully" May 13 00:03:36.926926 containerd[1467]: time="2025-05-13T00:03:36.926833132Z" level=error msg="failed to reload cni configuration after receiving fs change event(WRITE \"/etc/cni/net.d/calico-kubeconfig\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" May 13 00:03:36.928480 systemd[1]: cri-containerd-05989c8faddbf6b2fd14938deb79c3e374cb26291cf5558929bb744107f7854d.scope: Deactivated successfully. May 13 00:03:36.950998 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-05989c8faddbf6b2fd14938deb79c3e374cb26291cf5558929bb744107f7854d-rootfs.mount: Deactivated successfully. May 13 00:03:36.988536 kubelet[1775]: I0513 00:03:36.988486 1775 kubelet_node_status.go:497] "Fast updating node status as it just became ready" May 13 00:03:37.061217 kubelet[1775]: E0513 00:03:37.061140 1775 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 13 00:03:37.135110 containerd[1467]: time="2025-05-13T00:03:37.135045549Z" level=info msg="shim disconnected" id=05989c8faddbf6b2fd14938deb79c3e374cb26291cf5558929bb744107f7854d namespace=k8s.io May 13 00:03:37.135110 containerd[1467]: time="2025-05-13T00:03:37.135108793Z" level=warning msg="cleaning up after shim disconnected" id=05989c8faddbf6b2fd14938deb79c3e374cb26291cf5558929bb744107f7854d namespace=k8s.io May 13 00:03:37.135110 containerd[1467]: time="2025-05-13T00:03:37.135117731Z" level=info msg="cleaning up dead shim" namespace=k8s.io May 13 00:03:37.282931 systemd[1]: Created slice kubepods-besteffort-podb8cf1bac_d87a_4a81_a025_a39d077da472.slice - libcontainer container kubepods-besteffort-podb8cf1bac_d87a_4a81_a025_a39d077da472.slice. 
May 13 00:03:37.285071 containerd[1467]: time="2025-05-13T00:03:37.285033159Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-b6gpp,Uid:b8cf1bac-d87a-4a81-a025-a39d077da472,Namespace:calico-system,Attempt:0,}" May 13 00:03:37.299891 kubelet[1775]: E0513 00:03:37.299739 1775 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 13 00:03:37.300559 containerd[1467]: time="2025-05-13T00:03:37.300472364Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.29.3\"" May 13 00:03:37.442644 containerd[1467]: time="2025-05-13T00:03:37.442575339Z" level=error msg="Failed to destroy network for sandbox \"cacf8b5af9ce172df760defa82ced408d76b6051d23892a6f06deb1a4ae9a837\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 13 00:03:37.444189 containerd[1467]: time="2025-05-13T00:03:37.443488531Z" level=error msg="encountered an error cleaning up failed sandbox \"cacf8b5af9ce172df760defa82ced408d76b6051d23892a6f06deb1a4ae9a837\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 13 00:03:37.444189 containerd[1467]: time="2025-05-13T00:03:37.443575039Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-b6gpp,Uid:b8cf1bac-d87a-4a81-a025-a39d077da472,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"cacf8b5af9ce172df760defa82ced408d76b6051d23892a6f06deb1a4ae9a837\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 13 00:03:37.444292 kubelet[1775]: E0513 00:03:37.443886 1775 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"cacf8b5af9ce172df760defa82ced408d76b6051d23892a6f06deb1a4ae9a837\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 13 00:03:37.444292 kubelet[1775]: E0513 00:03:37.443959 1775 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"cacf8b5af9ce172df760defa82ced408d76b6051d23892a6f06deb1a4ae9a837\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-b6gpp" May 13 00:03:37.444292 kubelet[1775]: E0513 00:03:37.443985 1775 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"cacf8b5af9ce172df760defa82ced408d76b6051d23892a6f06deb1a4ae9a837\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-b6gpp" May 13 00:03:37.444204 systemd[1]: 
run-containerd-io.containerd.grpc.v1.cri-sandboxes-cacf8b5af9ce172df760defa82ced408d76b6051d23892a6f06deb1a4ae9a837-shm.mount: Deactivated successfully. May 13 00:03:37.444558 kubelet[1775]: E0513 00:03:37.444025 1775 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-b6gpp_calico-system(b8cf1bac-d87a-4a81-a025-a39d077da472)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-b6gpp_calico-system(b8cf1bac-d87a-4a81-a025-a39d077da472)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"cacf8b5af9ce172df760defa82ced408d76b6051d23892a6f06deb1a4ae9a837\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-b6gpp" podUID="b8cf1bac-d87a-4a81-a025-a39d077da472" May 13 00:03:38.062268 kubelet[1775]: E0513 00:03:38.062204 1775 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 13 00:03:38.302804 kubelet[1775]: I0513 00:03:38.302313 1775 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="cacf8b5af9ce172df760defa82ced408d76b6051d23892a6f06deb1a4ae9a837" May 13 00:03:38.304074 containerd[1467]: time="2025-05-13T00:03:38.303761351Z" level=info msg="StopPodSandbox for \"cacf8b5af9ce172df760defa82ced408d76b6051d23892a6f06deb1a4ae9a837\"" May 13 00:03:38.304074 containerd[1467]: time="2025-05-13T00:03:38.303931439Z" level=info msg="Ensure that sandbox cacf8b5af9ce172df760defa82ced408d76b6051d23892a6f06deb1a4ae9a837 in task-service has been cleanup successfully" May 13 00:03:38.306432 containerd[1467]: time="2025-05-13T00:03:38.304361647Z" level=info msg="TearDown network for sandbox \"cacf8b5af9ce172df760defa82ced408d76b6051d23892a6f06deb1a4ae9a837\" successfully" May 13 00:03:38.306432 containerd[1467]: time="2025-05-13T00:03:38.304381242Z" level=info msg="StopPodSandbox for \"cacf8b5af9ce172df760defa82ced408d76b6051d23892a6f06deb1a4ae9a837\" returns successfully" May 13 00:03:38.306432 containerd[1467]: time="2025-05-13T00:03:38.305969019Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-b6gpp,Uid:b8cf1bac-d87a-4a81-a025-a39d077da472,Namespace:calico-system,Attempt:1,}" May 13 00:03:38.305367 systemd[1]: run-netns-cni\x2db307581e\x2d9529\x2dfa54\x2dc3c7\x2d46fd529e02d9.mount: Deactivated successfully. 
May 13 00:03:38.406282 containerd[1467]: time="2025-05-13T00:03:38.406135678Z" level=error msg="Failed to destroy network for sandbox \"0554a6e5e42918fa7a02de66d180cd452121a6660af1fdadb3a59ed3afe8298e\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 13 00:03:38.406879 containerd[1467]: time="2025-05-13T00:03:38.406843684Z" level=error msg="encountered an error cleaning up failed sandbox \"0554a6e5e42918fa7a02de66d180cd452121a6660af1fdadb3a59ed3afe8298e\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 13 00:03:38.407024 containerd[1467]: time="2025-05-13T00:03:38.406916077Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-b6gpp,Uid:b8cf1bac-d87a-4a81-a025-a39d077da472,Namespace:calico-system,Attempt:1,} failed, error" error="failed to setup network for sandbox \"0554a6e5e42918fa7a02de66d180cd452121a6660af1fdadb3a59ed3afe8298e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 13 00:03:38.407646 kubelet[1775]: E0513 00:03:38.407308 1775 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0554a6e5e42918fa7a02de66d180cd452121a6660af1fdadb3a59ed3afe8298e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 13 00:03:38.407646 kubelet[1775]: E0513 00:03:38.407371 1775 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0554a6e5e42918fa7a02de66d180cd452121a6660af1fdadb3a59ed3afe8298e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-b6gpp" May 13 00:03:38.407646 kubelet[1775]: E0513 00:03:38.407392 1775 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0554a6e5e42918fa7a02de66d180cd452121a6660af1fdadb3a59ed3afe8298e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-b6gpp" May 13 00:03:38.407798 kubelet[1775]: E0513 00:03:38.407438 1775 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-b6gpp_calico-system(b8cf1bac-d87a-4a81-a025-a39d077da472)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-b6gpp_calico-system(b8cf1bac-d87a-4a81-a025-a39d077da472)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"0554a6e5e42918fa7a02de66d180cd452121a6660af1fdadb3a59ed3afe8298e\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" 
pod="calico-system/csi-node-driver-b6gpp" podUID="b8cf1bac-d87a-4a81-a025-a39d077da472" May 13 00:03:38.408471 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-0554a6e5e42918fa7a02de66d180cd452121a6660af1fdadb3a59ed3afe8298e-shm.mount: Deactivated successfully. May 13 00:03:39.062549 kubelet[1775]: E0513 00:03:39.062511 1775 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 13 00:03:39.305155 kubelet[1775]: I0513 00:03:39.304802 1775 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="0554a6e5e42918fa7a02de66d180cd452121a6660af1fdadb3a59ed3afe8298e" May 13 00:03:39.305609 containerd[1467]: time="2025-05-13T00:03:39.305385354Z" level=info msg="StopPodSandbox for \"0554a6e5e42918fa7a02de66d180cd452121a6660af1fdadb3a59ed3afe8298e\"" May 13 00:03:39.305609 containerd[1467]: time="2025-05-13T00:03:39.305543213Z" level=info msg="Ensure that sandbox 0554a6e5e42918fa7a02de66d180cd452121a6660af1fdadb3a59ed3afe8298e in task-service has been cleanup successfully" May 13 00:03:39.306972 systemd[1]: run-netns-cni\x2d47bb2f0b\x2ddd05\x2d6e56\x2d6c6e\x2d2b58fdf156a7.mount: Deactivated successfully. May 13 00:03:39.307647 containerd[1467]: time="2025-05-13T00:03:39.307613336Z" level=info msg="TearDown network for sandbox \"0554a6e5e42918fa7a02de66d180cd452121a6660af1fdadb3a59ed3afe8298e\" successfully" May 13 00:03:39.307647 containerd[1467]: time="2025-05-13T00:03:39.307644709Z" level=info msg="StopPodSandbox for \"0554a6e5e42918fa7a02de66d180cd452121a6660af1fdadb3a59ed3afe8298e\" returns successfully" May 13 00:03:39.308390 containerd[1467]: time="2025-05-13T00:03:39.307982698Z" level=info msg="StopPodSandbox for \"cacf8b5af9ce172df760defa82ced408d76b6051d23892a6f06deb1a4ae9a837\"" May 13 00:03:39.308390 containerd[1467]: time="2025-05-13T00:03:39.308066716Z" level=info msg="TearDown network for sandbox \"cacf8b5af9ce172df760defa82ced408d76b6051d23892a6f06deb1a4ae9a837\" successfully" May 13 00:03:39.308390 containerd[1467]: time="2025-05-13T00:03:39.308077014Z" level=info msg="StopPodSandbox for \"cacf8b5af9ce172df760defa82ced408d76b6051d23892a6f06deb1a4ae9a837\" returns successfully" May 13 00:03:39.308789 containerd[1467]: time="2025-05-13T00:03:39.308760616Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-b6gpp,Uid:b8cf1bac-d87a-4a81-a025-a39d077da472,Namespace:calico-system,Attempt:2,}" May 13 00:03:39.380631 containerd[1467]: time="2025-05-13T00:03:39.380501332Z" level=error msg="Failed to destroy network for sandbox \"536faf3793eecb9e76b1ab9afdae33ce4d504f53ccce127e5048f55d89730eec\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 13 00:03:39.381126 containerd[1467]: time="2025-05-13T00:03:39.381095088Z" level=error msg="encountered an error cleaning up failed sandbox \"536faf3793eecb9e76b1ab9afdae33ce4d504f53ccce127e5048f55d89730eec\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 13 00:03:39.381289 containerd[1467]: time="2025-05-13T00:03:39.381267435Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-b6gpp,Uid:b8cf1bac-d87a-4a81-a025-a39d077da472,Namespace:calico-system,Attempt:2,} failed, error" 
error="failed to setup network for sandbox \"536faf3793eecb9e76b1ab9afdae33ce4d504f53ccce127e5048f55d89730eec\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 13 00:03:39.381633 kubelet[1775]: E0513 00:03:39.381590 1775 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"536faf3793eecb9e76b1ab9afdae33ce4d504f53ccce127e5048f55d89730eec\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 13 00:03:39.381800 kubelet[1775]: E0513 00:03:39.381773 1775 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"536faf3793eecb9e76b1ab9afdae33ce4d504f53ccce127e5048f55d89730eec\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-b6gpp" May 13 00:03:39.381942 kubelet[1775]: E0513 00:03:39.381877 1775 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"536faf3793eecb9e76b1ab9afdae33ce4d504f53ccce127e5048f55d89730eec\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-b6gpp" May 13 00:03:39.385204 kubelet[1775]: E0513 00:03:39.381990 1775 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-b6gpp_calico-system(b8cf1bac-d87a-4a81-a025-a39d077da472)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-b6gpp_calico-system(b8cf1bac-d87a-4a81-a025-a39d077da472)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"536faf3793eecb9e76b1ab9afdae33ce4d504f53ccce127e5048f55d89730eec\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-b6gpp" podUID="b8cf1bac-d87a-4a81-a025-a39d077da472" May 13 00:03:39.382190 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-536faf3793eecb9e76b1ab9afdae33ce4d504f53ccce127e5048f55d89730eec-shm.mount: Deactivated successfully. May 13 00:03:40.063323 kubelet[1775]: E0513 00:03:40.063284 1775 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 13 00:03:40.090272 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3948227272.mount: Deactivated successfully. 
May 13 00:03:40.250809 containerd[1467]: time="2025-05-13T00:03:40.250754118Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.29.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 13 00:03:40.251255 containerd[1467]: time="2025-05-13T00:03:40.251208397Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.29.3: active requests=0, bytes read=138981893" May 13 00:03:40.251928 containerd[1467]: time="2025-05-13T00:03:40.251899596Z" level=info msg="ImageCreate event name:\"sha256:cdcce3ec4624a24c28cdc07b0ee29ddf6703628edee7452a3f8a8b4816bfd057\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 13 00:03:40.255738 containerd[1467]: time="2025-05-13T00:03:40.255228529Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:750e267b4f8217e0ca9e4107228370190d1a2499b72112ad04370ab9b4553916\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 13 00:03:40.255738 containerd[1467]: time="2025-05-13T00:03:40.255588439Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.29.3\" with image id \"sha256:cdcce3ec4624a24c28cdc07b0ee29ddf6703628edee7452a3f8a8b4816bfd057\", repo tag \"ghcr.io/flatcar/calico/node:v3.29.3\", repo digest \"ghcr.io/flatcar/calico/node@sha256:750e267b4f8217e0ca9e4107228370190d1a2499b72112ad04370ab9b4553916\", size \"138981755\" in 2.955078205s" May 13 00:03:40.255738 containerd[1467]: time="2025-05-13T00:03:40.255609117Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.29.3\" returns image reference \"sha256:cdcce3ec4624a24c28cdc07b0ee29ddf6703628edee7452a3f8a8b4816bfd057\"" May 13 00:03:40.262526 containerd[1467]: time="2025-05-13T00:03:40.262483544Z" level=info msg="CreateContainer within sandbox \"ab8a798b8c2ba216875068e65dce23b48fd2426e1794c82db7b7a4c0089cff47\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" May 13 00:03:40.272752 containerd[1467]: time="2025-05-13T00:03:40.272712611Z" level=info msg="CreateContainer within sandbox \"ab8a798b8c2ba216875068e65dce23b48fd2426e1794c82db7b7a4c0089cff47\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"c6ee98e84adbc02e1ce03144d23a93bc7d3223292df95bfe7f7388fd9efb862e\"" May 13 00:03:40.273251 containerd[1467]: time="2025-05-13T00:03:40.273192438Z" level=info msg="StartContainer for \"c6ee98e84adbc02e1ce03144d23a93bc7d3223292df95bfe7f7388fd9efb862e\"" May 13 00:03:40.301313 systemd[1]: Started cri-containerd-c6ee98e84adbc02e1ce03144d23a93bc7d3223292df95bfe7f7388fd9efb862e.scope - libcontainer container c6ee98e84adbc02e1ce03144d23a93bc7d3223292df95bfe7f7388fd9efb862e. 
May 13 00:03:40.308492 kubelet[1775]: I0513 00:03:40.308451 1775 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="536faf3793eecb9e76b1ab9afdae33ce4d504f53ccce127e5048f55d89730eec" May 13 00:03:40.309293 containerd[1467]: time="2025-05-13T00:03:40.309233628Z" level=info msg="StopPodSandbox for \"536faf3793eecb9e76b1ab9afdae33ce4d504f53ccce127e5048f55d89730eec\"" May 13 00:03:40.309717 containerd[1467]: time="2025-05-13T00:03:40.309391468Z" level=info msg="Ensure that sandbox 536faf3793eecb9e76b1ab9afdae33ce4d504f53ccce127e5048f55d89730eec in task-service has been cleanup successfully" May 13 00:03:40.309717 containerd[1467]: time="2025-05-13T00:03:40.309623877Z" level=info msg="TearDown network for sandbox \"536faf3793eecb9e76b1ab9afdae33ce4d504f53ccce127e5048f55d89730eec\" successfully" May 13 00:03:40.309717 containerd[1467]: time="2025-05-13T00:03:40.309638806Z" level=info msg="StopPodSandbox for \"536faf3793eecb9e76b1ab9afdae33ce4d504f53ccce127e5048f55d89730eec\" returns successfully" May 13 00:03:40.310185 containerd[1467]: time="2025-05-13T00:03:40.309885187Z" level=info msg="StopPodSandbox for \"0554a6e5e42918fa7a02de66d180cd452121a6660af1fdadb3a59ed3afe8298e\"" May 13 00:03:40.310185 containerd[1467]: time="2025-05-13T00:03:40.309966383Z" level=info msg="TearDown network for sandbox \"0554a6e5e42918fa7a02de66d180cd452121a6660af1fdadb3a59ed3afe8298e\" successfully" May 13 00:03:40.310185 containerd[1467]: time="2025-05-13T00:03:40.309989176Z" level=info msg="StopPodSandbox for \"0554a6e5e42918fa7a02de66d180cd452121a6660af1fdadb3a59ed3afe8298e\" returns successfully" May 13 00:03:40.310473 containerd[1467]: time="2025-05-13T00:03:40.310418426Z" level=info msg="StopPodSandbox for \"cacf8b5af9ce172df760defa82ced408d76b6051d23892a6f06deb1a4ae9a837\"" May 13 00:03:40.310594 containerd[1467]: time="2025-05-13T00:03:40.310563293Z" level=info msg="TearDown network for sandbox \"cacf8b5af9ce172df760defa82ced408d76b6051d23892a6f06deb1a4ae9a837\" successfully" May 13 00:03:40.312263 containerd[1467]: time="2025-05-13T00:03:40.310758018Z" level=info msg="StopPodSandbox for \"cacf8b5af9ce172df760defa82ced408d76b6051d23892a6f06deb1a4ae9a837\" returns successfully" May 13 00:03:40.312263 containerd[1467]: time="2025-05-13T00:03:40.311169783Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-b6gpp,Uid:b8cf1bac-d87a-4a81-a025-a39d077da472,Namespace:calico-system,Attempt:3,}" May 13 00:03:40.330694 containerd[1467]: time="2025-05-13T00:03:40.329078525Z" level=info msg="StartContainer for \"c6ee98e84adbc02e1ce03144d23a93bc7d3223292df95bfe7f7388fd9efb862e\" returns successfully" May 13 00:03:40.363321 containerd[1467]: time="2025-05-13T00:03:40.363259045Z" level=error msg="Failed to destroy network for sandbox \"6b3a67d0151f61b6cc1afc46e4148906a07ee3f03eaa46372d31ca721f8054b6\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 13 00:03:40.363651 containerd[1467]: time="2025-05-13T00:03:40.363612609Z" level=error msg="encountered an error cleaning up failed sandbox \"6b3a67d0151f61b6cc1afc46e4148906a07ee3f03eaa46372d31ca721f8054b6\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 13 00:03:40.363710 containerd[1467]: 
time="2025-05-13T00:03:40.363676759Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-b6gpp,Uid:b8cf1bac-d87a-4a81-a025-a39d077da472,Namespace:calico-system,Attempt:3,} failed, error" error="failed to setup network for sandbox \"6b3a67d0151f61b6cc1afc46e4148906a07ee3f03eaa46372d31ca721f8054b6\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 13 00:03:40.363933 kubelet[1775]: E0513 00:03:40.363893 1775 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6b3a67d0151f61b6cc1afc46e4148906a07ee3f03eaa46372d31ca721f8054b6\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 13 00:03:40.363989 kubelet[1775]: E0513 00:03:40.363950 1775 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6b3a67d0151f61b6cc1afc46e4148906a07ee3f03eaa46372d31ca721f8054b6\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-b6gpp" May 13 00:03:40.363989 kubelet[1775]: E0513 00:03:40.363970 1775 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6b3a67d0151f61b6cc1afc46e4148906a07ee3f03eaa46372d31ca721f8054b6\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-b6gpp" May 13 00:03:40.364045 kubelet[1775]: E0513 00:03:40.364007 1775 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-b6gpp_calico-system(b8cf1bac-d87a-4a81-a025-a39d077da472)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-b6gpp_calico-system(b8cf1bac-d87a-4a81-a025-a39d077da472)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"6b3a67d0151f61b6cc1afc46e4148906a07ee3f03eaa46372d31ca721f8054b6\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-b6gpp" podUID="b8cf1bac-d87a-4a81-a025-a39d077da472" May 13 00:03:40.433048 kubelet[1775]: I0513 00:03:40.431585 1775 topology_manager.go:215] "Topology Admit Handler" podUID="dec3344c-9e25-4986-bf9d-651f3093cbde" podNamespace="default" podName="nginx-deployment-85f456d6dd-47jff" May 13 00:03:40.438003 systemd[1]: Created slice kubepods-besteffort-poddec3344c_9e25_4986_bf9d_651f3093cbde.slice - libcontainer container kubepods-besteffort-poddec3344c_9e25_4986_bf9d_651f3093cbde.slice. 
May 13 00:03:40.442519 kubelet[1775]: I0513 00:03:40.442476 1775 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rmpn5\" (UniqueName: \"kubernetes.io/projected/dec3344c-9e25-4986-bf9d-651f3093cbde-kube-api-access-rmpn5\") pod \"nginx-deployment-85f456d6dd-47jff\" (UID: \"dec3344c-9e25-4986-bf9d-651f3093cbde\") " pod="default/nginx-deployment-85f456d6dd-47jff" May 13 00:03:40.504303 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information. May 13 00:03:40.504410 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld . All Rights Reserved. May 13 00:03:40.741922 containerd[1467]: time="2025-05-13T00:03:40.741779914Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-85f456d6dd-47jff,Uid:dec3344c-9e25-4986-bf9d-651f3093cbde,Namespace:default,Attempt:0,}" May 13 00:03:40.890969 systemd-networkd[1394]: cali6710b319b34: Link UP May 13 00:03:40.891122 systemd-networkd[1394]: cali6710b319b34: Gained carrier May 13 00:03:40.899798 containerd[1467]: 2025-05-13 00:03:40.771 [INFO][2492] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist May 13 00:03:40.899798 containerd[1467]: 2025-05-13 00:03:40.786 [INFO][2492] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {10.0.0.131-k8s-nginx--deployment--85f456d6dd--47jff-eth0 nginx-deployment-85f456d6dd- default dec3344c-9e25-4986-bf9d-651f3093cbde 995 0 2025-05-13 00:03:40 +0000 UTC map[app:nginx pod-template-hash:85f456d6dd projectcalico.org/namespace:default projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:default] map[] [] [] []} {k8s 10.0.0.131 nginx-deployment-85f456d6dd-47jff eth0 default [] [] [kns.default ksa.default.default] cali6710b319b34 [] []}} ContainerID="d8180f61f058f0d7c5efa17c6b7572f8f64cfc6cecfba211ddab21e0f6916888" Namespace="default" Pod="nginx-deployment-85f456d6dd-47jff" WorkloadEndpoint="10.0.0.131-k8s-nginx--deployment--85f456d6dd--47jff-" May 13 00:03:40.899798 containerd[1467]: 2025-05-13 00:03:40.786 [INFO][2492] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="d8180f61f058f0d7c5efa17c6b7572f8f64cfc6cecfba211ddab21e0f6916888" Namespace="default" Pod="nginx-deployment-85f456d6dd-47jff" WorkloadEndpoint="10.0.0.131-k8s-nginx--deployment--85f456d6dd--47jff-eth0" May 13 00:03:40.899798 containerd[1467]: 2025-05-13 00:03:40.849 [INFO][2507] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="d8180f61f058f0d7c5efa17c6b7572f8f64cfc6cecfba211ddab21e0f6916888" HandleID="k8s-pod-network.d8180f61f058f0d7c5efa17c6b7572f8f64cfc6cecfba211ddab21e0f6916888" Workload="10.0.0.131-k8s-nginx--deployment--85f456d6dd--47jff-eth0" May 13 00:03:40.899798 containerd[1467]: 2025-05-13 00:03:40.861 [INFO][2507] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="d8180f61f058f0d7c5efa17c6b7572f8f64cfc6cecfba211ddab21e0f6916888" HandleID="k8s-pod-network.d8180f61f058f0d7c5efa17c6b7572f8f64cfc6cecfba211ddab21e0f6916888" Workload="10.0.0.131-k8s-nginx--deployment--85f456d6dd--47jff-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x400034eae0), Attrs:map[string]string{"namespace":"default", "node":"10.0.0.131", "pod":"nginx-deployment-85f456d6dd-47jff", "timestamp":"2025-05-13 00:03:40.849261423 +0000 UTC"}, Hostname:"10.0.0.131", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), 
HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} May 13 00:03:40.899798 containerd[1467]: 2025-05-13 00:03:40.861 [INFO][2507] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 13 00:03:40.899798 containerd[1467]: 2025-05-13 00:03:40.861 [INFO][2507] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 13 00:03:40.899798 containerd[1467]: 2025-05-13 00:03:40.861 [INFO][2507] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host '10.0.0.131' May 13 00:03:40.899798 containerd[1467]: 2025-05-13 00:03:40.862 [INFO][2507] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.d8180f61f058f0d7c5efa17c6b7572f8f64cfc6cecfba211ddab21e0f6916888" host="10.0.0.131" May 13 00:03:40.899798 containerd[1467]: 2025-05-13 00:03:40.866 [INFO][2507] ipam/ipam.go 372: Looking up existing affinities for host host="10.0.0.131" May 13 00:03:40.899798 containerd[1467]: 2025-05-13 00:03:40.870 [INFO][2507] ipam/ipam.go 489: Trying affinity for 192.168.18.0/26 host="10.0.0.131" May 13 00:03:40.899798 containerd[1467]: 2025-05-13 00:03:40.872 [INFO][2507] ipam/ipam.go 155: Attempting to load block cidr=192.168.18.0/26 host="10.0.0.131" May 13 00:03:40.899798 containerd[1467]: 2025-05-13 00:03:40.874 [INFO][2507] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.18.0/26 host="10.0.0.131" May 13 00:03:40.899798 containerd[1467]: 2025-05-13 00:03:40.874 [INFO][2507] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.18.0/26 handle="k8s-pod-network.d8180f61f058f0d7c5efa17c6b7572f8f64cfc6cecfba211ddab21e0f6916888" host="10.0.0.131" May 13 00:03:40.899798 containerd[1467]: 2025-05-13 00:03:40.875 [INFO][2507] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.d8180f61f058f0d7c5efa17c6b7572f8f64cfc6cecfba211ddab21e0f6916888 May 13 00:03:40.899798 containerd[1467]: 2025-05-13 00:03:40.879 [INFO][2507] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.18.0/26 handle="k8s-pod-network.d8180f61f058f0d7c5efa17c6b7572f8f64cfc6cecfba211ddab21e0f6916888" host="10.0.0.131" May 13 00:03:40.899798 containerd[1467]: 2025-05-13 00:03:40.884 [INFO][2507] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.18.1/26] block=192.168.18.0/26 handle="k8s-pod-network.d8180f61f058f0d7c5efa17c6b7572f8f64cfc6cecfba211ddab21e0f6916888" host="10.0.0.131" May 13 00:03:40.899798 containerd[1467]: 2025-05-13 00:03:40.884 [INFO][2507] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.18.1/26] handle="k8s-pod-network.d8180f61f058f0d7c5efa17c6b7572f8f64cfc6cecfba211ddab21e0f6916888" host="10.0.0.131" May 13 00:03:40.899798 containerd[1467]: 2025-05-13 00:03:40.884 [INFO][2507] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
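The IPAM trace above follows a fixed sequence: look up existing affinities for host 10.0.0.131, try the affine block 192.168.18.0/26, load it, then claim the next free address (192.168.18.1 here). As a quick sanity check on the block arithmetic only (Go standard library, not Calico's ipam package), a /26 spans 64 addresses:

package main

import (
    "fmt"
    "net/netip"
)

func main() {
    // The affine block claimed for host 10.0.0.131 in the log above.
    block := netip.MustParsePrefix("192.168.18.0/26")

    // A /26 spans 2^(32-26) = 64 addresses: 192.168.18.0 .. 192.168.18.63.
    n := 0
    first, last := netip.Addr{}, netip.Addr{}
    for a := block.Addr(); block.Contains(a); a = a.Next() {
        if n == 0 {
            first = a
        }
        last = a
        n++
    }
    fmt.Printf("block %s holds %d addresses (%s .. %s)\n", block, n, first, last)
    // Prints: block 192.168.18.0/26 holds 64 addresses (192.168.18.0 .. 192.168.18.63)
}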
May 13 00:03:40.899798 containerd[1467]: 2025-05-13 00:03:40.884 [INFO][2507] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.18.1/26] IPv6=[] ContainerID="d8180f61f058f0d7c5efa17c6b7572f8f64cfc6cecfba211ddab21e0f6916888" HandleID="k8s-pod-network.d8180f61f058f0d7c5efa17c6b7572f8f64cfc6cecfba211ddab21e0f6916888" Workload="10.0.0.131-k8s-nginx--deployment--85f456d6dd--47jff-eth0" May 13 00:03:40.900398 containerd[1467]: 2025-05-13 00:03:40.886 [INFO][2492] cni-plugin/k8s.go 386: Populated endpoint ContainerID="d8180f61f058f0d7c5efa17c6b7572f8f64cfc6cecfba211ddab21e0f6916888" Namespace="default" Pod="nginx-deployment-85f456d6dd-47jff" WorkloadEndpoint="10.0.0.131-k8s-nginx--deployment--85f456d6dd--47jff-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"10.0.0.131-k8s-nginx--deployment--85f456d6dd--47jff-eth0", GenerateName:"nginx-deployment-85f456d6dd-", Namespace:"default", SelfLink:"", UID:"dec3344c-9e25-4986-bf9d-651f3093cbde", ResourceVersion:"995", Generation:0, CreationTimestamp:time.Date(2025, time.May, 13, 0, 3, 40, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app":"nginx", "pod-template-hash":"85f456d6dd", "projectcalico.org/namespace":"default", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"default"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"10.0.0.131", ContainerID:"", Pod:"nginx-deployment-85f456d6dd-47jff", Endpoint:"eth0", ServiceAccountName:"default", IPNetworks:[]string{"192.168.18.1/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.default", "ksa.default.default"}, InterfaceName:"cali6710b319b34", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} May 13 00:03:40.900398 containerd[1467]: 2025-05-13 00:03:40.886 [INFO][2492] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.18.1/32] ContainerID="d8180f61f058f0d7c5efa17c6b7572f8f64cfc6cecfba211ddab21e0f6916888" Namespace="default" Pod="nginx-deployment-85f456d6dd-47jff" WorkloadEndpoint="10.0.0.131-k8s-nginx--deployment--85f456d6dd--47jff-eth0" May 13 00:03:40.900398 containerd[1467]: 2025-05-13 00:03:40.886 [INFO][2492] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali6710b319b34 ContainerID="d8180f61f058f0d7c5efa17c6b7572f8f64cfc6cecfba211ddab21e0f6916888" Namespace="default" Pod="nginx-deployment-85f456d6dd-47jff" WorkloadEndpoint="10.0.0.131-k8s-nginx--deployment--85f456d6dd--47jff-eth0" May 13 00:03:40.900398 containerd[1467]: 2025-05-13 00:03:40.891 [INFO][2492] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="d8180f61f058f0d7c5efa17c6b7572f8f64cfc6cecfba211ddab21e0f6916888" Namespace="default" Pod="nginx-deployment-85f456d6dd-47jff" WorkloadEndpoint="10.0.0.131-k8s-nginx--deployment--85f456d6dd--47jff-eth0" May 13 00:03:40.900398 containerd[1467]: 2025-05-13 00:03:40.891 [INFO][2492] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="d8180f61f058f0d7c5efa17c6b7572f8f64cfc6cecfba211ddab21e0f6916888" Namespace="default" Pod="nginx-deployment-85f456d6dd-47jff" WorkloadEndpoint="10.0.0.131-k8s-nginx--deployment--85f456d6dd--47jff-eth0" 
endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"10.0.0.131-k8s-nginx--deployment--85f456d6dd--47jff-eth0", GenerateName:"nginx-deployment-85f456d6dd-", Namespace:"default", SelfLink:"", UID:"dec3344c-9e25-4986-bf9d-651f3093cbde", ResourceVersion:"995", Generation:0, CreationTimestamp:time.Date(2025, time.May, 13, 0, 3, 40, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app":"nginx", "pod-template-hash":"85f456d6dd", "projectcalico.org/namespace":"default", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"default"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"10.0.0.131", ContainerID:"d8180f61f058f0d7c5efa17c6b7572f8f64cfc6cecfba211ddab21e0f6916888", Pod:"nginx-deployment-85f456d6dd-47jff", Endpoint:"eth0", ServiceAccountName:"default", IPNetworks:[]string{"192.168.18.1/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.default", "ksa.default.default"}, InterfaceName:"cali6710b319b34", MAC:"f2:63:90:f8:12:b3", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} May 13 00:03:40.900398 containerd[1467]: 2025-05-13 00:03:40.897 [INFO][2492] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="d8180f61f058f0d7c5efa17c6b7572f8f64cfc6cecfba211ddab21e0f6916888" Namespace="default" Pod="nginx-deployment-85f456d6dd-47jff" WorkloadEndpoint="10.0.0.131-k8s-nginx--deployment--85f456d6dd--47jff-eth0" May 13 00:03:40.916462 containerd[1467]: time="2025-05-13T00:03:40.916360901Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 13 00:03:40.916462 containerd[1467]: time="2025-05-13T00:03:40.916440061Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 13 00:03:40.916462 containerd[1467]: time="2025-05-13T00:03:40.916456707Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 13 00:03:40.917002 containerd[1467]: time="2025-05-13T00:03:40.916930028Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 13 00:03:40.936333 systemd[1]: Started cri-containerd-d8180f61f058f0d7c5efa17c6b7572f8f64cfc6cecfba211ddab21e0f6916888.scope - libcontainer container d8180f61f058f0d7c5efa17c6b7572f8f64cfc6cecfba211ddab21e0f6916888. 
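The updated endpoint written above now carries MAC f2:63:90:f8:12:b3 on cali6710b319b34. The leading octet 0xf2 is 1111 0010 in binary: the group bit is clear and the locally-administered bit is set, i.e. a locally assigned unicast address, which is what CNI-created veth interfaces typically carry. A tiny check of those two bits (illustrative only; this is not how Calico itself chooses the address):

package main

import (
    "fmt"
    "net"
)

func main() {
    mac, err := net.ParseMAC("f2:63:90:f8:12:b3") // value from the endpoint record above
    if err != nil {
        panic(err)
    }
    local := mac[0]&0x02 != 0     // bit 1 of the first octet: locally administered
    multicast := mac[0]&0x01 != 0 // bit 0 of the first octet: group/multicast
    fmt.Printf("%s locally-administered=%v multicast=%v\n", mac, local, multicast)
    // Prints: f2:63:90:f8:12:b3 locally-administered=true multicast=false
}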
May 13 00:03:40.945793 systemd-resolved[1310]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address May 13 00:03:40.963812 containerd[1467]: time="2025-05-13T00:03:40.963762744Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-85f456d6dd-47jff,Uid:dec3344c-9e25-4986-bf9d-651f3093cbde,Namespace:default,Attempt:0,} returns sandbox id \"d8180f61f058f0d7c5efa17c6b7572f8f64cfc6cecfba211ddab21e0f6916888\"" May 13 00:03:40.965416 containerd[1467]: time="2025-05-13T00:03:40.965326494Z" level=info msg="PullImage \"ghcr.io/flatcar/nginx:latest\"" May 13 00:03:41.065204 kubelet[1775]: E0513 00:03:41.064293 1775 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 13 00:03:41.094305 systemd[1]: run-netns-cni\x2dac8c68ca\x2d3e87\x2ddfc0\x2d10d1\x2d8613f0e4d90d.mount: Deactivated successfully. May 13 00:03:41.316333 kubelet[1775]: E0513 00:03:41.316220 1775 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 13 00:03:41.318436 kubelet[1775]: I0513 00:03:41.318410 1775 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="6b3a67d0151f61b6cc1afc46e4148906a07ee3f03eaa46372d31ca721f8054b6" May 13 00:03:41.319123 containerd[1467]: time="2025-05-13T00:03:41.318853589Z" level=info msg="StopPodSandbox for \"6b3a67d0151f61b6cc1afc46e4148906a07ee3f03eaa46372d31ca721f8054b6\"" May 13 00:03:41.319123 containerd[1467]: time="2025-05-13T00:03:41.319004742Z" level=info msg="Ensure that sandbox 6b3a67d0151f61b6cc1afc46e4148906a07ee3f03eaa46372d31ca721f8054b6 in task-service has been cleanup successfully" May 13 00:03:41.319668 containerd[1467]: time="2025-05-13T00:03:41.319563560Z" level=info msg="TearDown network for sandbox \"6b3a67d0151f61b6cc1afc46e4148906a07ee3f03eaa46372d31ca721f8054b6\" successfully" May 13 00:03:41.319668 containerd[1467]: time="2025-05-13T00:03:41.319593822Z" level=info msg="StopPodSandbox for \"6b3a67d0151f61b6cc1afc46e4148906a07ee3f03eaa46372d31ca721f8054b6\" returns successfully" May 13 00:03:41.319950 containerd[1467]: time="2025-05-13T00:03:41.319932978Z" level=info msg="StopPodSandbox for \"536faf3793eecb9e76b1ab9afdae33ce4d504f53ccce127e5048f55d89730eec\"" May 13 00:03:41.320155 containerd[1467]: time="2025-05-13T00:03:41.320079300Z" level=info msg="TearDown network for sandbox \"536faf3793eecb9e76b1ab9afdae33ce4d504f53ccce127e5048f55d89730eec\" successfully" May 13 00:03:41.320155 containerd[1467]: time="2025-05-13T00:03:41.320093114Z" level=info msg="StopPodSandbox for \"536faf3793eecb9e76b1ab9afdae33ce4d504f53ccce127e5048f55d89730eec\" returns successfully" May 13 00:03:41.320404 containerd[1467]: time="2025-05-13T00:03:41.320382763Z" level=info msg="StopPodSandbox for \"0554a6e5e42918fa7a02de66d180cd452121a6660af1fdadb3a59ed3afe8298e\"" May 13 00:03:41.320473 containerd[1467]: time="2025-05-13T00:03:41.320459218Z" level=info msg="TearDown network for sandbox \"0554a6e5e42918fa7a02de66d180cd452121a6660af1fdadb3a59ed3afe8298e\" successfully" May 13 00:03:41.320500 containerd[1467]: time="2025-05-13T00:03:41.320473072Z" level=info msg="StopPodSandbox for \"0554a6e5e42918fa7a02de66d180cd452121a6660af1fdadb3a59ed3afe8298e\" returns successfully" May 13 00:03:41.320892 containerd[1467]: time="2025-05-13T00:03:41.320757052Z" level=info msg="StopPodSandbox for 
\"cacf8b5af9ce172df760defa82ced408d76b6051d23892a6f06deb1a4ae9a837\"" May 13 00:03:41.320892 containerd[1467]: time="2025-05-13T00:03:41.320827319Z" level=info msg="TearDown network for sandbox \"cacf8b5af9ce172df760defa82ced408d76b6051d23892a6f06deb1a4ae9a837\" successfully" May 13 00:03:41.320892 containerd[1467]: time="2025-05-13T00:03:41.320836541Z" level=info msg="StopPodSandbox for \"cacf8b5af9ce172df760defa82ced408d76b6051d23892a6f06deb1a4ae9a837\" returns successfully" May 13 00:03:41.321171 systemd[1]: run-netns-cni\x2dd0f953ee\x2df6eb\x2d613c\x2d68c3\x2d51dcfc73570a.mount: Deactivated successfully. May 13 00:03:41.321619 containerd[1467]: time="2025-05-13T00:03:41.321197535Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-b6gpp,Uid:b8cf1bac-d87a-4a81-a025-a39d077da472,Namespace:calico-system,Attempt:4,}" May 13 00:03:41.335936 kubelet[1775]: I0513 00:03:41.335879 1775 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-xpxjn" podStartSLOduration=4.24722496 podStartE2EDuration="12.33586419s" podCreationTimestamp="2025-05-13 00:03:29 +0000 UTC" firstStartedPulling="2025-05-13 00:03:32.167833976 +0000 UTC m=+4.522763271" lastFinishedPulling="2025-05-13 00:03:40.256473206 +0000 UTC m=+12.611402501" observedRunningTime="2025-05-13 00:03:41.335352482 +0000 UTC m=+13.690281777" watchObservedRunningTime="2025-05-13 00:03:41.33586419 +0000 UTC m=+13.690793485" May 13 00:03:41.451241 systemd-networkd[1394]: calic429b6d591d: Link UP May 13 00:03:41.452108 systemd-networkd[1394]: calic429b6d591d: Gained carrier May 13 00:03:41.465540 containerd[1467]: 2025-05-13 00:03:41.346 [INFO][2575] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist May 13 00:03:41.465540 containerd[1467]: 2025-05-13 00:03:41.360 [INFO][2575] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {10.0.0.131-k8s-csi--node--driver--b6gpp-eth0 csi-node-driver- calico-system b8cf1bac-d87a-4a81-a025-a39d077da472 763 0 2025-05-13 00:03:29 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:55b7b4b9d k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s 10.0.0.131 csi-node-driver-b6gpp eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] calic429b6d591d [] []}} ContainerID="4236371613fbe6d453910e50ab54eeff880e4687d7b78c8a843ee74501a880c7" Namespace="calico-system" Pod="csi-node-driver-b6gpp" WorkloadEndpoint="10.0.0.131-k8s-csi--node--driver--b6gpp-" May 13 00:03:41.465540 containerd[1467]: 2025-05-13 00:03:41.361 [INFO][2575] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="4236371613fbe6d453910e50ab54eeff880e4687d7b78c8a843ee74501a880c7" Namespace="calico-system" Pod="csi-node-driver-b6gpp" WorkloadEndpoint="10.0.0.131-k8s-csi--node--driver--b6gpp-eth0" May 13 00:03:41.465540 containerd[1467]: 2025-05-13 00:03:41.386 [INFO][2589] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="4236371613fbe6d453910e50ab54eeff880e4687d7b78c8a843ee74501a880c7" HandleID="k8s-pod-network.4236371613fbe6d453910e50ab54eeff880e4687d7b78c8a843ee74501a880c7" Workload="10.0.0.131-k8s-csi--node--driver--b6gpp-eth0" May 13 00:03:41.465540 containerd[1467]: 2025-05-13 00:03:41.409 [INFO][2589] ipam/ipam_plugin.go 265: Auto assigning IP 
ContainerID="4236371613fbe6d453910e50ab54eeff880e4687d7b78c8a843ee74501a880c7" HandleID="k8s-pod-network.4236371613fbe6d453910e50ab54eeff880e4687d7b78c8a843ee74501a880c7" Workload="10.0.0.131-k8s-csi--node--driver--b6gpp-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40002dadd0), Attrs:map[string]string{"namespace":"calico-system", "node":"10.0.0.131", "pod":"csi-node-driver-b6gpp", "timestamp":"2025-05-13 00:03:41.386768354 +0000 UTC"}, Hostname:"10.0.0.131", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} May 13 00:03:41.465540 containerd[1467]: 2025-05-13 00:03:41.410 [INFO][2589] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 13 00:03:41.465540 containerd[1467]: 2025-05-13 00:03:41.410 [INFO][2589] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 13 00:03:41.465540 containerd[1467]: 2025-05-13 00:03:41.410 [INFO][2589] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host '10.0.0.131' May 13 00:03:41.465540 containerd[1467]: 2025-05-13 00:03:41.411 [INFO][2589] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.4236371613fbe6d453910e50ab54eeff880e4687d7b78c8a843ee74501a880c7" host="10.0.0.131" May 13 00:03:41.465540 containerd[1467]: 2025-05-13 00:03:41.416 [INFO][2589] ipam/ipam.go 372: Looking up existing affinities for host host="10.0.0.131" May 13 00:03:41.465540 containerd[1467]: 2025-05-13 00:03:41.423 [INFO][2589] ipam/ipam.go 489: Trying affinity for 192.168.18.0/26 host="10.0.0.131" May 13 00:03:41.465540 containerd[1467]: 2025-05-13 00:03:41.427 [INFO][2589] ipam/ipam.go 155: Attempting to load block cidr=192.168.18.0/26 host="10.0.0.131" May 13 00:03:41.465540 containerd[1467]: 2025-05-13 00:03:41.429 [INFO][2589] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.18.0/26 host="10.0.0.131" May 13 00:03:41.465540 containerd[1467]: 2025-05-13 00:03:41.429 [INFO][2589] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.18.0/26 handle="k8s-pod-network.4236371613fbe6d453910e50ab54eeff880e4687d7b78c8a843ee74501a880c7" host="10.0.0.131" May 13 00:03:41.465540 containerd[1467]: 2025-05-13 00:03:41.432 [INFO][2589] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.4236371613fbe6d453910e50ab54eeff880e4687d7b78c8a843ee74501a880c7 May 13 00:03:41.465540 containerd[1467]: 2025-05-13 00:03:41.439 [INFO][2589] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.18.0/26 handle="k8s-pod-network.4236371613fbe6d453910e50ab54eeff880e4687d7b78c8a843ee74501a880c7" host="10.0.0.131" May 13 00:03:41.465540 containerd[1467]: 2025-05-13 00:03:41.447 [INFO][2589] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.18.2/26] block=192.168.18.0/26 handle="k8s-pod-network.4236371613fbe6d453910e50ab54eeff880e4687d7b78c8a843ee74501a880c7" host="10.0.0.131" May 13 00:03:41.465540 containerd[1467]: 2025-05-13 00:03:41.447 [INFO][2589] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.18.2/26] handle="k8s-pod-network.4236371613fbe6d453910e50ab54eeff880e4687d7b78c8a843ee74501a880c7" host="10.0.0.131" May 13 00:03:41.465540 containerd[1467]: 2025-05-13 00:03:41.447 [INFO][2589] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
May 13 00:03:41.465540 containerd[1467]: 2025-05-13 00:03:41.447 [INFO][2589] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.18.2/26] IPv6=[] ContainerID="4236371613fbe6d453910e50ab54eeff880e4687d7b78c8a843ee74501a880c7" HandleID="k8s-pod-network.4236371613fbe6d453910e50ab54eeff880e4687d7b78c8a843ee74501a880c7" Workload="10.0.0.131-k8s-csi--node--driver--b6gpp-eth0" May 13 00:03:41.466326 containerd[1467]: 2025-05-13 00:03:41.449 [INFO][2575] cni-plugin/k8s.go 386: Populated endpoint ContainerID="4236371613fbe6d453910e50ab54eeff880e4687d7b78c8a843ee74501a880c7" Namespace="calico-system" Pod="csi-node-driver-b6gpp" WorkloadEndpoint="10.0.0.131-k8s-csi--node--driver--b6gpp-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"10.0.0.131-k8s-csi--node--driver--b6gpp-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"b8cf1bac-d87a-4a81-a025-a39d077da472", ResourceVersion:"763", Generation:0, CreationTimestamp:time.Date(2025, time.May, 13, 0, 3, 29, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"55b7b4b9d", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"10.0.0.131", ContainerID:"", Pod:"csi-node-driver-b6gpp", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.18.2/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"calic429b6d591d", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} May 13 00:03:41.466326 containerd[1467]: 2025-05-13 00:03:41.449 [INFO][2575] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.18.2/32] ContainerID="4236371613fbe6d453910e50ab54eeff880e4687d7b78c8a843ee74501a880c7" Namespace="calico-system" Pod="csi-node-driver-b6gpp" WorkloadEndpoint="10.0.0.131-k8s-csi--node--driver--b6gpp-eth0" May 13 00:03:41.466326 containerd[1467]: 2025-05-13 00:03:41.449 [INFO][2575] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calic429b6d591d ContainerID="4236371613fbe6d453910e50ab54eeff880e4687d7b78c8a843ee74501a880c7" Namespace="calico-system" Pod="csi-node-driver-b6gpp" WorkloadEndpoint="10.0.0.131-k8s-csi--node--driver--b6gpp-eth0" May 13 00:03:41.466326 containerd[1467]: 2025-05-13 00:03:41.451 [INFO][2575] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="4236371613fbe6d453910e50ab54eeff880e4687d7b78c8a843ee74501a880c7" Namespace="calico-system" Pod="csi-node-driver-b6gpp" WorkloadEndpoint="10.0.0.131-k8s-csi--node--driver--b6gpp-eth0" May 13 00:03:41.466326 containerd[1467]: 2025-05-13 00:03:41.451 [INFO][2575] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="4236371613fbe6d453910e50ab54eeff880e4687d7b78c8a843ee74501a880c7" Namespace="calico-system" Pod="csi-node-driver-b6gpp" WorkloadEndpoint="10.0.0.131-k8s-csi--node--driver--b6gpp-eth0" 
endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"10.0.0.131-k8s-csi--node--driver--b6gpp-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"b8cf1bac-d87a-4a81-a025-a39d077da472", ResourceVersion:"763", Generation:0, CreationTimestamp:time.Date(2025, time.May, 13, 0, 3, 29, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"55b7b4b9d", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"10.0.0.131", ContainerID:"4236371613fbe6d453910e50ab54eeff880e4687d7b78c8a843ee74501a880c7", Pod:"csi-node-driver-b6gpp", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.18.2/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"calic429b6d591d", MAC:"ea:26:64:9a:5d:4c", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} May 13 00:03:41.466326 containerd[1467]: 2025-05-13 00:03:41.462 [INFO][2575] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="4236371613fbe6d453910e50ab54eeff880e4687d7b78c8a843ee74501a880c7" Namespace="calico-system" Pod="csi-node-driver-b6gpp" WorkloadEndpoint="10.0.0.131-k8s-csi--node--driver--b6gpp-eth0" May 13 00:03:41.481727 containerd[1467]: time="2025-05-13T00:03:41.481556103Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 13 00:03:41.481727 containerd[1467]: time="2025-05-13T00:03:41.481608044Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 13 00:03:41.481727 containerd[1467]: time="2025-05-13T00:03:41.481619542Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 13 00:03:41.481727 containerd[1467]: time="2025-05-13T00:03:41.481694599Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 13 00:03:41.499343 systemd[1]: Started cri-containerd-4236371613fbe6d453910e50ab54eeff880e4687d7b78c8a843ee74501a880c7.scope - libcontainer container 4236371613fbe6d453910e50ab54eeff880e4687d7b78c8a843ee74501a880c7. 
May 13 00:03:41.508384 systemd-resolved[1310]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address May 13 00:03:41.517393 containerd[1467]: time="2025-05-13T00:03:41.517339676Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-b6gpp,Uid:b8cf1bac-d87a-4a81-a025-a39d077da472,Namespace:calico-system,Attempt:4,} returns sandbox id \"4236371613fbe6d453910e50ab54eeff880e4687d7b78c8a843ee74501a880c7\"" May 13 00:03:42.064853 kubelet[1775]: E0513 00:03:42.064812 1775 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 13 00:03:42.324673 kubelet[1775]: I0513 00:03:42.324567 1775 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" May 13 00:03:42.325446 kubelet[1775]: E0513 00:03:42.325322 1775 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 13 00:03:42.517283 systemd-networkd[1394]: cali6710b319b34: Gained IPv6LL May 13 00:03:42.639807 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3749350315.mount: Deactivated successfully. May 13 00:03:42.709245 systemd-networkd[1394]: calic429b6d591d: Gained IPv6LL May 13 00:03:43.066289 kubelet[1775]: E0513 00:03:43.065699 1775 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 13 00:03:43.603657 containerd[1467]: time="2025-05-13T00:03:43.603602923Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/nginx:latest\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 13 00:03:43.604188 containerd[1467]: time="2025-05-13T00:03:43.604132559Z" level=info msg="stop pulling image ghcr.io/flatcar/nginx:latest: active requests=0, bytes read=69948859" May 13 00:03:43.614241 containerd[1467]: time="2025-05-13T00:03:43.614179625Z" level=info msg="ImageCreate event name:\"sha256:e8b1cb61bd96acc3ff3c695318c9cc691213d532eee3731d038af92816fcb5f4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 13 00:03:43.617057 containerd[1467]: time="2025-05-13T00:03:43.617008303Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/nginx@sha256:beabce8f1782671ba500ddff99dd260fbf9c5ec85fb9c3162e35a3c40bafd023\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 13 00:03:43.618007 containerd[1467]: time="2025-05-13T00:03:43.617938949Z" level=info msg="Pulled image \"ghcr.io/flatcar/nginx:latest\" with image id \"sha256:e8b1cb61bd96acc3ff3c695318c9cc691213d532eee3731d038af92816fcb5f4\", repo tag \"ghcr.io/flatcar/nginx:latest\", repo digest \"ghcr.io/flatcar/nginx@sha256:beabce8f1782671ba500ddff99dd260fbf9c5ec85fb9c3162e35a3c40bafd023\", size \"69948737\" in 2.652319685s" May 13 00:03:43.618007 containerd[1467]: time="2025-05-13T00:03:43.617967501Z" level=info msg="PullImage \"ghcr.io/flatcar/nginx:latest\" returns image reference \"sha256:e8b1cb61bd96acc3ff3c695318c9cc691213d532eee3731d038af92816fcb5f4\"" May 13 00:03:43.626234 containerd[1467]: time="2025-05-13T00:03:43.626106473Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.29.3\"" May 13 00:03:43.627423 containerd[1467]: time="2025-05-13T00:03:43.627291535Z" level=info msg="CreateContainer within sandbox \"d8180f61f058f0d7c5efa17c6b7572f8f64cfc6cecfba211ddab21e0f6916888\" for container &ContainerMetadata{Name:nginx,Attempt:0,}" May 13 00:03:43.646767 containerd[1467]: time="2025-05-13T00:03:43.646712392Z" 
level=info msg="CreateContainer within sandbox \"d8180f61f058f0d7c5efa17c6b7572f8f64cfc6cecfba211ddab21e0f6916888\" for &ContainerMetadata{Name:nginx,Attempt:0,} returns container id \"968791192f59fc2546fd1e384235cf4ef6748bee6a2d2ce080cc0e1e000e0482\"" May 13 00:03:43.647343 containerd[1467]: time="2025-05-13T00:03:43.647301888Z" level=info msg="StartContainer for \"968791192f59fc2546fd1e384235cf4ef6748bee6a2d2ce080cc0e1e000e0482\"" May 13 00:03:43.728335 systemd[1]: Started cri-containerd-968791192f59fc2546fd1e384235cf4ef6748bee6a2d2ce080cc0e1e000e0482.scope - libcontainer container 968791192f59fc2546fd1e384235cf4ef6748bee6a2d2ce080cc0e1e000e0482. May 13 00:03:43.776944 containerd[1467]: time="2025-05-13T00:03:43.776072506Z" level=info msg="StartContainer for \"968791192f59fc2546fd1e384235cf4ef6748bee6a2d2ce080cc0e1e000e0482\" returns successfully" May 13 00:03:44.066609 kubelet[1775]: E0513 00:03:44.066490 1775 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 13 00:03:44.485421 containerd[1467]: time="2025-05-13T00:03:44.484845925Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi:v3.29.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 13 00:03:44.485421 containerd[1467]: time="2025-05-13T00:03:44.485342108Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.29.3: active requests=0, bytes read=7474935" May 13 00:03:44.486531 containerd[1467]: time="2025-05-13T00:03:44.486480926Z" level=info msg="ImageCreate event name:\"sha256:15faf29e8b518d846c91c15785ff89e783d356ea0f2b22826f47a556ea32645b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 13 00:03:44.488672 containerd[1467]: time="2025-05-13T00:03:44.488321086Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi@sha256:72455a36febc7c56ec8881007f4805caed5764026a0694e4f86a2503209b2d31\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 13 00:03:44.489132 containerd[1467]: time="2025-05-13T00:03:44.489100347Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/csi:v3.29.3\" with image id \"sha256:15faf29e8b518d846c91c15785ff89e783d356ea0f2b22826f47a556ea32645b\", repo tag \"ghcr.io/flatcar/calico/csi:v3.29.3\", repo digest \"ghcr.io/flatcar/calico/csi@sha256:72455a36febc7c56ec8881007f4805caed5764026a0694e4f86a2503209b2d31\", size \"8844117\" in 862.955416ms" May 13 00:03:44.489241 containerd[1467]: time="2025-05-13T00:03:44.489133934Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.29.3\" returns image reference \"sha256:15faf29e8b518d846c91c15785ff89e783d356ea0f2b22826f47a556ea32645b\"" May 13 00:03:44.496622 containerd[1467]: time="2025-05-13T00:03:44.496590145Z" level=info msg="CreateContainer within sandbox \"4236371613fbe6d453910e50ab54eeff880e4687d7b78c8a843ee74501a880c7\" for container &ContainerMetadata{Name:calico-csi,Attempt:0,}" May 13 00:03:44.506777 containerd[1467]: time="2025-05-13T00:03:44.506734469Z" level=info msg="CreateContainer within sandbox \"4236371613fbe6d453910e50ab54eeff880e4687d7b78c8a843ee74501a880c7\" for &ContainerMetadata{Name:calico-csi,Attempt:0,} returns container id \"eccee0805f41722668b76129fa43f3afccfa7ba779f5686c551fdaf762331980\"" May 13 00:03:44.507350 containerd[1467]: time="2025-05-13T00:03:44.507309729Z" level=info msg="StartContainer for \"eccee0805f41722668b76129fa43f3afccfa7ba779f5686c551fdaf762331980\"" May 13 00:03:44.529288 systemd[1]: Started 
cri-containerd-eccee0805f41722668b76129fa43f3afccfa7ba779f5686c551fdaf762331980.scope - libcontainer container eccee0805f41722668b76129fa43f3afccfa7ba779f5686c551fdaf762331980. May 13 00:03:44.561424 containerd[1467]: time="2025-05-13T00:03:44.561309460Z" level=info msg="StartContainer for \"eccee0805f41722668b76129fa43f3afccfa7ba779f5686c551fdaf762331980\" returns successfully" May 13 00:03:44.564468 containerd[1467]: time="2025-05-13T00:03:44.564364799Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.3\"" May 13 00:03:45.066670 kubelet[1775]: E0513 00:03:45.066625 1775 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 13 00:03:45.534864 containerd[1467]: time="2025-05-13T00:03:45.534747073Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 13 00:03:45.535871 containerd[1467]: time="2025-05-13T00:03:45.535766178Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.29.3: active requests=0, bytes read=13124299" May 13 00:03:45.536529 containerd[1467]: time="2025-05-13T00:03:45.536456645Z" level=info msg="ImageCreate event name:\"sha256:a91b1f00752edc175f270a01b33683fa80818734aa2274388785eaf3364315dc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 13 00:03:45.539188 containerd[1467]: time="2025-05-13T00:03:45.538955100Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar@sha256:3f15090a9bb45773d1fd019455ec3d3f3746f3287c35d8013e497b38d8237324\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 13 00:03:45.547077 containerd[1467]: time="2025-05-13T00:03:45.547017352Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.3\" with image id \"sha256:a91b1f00752edc175f270a01b33683fa80818734aa2274388785eaf3364315dc\", repo tag \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.3\", repo digest \"ghcr.io/flatcar/calico/node-driver-registrar@sha256:3f15090a9bb45773d1fd019455ec3d3f3746f3287c35d8013e497b38d8237324\", size \"14493433\" in 975.681983ms" May 13 00:03:45.547077 containerd[1467]: time="2025-05-13T00:03:45.547074309Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.3\" returns image reference \"sha256:a91b1f00752edc175f270a01b33683fa80818734aa2274388785eaf3364315dc\"" May 13 00:03:45.551667 containerd[1467]: time="2025-05-13T00:03:45.551595756Z" level=info msg="CreateContainer within sandbox \"4236371613fbe6d453910e50ab54eeff880e4687d7b78c8a843ee74501a880c7\" for container &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,}" May 13 00:03:45.567970 containerd[1467]: time="2025-05-13T00:03:45.567922124Z" level=info msg="CreateContainer within sandbox \"4236371613fbe6d453910e50ab54eeff880e4687d7b78c8a843ee74501a880c7\" for &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,} returns container id \"3bc6a2594a6aae2d4092cbbb7845d4326d9d57082ec77e15e1b340a8f4266972\"" May 13 00:03:45.568396 containerd[1467]: time="2025-05-13T00:03:45.568369748Z" level=info msg="StartContainer for \"3bc6a2594a6aae2d4092cbbb7845d4326d9d57082ec77e15e1b340a8f4266972\"" May 13 00:03:45.605343 systemd[1]: Started cri-containerd-3bc6a2594a6aae2d4092cbbb7845d4326d9d57082ec77e15e1b340a8f4266972.scope - libcontainer container 3bc6a2594a6aae2d4092cbbb7845d4326d9d57082ec77e15e1b340a8f4266972. 
May 13 00:03:45.709744 containerd[1467]: time="2025-05-13T00:03:45.709608746Z" level=info msg="StartContainer for \"3bc6a2594a6aae2d4092cbbb7845d4326d9d57082ec77e15e1b340a8f4266972\" returns successfully" May 13 00:03:46.067500 kubelet[1775]: E0513 00:03:46.067452 1775 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 13 00:03:46.312334 kubelet[1775]: I0513 00:03:46.312233 1775 csi_plugin.go:100] kubernetes.io/csi: Trying to validate a new CSI Driver with name: csi.tigera.io endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock versions: 1.0.0 May 13 00:03:46.312334 kubelet[1775]: I0513 00:03:46.312267 1775 csi_plugin.go:113] kubernetes.io/csi: Register new plugin with name: csi.tigera.io at endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock May 13 00:03:46.575679 kubelet[1775]: I0513 00:03:46.575624 1775 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="default/nginx-deployment-85f456d6dd-47jff" podStartSLOduration=3.914778364 podStartE2EDuration="6.575607244s" podCreationTimestamp="2025-05-13 00:03:40 +0000 UTC" firstStartedPulling="2025-05-13 00:03:40.965055324 +0000 UTC m=+13.319984619" lastFinishedPulling="2025-05-13 00:03:43.625884164 +0000 UTC m=+15.980813499" observedRunningTime="2025-05-13 00:03:44.564366796 +0000 UTC m=+16.919296091" watchObservedRunningTime="2025-05-13 00:03:46.575607244 +0000 UTC m=+18.930536539" May 13 00:03:47.068124 kubelet[1775]: E0513 00:03:47.068007 1775 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 13 00:03:48.068567 kubelet[1775]: E0513 00:03:48.068515 1775 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 13 00:03:48.134616 kubelet[1775]: I0513 00:03:48.134557 1775 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/csi-node-driver-b6gpp" podStartSLOduration=15.105442869 podStartE2EDuration="19.134539352s" podCreationTimestamp="2025-05-13 00:03:29 +0000 UTC" firstStartedPulling="2025-05-13 00:03:41.518618686 +0000 UTC m=+13.873547981" lastFinishedPulling="2025-05-13 00:03:45.547715169 +0000 UTC m=+17.902644464" observedRunningTime="2025-05-13 00:03:46.576037772 +0000 UTC m=+18.930967067" watchObservedRunningTime="2025-05-13 00:03:48.134539352 +0000 UTC m=+20.489468647" May 13 00:03:48.134787 kubelet[1775]: I0513 00:03:48.134772 1775 topology_manager.go:215] "Topology Admit Handler" podUID="5adc5e95-9193-4c51-88be-a69fb8829623" podNamespace="default" podName="nfs-server-provisioner-0" May 13 00:03:48.141906 systemd[1]: Created slice kubepods-besteffort-pod5adc5e95_9193_4c51_88be_a69fb8829623.slice - libcontainer container kubepods-besteffort-pod5adc5e95_9193_4c51_88be_a69fb8829623.slice. 
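The pod_startup_latency_tracker lines above report two figures per pod: podStartE2EDuration is observedRunningTime minus podCreationTimestamp, and podStartSLOduration is roughly the same window with the image-pull time (lastFinishedPulling minus firstStartedPulling) deducted. Re-deriving the nginx pod's numbers from the timestamps in that record (Go standard library; the layout string is an assumption matching how the values are printed above):

package main

import (
    "fmt"
    "time"
)

const layout = "2006-01-02 15:04:05 -0700 MST"

func mustParse(s string) time.Time {
    t, err := time.Parse(layout, s)
    if err != nil {
        panic(err)
    }
    return t
}

func main() {
    // Values copied from the nginx-deployment-85f456d6dd-47jff record above.
    created := mustParse("2025-05-13 00:03:40 +0000 UTC")
    running := mustParse("2025-05-13 00:03:46.575607244 +0000 UTC")
    firstPull := mustParse("2025-05-13 00:03:40.965055324 +0000 UTC")
    lastPull := mustParse("2025-05-13 00:03:43.625884164 +0000 UTC")

    fmt.Println("e2e start duration:", running.Sub(created))           // 6.575607244s
    fmt.Println("time spent pulling images:", lastPull.Sub(firstPull)) // 2.66082884s
}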
May 13 00:03:48.185432 kubelet[1775]: I0513 00:03:48.185351 1775 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"data\" (UniqueName: \"kubernetes.io/empty-dir/5adc5e95-9193-4c51-88be-a69fb8829623-data\") pod \"nfs-server-provisioner-0\" (UID: \"5adc5e95-9193-4c51-88be-a69fb8829623\") " pod="default/nfs-server-provisioner-0" May 13 00:03:48.185432 kubelet[1775]: I0513 00:03:48.185403 1775 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zr99n\" (UniqueName: \"kubernetes.io/projected/5adc5e95-9193-4c51-88be-a69fb8829623-kube-api-access-zr99n\") pod \"nfs-server-provisioner-0\" (UID: \"5adc5e95-9193-4c51-88be-a69fb8829623\") " pod="default/nfs-server-provisioner-0" May 13 00:03:48.446357 containerd[1467]: time="2025-05-13T00:03:48.445573113Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nfs-server-provisioner-0,Uid:5adc5e95-9193-4c51-88be-a69fb8829623,Namespace:default,Attempt:0,}" May 13 00:03:48.593635 systemd-networkd[1394]: cali60e51b789ff: Link UP May 13 00:03:48.593787 systemd-networkd[1394]: cali60e51b789ff: Gained carrier May 13 00:03:48.607752 containerd[1467]: 2025-05-13 00:03:48.489 [INFO][3072] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist May 13 00:03:48.607752 containerd[1467]: 2025-05-13 00:03:48.506 [INFO][3072] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {10.0.0.131-k8s-nfs--server--provisioner--0-eth0 nfs-server-provisioner- default 5adc5e95-9193-4c51-88be-a69fb8829623 1093 0 2025-05-13 00:03:48 +0000 UTC map[app:nfs-server-provisioner apps.kubernetes.io/pod-index:0 chart:nfs-server-provisioner-1.8.0 controller-revision-hash:nfs-server-provisioner-d5cbb7f57 heritage:Helm projectcalico.org/namespace:default projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:nfs-server-provisioner release:nfs-server-provisioner statefulset.kubernetes.io/pod-name:nfs-server-provisioner-0] map[] [] [] []} {k8s 10.0.0.131 nfs-server-provisioner-0 eth0 nfs-server-provisioner [] [] [kns.default ksa.default.nfs-server-provisioner] cali60e51b789ff [{nfs TCP 2049 0 } {nfs-udp UDP 2049 0 } {nlockmgr TCP 32803 0 } {nlockmgr-udp UDP 32803 0 } {mountd TCP 20048 0 } {mountd-udp UDP 20048 0 } {rquotad TCP 875 0 } {rquotad-udp UDP 875 0 } {rpcbind TCP 111 0 } {rpcbind-udp UDP 111 0 } {statd TCP 662 0 } {statd-udp UDP 662 0 }] []}} ContainerID="80e813e143b147377913b9f7cd31e3cd5ba7568aaf2468989114c9973e24b899" Namespace="default" Pod="nfs-server-provisioner-0" WorkloadEndpoint="10.0.0.131-k8s-nfs--server--provisioner--0-" May 13 00:03:48.607752 containerd[1467]: 2025-05-13 00:03:48.506 [INFO][3072] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="80e813e143b147377913b9f7cd31e3cd5ba7568aaf2468989114c9973e24b899" Namespace="default" Pod="nfs-server-provisioner-0" WorkloadEndpoint="10.0.0.131-k8s-nfs--server--provisioner--0-eth0" May 13 00:03:48.607752 containerd[1467]: 2025-05-13 00:03:48.533 [INFO][3086] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="80e813e143b147377913b9f7cd31e3cd5ba7568aaf2468989114c9973e24b899" HandleID="k8s-pod-network.80e813e143b147377913b9f7cd31e3cd5ba7568aaf2468989114c9973e24b899" Workload="10.0.0.131-k8s-nfs--server--provisioner--0-eth0" May 13 00:03:48.607752 containerd[1467]: 2025-05-13 00:03:48.548 [INFO][3086] ipam/ipam_plugin.go 265: Auto assigning IP 
ContainerID="80e813e143b147377913b9f7cd31e3cd5ba7568aaf2468989114c9973e24b899" HandleID="k8s-pod-network.80e813e143b147377913b9f7cd31e3cd5ba7568aaf2468989114c9973e24b899" Workload="10.0.0.131-k8s-nfs--server--provisioner--0-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40002f3330), Attrs:map[string]string{"namespace":"default", "node":"10.0.0.131", "pod":"nfs-server-provisioner-0", "timestamp":"2025-05-13 00:03:48.533582555 +0000 UTC"}, Hostname:"10.0.0.131", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} May 13 00:03:48.607752 containerd[1467]: 2025-05-13 00:03:48.548 [INFO][3086] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 13 00:03:48.607752 containerd[1467]: 2025-05-13 00:03:48.548 [INFO][3086] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 13 00:03:48.607752 containerd[1467]: 2025-05-13 00:03:48.548 [INFO][3086] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host '10.0.0.131' May 13 00:03:48.607752 containerd[1467]: 2025-05-13 00:03:48.551 [INFO][3086] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.80e813e143b147377913b9f7cd31e3cd5ba7568aaf2468989114c9973e24b899" host="10.0.0.131" May 13 00:03:48.607752 containerd[1467]: 2025-05-13 00:03:48.556 [INFO][3086] ipam/ipam.go 372: Looking up existing affinities for host host="10.0.0.131" May 13 00:03:48.607752 containerd[1467]: 2025-05-13 00:03:48.564 [INFO][3086] ipam/ipam.go 489: Trying affinity for 192.168.18.0/26 host="10.0.0.131" May 13 00:03:48.607752 containerd[1467]: 2025-05-13 00:03:48.570 [INFO][3086] ipam/ipam.go 155: Attempting to load block cidr=192.168.18.0/26 host="10.0.0.131" May 13 00:03:48.607752 containerd[1467]: 2025-05-13 00:03:48.573 [INFO][3086] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.18.0/26 host="10.0.0.131" May 13 00:03:48.607752 containerd[1467]: 2025-05-13 00:03:48.573 [INFO][3086] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.18.0/26 handle="k8s-pod-network.80e813e143b147377913b9f7cd31e3cd5ba7568aaf2468989114c9973e24b899" host="10.0.0.131" May 13 00:03:48.607752 containerd[1467]: 2025-05-13 00:03:48.576 [INFO][3086] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.80e813e143b147377913b9f7cd31e3cd5ba7568aaf2468989114c9973e24b899 May 13 00:03:48.607752 containerd[1467]: 2025-05-13 00:03:48.582 [INFO][3086] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.18.0/26 handle="k8s-pod-network.80e813e143b147377913b9f7cd31e3cd5ba7568aaf2468989114c9973e24b899" host="10.0.0.131" May 13 00:03:48.607752 containerd[1467]: 2025-05-13 00:03:48.588 [INFO][3086] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.18.3/26] block=192.168.18.0/26 handle="k8s-pod-network.80e813e143b147377913b9f7cd31e3cd5ba7568aaf2468989114c9973e24b899" host="10.0.0.131" May 13 00:03:48.607752 containerd[1467]: 2025-05-13 00:03:48.588 [INFO][3086] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.18.3/26] handle="k8s-pod-network.80e813e143b147377913b9f7cd31e3cd5ba7568aaf2468989114c9973e24b899" host="10.0.0.131" May 13 00:03:48.607752 containerd[1467]: 2025-05-13 00:03:48.588 [INFO][3086] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
May 13 00:03:48.607752 containerd[1467]: 2025-05-13 00:03:48.588 [INFO][3086] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.18.3/26] IPv6=[] ContainerID="80e813e143b147377913b9f7cd31e3cd5ba7568aaf2468989114c9973e24b899" HandleID="k8s-pod-network.80e813e143b147377913b9f7cd31e3cd5ba7568aaf2468989114c9973e24b899" Workload="10.0.0.131-k8s-nfs--server--provisioner--0-eth0" May 13 00:03:48.608456 containerd[1467]: 2025-05-13 00:03:48.590 [INFO][3072] cni-plugin/k8s.go 386: Populated endpoint ContainerID="80e813e143b147377913b9f7cd31e3cd5ba7568aaf2468989114c9973e24b899" Namespace="default" Pod="nfs-server-provisioner-0" WorkloadEndpoint="10.0.0.131-k8s-nfs--server--provisioner--0-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"10.0.0.131-k8s-nfs--server--provisioner--0-eth0", GenerateName:"nfs-server-provisioner-", Namespace:"default", SelfLink:"", UID:"5adc5e95-9193-4c51-88be-a69fb8829623", ResourceVersion:"1093", Generation:0, CreationTimestamp:time.Date(2025, time.May, 13, 0, 3, 48, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app":"nfs-server-provisioner", "apps.kubernetes.io/pod-index":"0", "chart":"nfs-server-provisioner-1.8.0", "controller-revision-hash":"nfs-server-provisioner-d5cbb7f57", "heritage":"Helm", "projectcalico.org/namespace":"default", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"nfs-server-provisioner", "release":"nfs-server-provisioner", "statefulset.kubernetes.io/pod-name":"nfs-server-provisioner-0"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"10.0.0.131", ContainerID:"", Pod:"nfs-server-provisioner-0", Endpoint:"eth0", ServiceAccountName:"nfs-server-provisioner", IPNetworks:[]string{"192.168.18.3/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.default", "ksa.default.nfs-server-provisioner"}, InterfaceName:"cali60e51b789ff", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"nfs", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x801, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"nfs-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x801, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"nlockmgr", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x8023, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"nlockmgr-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x8023, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"mountd", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x4e50, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"mountd-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x4e50, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"rquotad", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x36b, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"rquotad-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x36b, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"rpcbind", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x6f, HostPort:0x0, HostIP:""}, 
v3.WorkloadEndpointPort{Name:"rpcbind-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x6f, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"statd", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x296, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"statd-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x296, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} May 13 00:03:48.608456 containerd[1467]: 2025-05-13 00:03:48.590 [INFO][3072] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.18.3/32] ContainerID="80e813e143b147377913b9f7cd31e3cd5ba7568aaf2468989114c9973e24b899" Namespace="default" Pod="nfs-server-provisioner-0" WorkloadEndpoint="10.0.0.131-k8s-nfs--server--provisioner--0-eth0" May 13 00:03:48.608456 containerd[1467]: 2025-05-13 00:03:48.590 [INFO][3072] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali60e51b789ff ContainerID="80e813e143b147377913b9f7cd31e3cd5ba7568aaf2468989114c9973e24b899" Namespace="default" Pod="nfs-server-provisioner-0" WorkloadEndpoint="10.0.0.131-k8s-nfs--server--provisioner--0-eth0" May 13 00:03:48.608456 containerd[1467]: 2025-05-13 00:03:48.592 [INFO][3072] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="80e813e143b147377913b9f7cd31e3cd5ba7568aaf2468989114c9973e24b899" Namespace="default" Pod="nfs-server-provisioner-0" WorkloadEndpoint="10.0.0.131-k8s-nfs--server--provisioner--0-eth0" May 13 00:03:48.608644 containerd[1467]: 2025-05-13 00:03:48.592 [INFO][3072] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="80e813e143b147377913b9f7cd31e3cd5ba7568aaf2468989114c9973e24b899" Namespace="default" Pod="nfs-server-provisioner-0" WorkloadEndpoint="10.0.0.131-k8s-nfs--server--provisioner--0-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"10.0.0.131-k8s-nfs--server--provisioner--0-eth0", GenerateName:"nfs-server-provisioner-", Namespace:"default", SelfLink:"", UID:"5adc5e95-9193-4c51-88be-a69fb8829623", ResourceVersion:"1093", Generation:0, CreationTimestamp:time.Date(2025, time.May, 13, 0, 3, 48, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app":"nfs-server-provisioner", "apps.kubernetes.io/pod-index":"0", "chart":"nfs-server-provisioner-1.8.0", "controller-revision-hash":"nfs-server-provisioner-d5cbb7f57", "heritage":"Helm", "projectcalico.org/namespace":"default", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"nfs-server-provisioner", "release":"nfs-server-provisioner", "statefulset.kubernetes.io/pod-name":"nfs-server-provisioner-0"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"10.0.0.131", ContainerID:"80e813e143b147377913b9f7cd31e3cd5ba7568aaf2468989114c9973e24b899", Pod:"nfs-server-provisioner-0", Endpoint:"eth0", ServiceAccountName:"nfs-server-provisioner", IPNetworks:[]string{"192.168.18.3/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.default", "ksa.default.nfs-server-provisioner"}, InterfaceName:"cali60e51b789ff", MAC:"0e:71:63:17:17:48", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"nfs", Protocol:numorstring.Protocol{Type:1, 
NumVal:0x0, StrVal:"TCP"}, Port:0x801, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"nfs-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x801, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"nlockmgr", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x8023, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"nlockmgr-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x8023, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"mountd", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x4e50, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"mountd-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x4e50, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"rquotad", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x36b, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"rquotad-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x36b, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"rpcbind", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x6f, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"rpcbind-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x6f, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"statd", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x296, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"statd-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x296, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} May 13 00:03:48.608644 containerd[1467]: 2025-05-13 00:03:48.606 [INFO][3072] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="80e813e143b147377913b9f7cd31e3cd5ba7568aaf2468989114c9973e24b899" Namespace="default" Pod="nfs-server-provisioner-0" WorkloadEndpoint="10.0.0.131-k8s-nfs--server--provisioner--0-eth0" May 13 00:03:48.633918 containerd[1467]: time="2025-05-13T00:03:48.633801082Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 13 00:03:48.633918 containerd[1467]: time="2025-05-13T00:03:48.633859252Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 13 00:03:48.633918 containerd[1467]: time="2025-05-13T00:03:48.633876831Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 13 00:03:48.634191 containerd[1467]: time="2025-05-13T00:03:48.634042591Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 13 00:03:48.658357 systemd[1]: Started cri-containerd-80e813e143b147377913b9f7cd31e3cd5ba7568aaf2468989114c9973e24b899.scope - libcontainer container 80e813e143b147377913b9f7cd31e3cd5ba7568aaf2468989114c9973e24b899. 
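The WorkloadEndpointPort entries above record the nfs-server-provisioner service ports in hexadecimal (Port:0x801, 0x8023, and so on). A minimal Go sketch, using only the names and hex values from the log entry, that decodes them back to the usual decimal port numbers:

package main

import "fmt"

func main() {
	// Port values exactly as they appear in the WorkloadEndpointPort dump above.
	ports := []struct {
		name string
		port uint16
	}{
		{"nfs", 0x801},       // 2049
		{"nlockmgr", 0x8023}, // 32803
		{"mountd", 0x4e50},   // 20048
		{"rquotad", 0x36b},   // 875
		{"rpcbind", 0x6f},    // 111
		{"statd", 0x296},     // 662
	}
	for _, p := range ports {
		fmt.Printf("%-8s 0x%04x -> %d\n", p.name, p.port, p.port)
	}
}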
May 13 00:03:48.679710 systemd-resolved[1310]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address May 13 00:03:48.757054 containerd[1467]: time="2025-05-13T00:03:48.756915301Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nfs-server-provisioner-0,Uid:5adc5e95-9193-4c51-88be-a69fb8829623,Namespace:default,Attempt:0,} returns sandbox id \"80e813e143b147377913b9f7cd31e3cd5ba7568aaf2468989114c9973e24b899\"" May 13 00:03:48.758805 containerd[1467]: time="2025-05-13T00:03:48.758754798Z" level=info msg="PullImage \"registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8\"" May 13 00:03:49.054290 kubelet[1775]: E0513 00:03:49.054165 1775 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 13 00:03:49.069547 kubelet[1775]: E0513 00:03:49.069505 1775 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 13 00:03:50.070634 kubelet[1775]: E0513 00:03:50.070582 1775 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 13 00:03:50.180182 kernel: bpftool[3202]: memfd_create() called without MFD_EXEC or MFD_NOEXEC_SEAL set May 13 00:03:50.263863 systemd-networkd[1394]: cali60e51b789ff: Gained IPv6LL May 13 00:03:50.386992 systemd-networkd[1394]: vxlan.calico: Link UP May 13 00:03:50.387001 systemd-networkd[1394]: vxlan.calico: Gained carrier May 13 00:03:51.071638 kubelet[1775]: E0513 00:03:51.071415 1775 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 13 00:03:51.283542 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3007793195.mount: Deactivated successfully. 
May 13 00:03:52.071853 kubelet[1775]: E0513 00:03:52.071816 1775 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 13 00:03:52.098018 kubelet[1775]: I0513 00:03:52.097964 1775 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" May 13 00:03:52.098757 kubelet[1775]: E0513 00:03:52.098732 1775 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 13 00:03:52.182278 systemd-networkd[1394]: vxlan.calico: Gained IPv6LL May 13 00:03:52.801348 containerd[1467]: time="2025-05-13T00:03:52.801289052Z" level=info msg="ImageCreate event name:\"registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 13 00:03:52.802071 containerd[1467]: time="2025-05-13T00:03:52.802017812Z" level=info msg="stop pulling image registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8: active requests=0, bytes read=87373625" May 13 00:03:52.802798 containerd[1467]: time="2025-05-13T00:03:52.802759600Z" level=info msg="ImageCreate event name:\"sha256:5a42a519e0a8cf95c3c5f18f767c58c8c8b072aaea0a26e5e47a6f206c7df685\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 13 00:03:52.810452 containerd[1467]: time="2025-05-13T00:03:52.810417413Z" level=info msg="ImageCreate event name:\"registry.k8s.io/sig-storage/nfs-provisioner@sha256:c825f3d5e28bde099bd7a3daace28772d412c9157ad47fa752a9ad0baafc118d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 13 00:03:52.811612 containerd[1467]: time="2025-05-13T00:03:52.811485257Z" level=info msg="Pulled image \"registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8\" with image id \"sha256:5a42a519e0a8cf95c3c5f18f767c58c8c8b072aaea0a26e5e47a6f206c7df685\", repo tag \"registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8\", repo digest \"registry.k8s.io/sig-storage/nfs-provisioner@sha256:c825f3d5e28bde099bd7a3daace28772d412c9157ad47fa752a9ad0baafc118d\", size \"87371201\" in 4.0526933s" May 13 00:03:52.811612 containerd[1467]: time="2025-05-13T00:03:52.811522462Z" level=info msg="PullImage \"registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8\" returns image reference \"sha256:5a42a519e0a8cf95c3c5f18f767c58c8c8b072aaea0a26e5e47a6f206c7df685\"" May 13 00:03:52.820067 containerd[1467]: time="2025-05-13T00:03:52.820024127Z" level=info msg="CreateContainer within sandbox \"80e813e143b147377913b9f7cd31e3cd5ba7568aaf2468989114c9973e24b899\" for container &ContainerMetadata{Name:nfs-server-provisioner,Attempt:0,}" May 13 00:03:52.836629 containerd[1467]: time="2025-05-13T00:03:52.836451716Z" level=info msg="CreateContainer within sandbox \"80e813e143b147377913b9f7cd31e3cd5ba7568aaf2468989114c9973e24b899\" for &ContainerMetadata{Name:nfs-server-provisioner,Attempt:0,} returns container id \"2371e8507d0c53c550edf8301dffe5941b7a5ce94eb7a18b61b2ae0fb15dfede\"" May 13 00:03:52.837990 containerd[1467]: time="2025-05-13T00:03:52.837047120Z" level=info msg="StartContainer for \"2371e8507d0c53c550edf8301dffe5941b7a5ce94eb7a18b61b2ae0fb15dfede\"" May 13 00:03:52.874341 systemd[1]: Started cri-containerd-2371e8507d0c53c550edf8301dffe5941b7a5ce94eb7a18b61b2ae0fb15dfede.scope - libcontainer container 2371e8507d0c53c550edf8301dffe5941b7a5ce94eb7a18b61b2ae0fb15dfede. 
May 13 00:03:52.925496 containerd[1467]: time="2025-05-13T00:03:52.925431035Z" level=info msg="StartContainer for \"2371e8507d0c53c550edf8301dffe5941b7a5ce94eb7a18b61b2ae0fb15dfede\" returns successfully" May 13 00:03:53.072076 kubelet[1775]: E0513 00:03:53.071962 1775 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 13 00:03:53.599692 kubelet[1775]: I0513 00:03:53.599545 1775 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="default/nfs-server-provisioner-0" podStartSLOduration=1.545257254 podStartE2EDuration="5.599529494s" podCreationTimestamp="2025-05-13 00:03:48 +0000 UTC" firstStartedPulling="2025-05-13 00:03:48.758246572 +0000 UTC m=+21.113175867" lastFinishedPulling="2025-05-13 00:03:52.812518812 +0000 UTC m=+25.167448107" observedRunningTime="2025-05-13 00:03:53.599242105 +0000 UTC m=+25.954171400" watchObservedRunningTime="2025-05-13 00:03:53.599529494 +0000 UTC m=+25.954458789" May 13 00:03:54.072701 kubelet[1775]: E0513 00:03:54.072568 1775 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 13 00:03:55.073576 kubelet[1775]: E0513 00:03:55.073528 1775 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 13 00:03:56.074686 kubelet[1775]: E0513 00:03:56.074636 1775 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 13 00:03:57.075806 kubelet[1775]: E0513 00:03:57.075759 1775 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 13 00:03:58.076760 kubelet[1775]: E0513 00:03:58.076704 1775 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 13 00:03:59.077424 kubelet[1775]: E0513 00:03:59.077374 1775 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 13 00:04:00.077999 kubelet[1775]: E0513 00:04:00.077954 1775 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 13 00:04:01.079106 kubelet[1775]: E0513 00:04:01.078563 1775 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 13 00:04:02.079017 kubelet[1775]: E0513 00:04:02.078952 1775 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 13 00:04:03.047142 kubelet[1775]: I0513 00:04:03.047094 1775 topology_manager.go:215] "Topology Admit Handler" podUID="5fdae09a-c7cd-4b7b-b362-00a765f0f5b8" podNamespace="default" podName="test-pod-1" May 13 00:04:03.054966 systemd[1]: Created slice kubepods-besteffort-pod5fdae09a_c7cd_4b7b_b362_00a765f0f5b8.slice - libcontainer container kubepods-besteffort-pod5fdae09a_c7cd_4b7b_b362_00a765f0f5b8.slice. 
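The pod_startup_latency_tracker entry above reports podStartSLOduration=1.545257254 and podStartE2EDuration=5.599529494s for nfs-server-provisioner-0. A small Go sketch reproducing that arithmetic from the timestamps in the same entry; the assumption that the SLO figure is the end-to-end duration minus the image-pull window is inferred from these numbers matching, not taken from kubelet source:

package main

import (
	"fmt"
	"time"
)

func main() {
	// Timestamps copied from the pod_startup_latency_tracker entry above.
	layout := "2006-01-02 15:04:05.999999999 -0700 MST"
	parse := func(s string) time.Time {
		t, err := time.Parse(layout, s)
		if err != nil {
			panic(err)
		}
		return t
	}
	created := parse("2025-05-13 00:03:48 +0000 UTC")
	firstPull := parse("2025-05-13 00:03:48.758246572 +0000 UTC")
	lastPull := parse("2025-05-13 00:03:52.812518812 +0000 UTC")
	running := parse("2025-05-13 00:03:53.599529494 +0000 UTC")

	e2e := running.Sub(created)          // 5.599529494s (podStartE2EDuration)
	slo := e2e - lastPull.Sub(firstPull) // 1.545257254s (E2E minus the image-pull window)
	fmt.Println(e2e, slo)
}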
May 13 00:04:03.075659 kubelet[1775]: I0513 00:04:03.075625 1775 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-185d70be-c28a-4bde-9c4a-7bf5a77e58b7\" (UniqueName: \"kubernetes.io/nfs/5fdae09a-c7cd-4b7b-b362-00a765f0f5b8-pvc-185d70be-c28a-4bde-9c4a-7bf5a77e58b7\") pod \"test-pod-1\" (UID: \"5fdae09a-c7cd-4b7b-b362-00a765f0f5b8\") " pod="default/test-pod-1" May 13 00:04:03.075765 kubelet[1775]: I0513 00:04:03.075665 1775 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j2l9l\" (UniqueName: \"kubernetes.io/projected/5fdae09a-c7cd-4b7b-b362-00a765f0f5b8-kube-api-access-j2l9l\") pod \"test-pod-1\" (UID: \"5fdae09a-c7cd-4b7b-b362-00a765f0f5b8\") " pod="default/test-pod-1" May 13 00:04:03.079755 kubelet[1775]: E0513 00:04:03.079720 1775 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 13 00:04:03.202187 kernel: FS-Cache: Loaded May 13 00:04:03.231930 kernel: RPC: Registered named UNIX socket transport module. May 13 00:04:03.232030 kernel: RPC: Registered udp transport module. May 13 00:04:03.232049 kernel: RPC: Registered tcp transport module. May 13 00:04:03.233234 kernel: RPC: Registered tcp-with-tls transport module. May 13 00:04:03.234173 kernel: RPC: Registered tcp NFSv4.1 backchannel transport module. May 13 00:04:03.415432 kernel: NFS: Registering the id_resolver key type May 13 00:04:03.415549 kernel: Key type id_resolver registered May 13 00:04:03.415586 kernel: Key type id_legacy registered May 13 00:04:03.444610 nfsidmap[3497]: nss_getpwnam: name 'root@nfs-server-provisioner.default.svc.cluster.local' does not map into domain 'localdomain' May 13 00:04:03.448209 nfsidmap[3500]: nss_name_to_gid: name 'root@nfs-server-provisioner.default.svc.cluster.local' does not map into domain 'localdomain' May 13 00:04:03.658527 containerd[1467]: time="2025-05-13T00:04:03.658480770Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:test-pod-1,Uid:5fdae09a-c7cd-4b7b-b362-00a765f0f5b8,Namespace:default,Attempt:0,}" May 13 00:04:03.799218 systemd-networkd[1394]: cali5ec59c6bf6e: Link UP May 13 00:04:03.800033 systemd-networkd[1394]: cali5ec59c6bf6e: Gained carrier May 13 00:04:03.808623 containerd[1467]: 2025-05-13 00:04:03.720 [INFO][3503] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {10.0.0.131-k8s-test--pod--1-eth0 default 5fdae09a-c7cd-4b7b-b362-00a765f0f5b8 1168 0 2025-05-13 00:03:48 +0000 UTC map[projectcalico.org/namespace:default projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:default] map[] [] [] []} {k8s 10.0.0.131 test-pod-1 eth0 default [] [] [kns.default ksa.default.default] cali5ec59c6bf6e [] []}} ContainerID="ee997dd1d544ab64cfcc666bbd772c8df6b695947d4e55fe311cdd3b60b623f0" Namespace="default" Pod="test-pod-1" WorkloadEndpoint="10.0.0.131-k8s-test--pod--1-" May 13 00:04:03.808623 containerd[1467]: 2025-05-13 00:04:03.720 [INFO][3503] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="ee997dd1d544ab64cfcc666bbd772c8df6b695947d4e55fe311cdd3b60b623f0" Namespace="default" Pod="test-pod-1" WorkloadEndpoint="10.0.0.131-k8s-test--pod--1-eth0" May 13 00:04:03.808623 containerd[1467]: 2025-05-13 00:04:03.755 [INFO][3518] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="ee997dd1d544ab64cfcc666bbd772c8df6b695947d4e55fe311cdd3b60b623f0" 
HandleID="k8s-pod-network.ee997dd1d544ab64cfcc666bbd772c8df6b695947d4e55fe311cdd3b60b623f0" Workload="10.0.0.131-k8s-test--pod--1-eth0" May 13 00:04:03.808623 containerd[1467]: 2025-05-13 00:04:03.767 [INFO][3518] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="ee997dd1d544ab64cfcc666bbd772c8df6b695947d4e55fe311cdd3b60b623f0" HandleID="k8s-pod-network.ee997dd1d544ab64cfcc666bbd772c8df6b695947d4e55fe311cdd3b60b623f0" Workload="10.0.0.131-k8s-test--pod--1-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x400004c650), Attrs:map[string]string{"namespace":"default", "node":"10.0.0.131", "pod":"test-pod-1", "timestamp":"2025-05-13 00:04:03.755410713 +0000 UTC"}, Hostname:"10.0.0.131", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} May 13 00:04:03.808623 containerd[1467]: 2025-05-13 00:04:03.767 [INFO][3518] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 13 00:04:03.808623 containerd[1467]: 2025-05-13 00:04:03.767 [INFO][3518] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 13 00:04:03.808623 containerd[1467]: 2025-05-13 00:04:03.767 [INFO][3518] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host '10.0.0.131' May 13 00:04:03.808623 containerd[1467]: 2025-05-13 00:04:03.769 [INFO][3518] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.ee997dd1d544ab64cfcc666bbd772c8df6b695947d4e55fe311cdd3b60b623f0" host="10.0.0.131" May 13 00:04:03.808623 containerd[1467]: 2025-05-13 00:04:03.774 [INFO][3518] ipam/ipam.go 372: Looking up existing affinities for host host="10.0.0.131" May 13 00:04:03.808623 containerd[1467]: 2025-05-13 00:04:03.778 [INFO][3518] ipam/ipam.go 489: Trying affinity for 192.168.18.0/26 host="10.0.0.131" May 13 00:04:03.808623 containerd[1467]: 2025-05-13 00:04:03.780 [INFO][3518] ipam/ipam.go 155: Attempting to load block cidr=192.168.18.0/26 host="10.0.0.131" May 13 00:04:03.808623 containerd[1467]: 2025-05-13 00:04:03.782 [INFO][3518] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.18.0/26 host="10.0.0.131" May 13 00:04:03.808623 containerd[1467]: 2025-05-13 00:04:03.783 [INFO][3518] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.18.0/26 handle="k8s-pod-network.ee997dd1d544ab64cfcc666bbd772c8df6b695947d4e55fe311cdd3b60b623f0" host="10.0.0.131" May 13 00:04:03.808623 containerd[1467]: 2025-05-13 00:04:03.784 [INFO][3518] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.ee997dd1d544ab64cfcc666bbd772c8df6b695947d4e55fe311cdd3b60b623f0 May 13 00:04:03.808623 containerd[1467]: 2025-05-13 00:04:03.789 [INFO][3518] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.18.0/26 handle="k8s-pod-network.ee997dd1d544ab64cfcc666bbd772c8df6b695947d4e55fe311cdd3b60b623f0" host="10.0.0.131" May 13 00:04:03.808623 containerd[1467]: 2025-05-13 00:04:03.795 [INFO][3518] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.18.4/26] block=192.168.18.0/26 handle="k8s-pod-network.ee997dd1d544ab64cfcc666bbd772c8df6b695947d4e55fe311cdd3b60b623f0" host="10.0.0.131" May 13 00:04:03.808623 containerd[1467]: 2025-05-13 00:04:03.795 [INFO][3518] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.18.4/26] handle="k8s-pod-network.ee997dd1d544ab64cfcc666bbd772c8df6b695947d4e55fe311cdd3b60b623f0" host="10.0.0.131" May 13 00:04:03.808623 containerd[1467]: 2025-05-13 
00:04:03.795 [INFO][3518] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 13 00:04:03.808623 containerd[1467]: 2025-05-13 00:04:03.795 [INFO][3518] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.18.4/26] IPv6=[] ContainerID="ee997dd1d544ab64cfcc666bbd772c8df6b695947d4e55fe311cdd3b60b623f0" HandleID="k8s-pod-network.ee997dd1d544ab64cfcc666bbd772c8df6b695947d4e55fe311cdd3b60b623f0" Workload="10.0.0.131-k8s-test--pod--1-eth0" May 13 00:04:03.808623 containerd[1467]: 2025-05-13 00:04:03.797 [INFO][3503] cni-plugin/k8s.go 386: Populated endpoint ContainerID="ee997dd1d544ab64cfcc666bbd772c8df6b695947d4e55fe311cdd3b60b623f0" Namespace="default" Pod="test-pod-1" WorkloadEndpoint="10.0.0.131-k8s-test--pod--1-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"10.0.0.131-k8s-test--pod--1-eth0", GenerateName:"", Namespace:"default", SelfLink:"", UID:"5fdae09a-c7cd-4b7b-b362-00a765f0f5b8", ResourceVersion:"1168", Generation:0, CreationTimestamp:time.Date(2025, time.May, 13, 0, 3, 48, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"projectcalico.org/namespace":"default", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"default"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"10.0.0.131", ContainerID:"", Pod:"test-pod-1", Endpoint:"eth0", ServiceAccountName:"default", IPNetworks:[]string{"192.168.18.4/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.default", "ksa.default.default"}, InterfaceName:"cali5ec59c6bf6e", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} May 13 00:04:03.809855 containerd[1467]: 2025-05-13 00:04:03.797 [INFO][3503] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.18.4/32] ContainerID="ee997dd1d544ab64cfcc666bbd772c8df6b695947d4e55fe311cdd3b60b623f0" Namespace="default" Pod="test-pod-1" WorkloadEndpoint="10.0.0.131-k8s-test--pod--1-eth0" May 13 00:04:03.809855 containerd[1467]: 2025-05-13 00:04:03.797 [INFO][3503] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali5ec59c6bf6e ContainerID="ee997dd1d544ab64cfcc666bbd772c8df6b695947d4e55fe311cdd3b60b623f0" Namespace="default" Pod="test-pod-1" WorkloadEndpoint="10.0.0.131-k8s-test--pod--1-eth0" May 13 00:04:03.809855 containerd[1467]: 2025-05-13 00:04:03.799 [INFO][3503] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="ee997dd1d544ab64cfcc666bbd772c8df6b695947d4e55fe311cdd3b60b623f0" Namespace="default" Pod="test-pod-1" WorkloadEndpoint="10.0.0.131-k8s-test--pod--1-eth0" May 13 00:04:03.809855 containerd[1467]: 2025-05-13 00:04:03.800 [INFO][3503] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="ee997dd1d544ab64cfcc666bbd772c8df6b695947d4e55fe311cdd3b60b623f0" Namespace="default" Pod="test-pod-1" WorkloadEndpoint="10.0.0.131-k8s-test--pod--1-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"10.0.0.131-k8s-test--pod--1-eth0", GenerateName:"", Namespace:"default", SelfLink:"", UID:"5fdae09a-c7cd-4b7b-b362-00a765f0f5b8", ResourceVersion:"1168", Generation:0, 
CreationTimestamp:time.Date(2025, time.May, 13, 0, 3, 48, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"projectcalico.org/namespace":"default", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"default"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"10.0.0.131", ContainerID:"ee997dd1d544ab64cfcc666bbd772c8df6b695947d4e55fe311cdd3b60b623f0", Pod:"test-pod-1", Endpoint:"eth0", ServiceAccountName:"default", IPNetworks:[]string{"192.168.18.4/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.default", "ksa.default.default"}, InterfaceName:"cali5ec59c6bf6e", MAC:"2a:f2:09:0b:0d:a9", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} May 13 00:04:03.809855 containerd[1467]: 2025-05-13 00:04:03.806 [INFO][3503] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="ee997dd1d544ab64cfcc666bbd772c8df6b695947d4e55fe311cdd3b60b623f0" Namespace="default" Pod="test-pod-1" WorkloadEndpoint="10.0.0.131-k8s-test--pod--1-eth0" May 13 00:04:03.849672 containerd[1467]: time="2025-05-13T00:04:03.832667039Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 13 00:04:03.849672 containerd[1467]: time="2025-05-13T00:04:03.849612307Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 13 00:04:03.849672 containerd[1467]: time="2025-05-13T00:04:03.849640414Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 13 00:04:03.849866 containerd[1467]: time="2025-05-13T00:04:03.849766476Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 13 00:04:03.870358 systemd[1]: Started cri-containerd-ee997dd1d544ab64cfcc666bbd772c8df6b695947d4e55fe311cdd3b60b623f0.scope - libcontainer container ee997dd1d544ab64cfcc666bbd772c8df6b695947d4e55fe311cdd3b60b623f0. May 13 00:04:03.881257 systemd-resolved[1310]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address May 13 00:04:03.897291 containerd[1467]: time="2025-05-13T00:04:03.897253616Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:test-pod-1,Uid:5fdae09a-c7cd-4b7b-b362-00a765f0f5b8,Namespace:default,Attempt:0,} returns sandbox id \"ee997dd1d544ab64cfcc666bbd772c8df6b695947d4e55fe311cdd3b60b623f0\"" May 13 00:04:03.900541 containerd[1467]: time="2025-05-13T00:04:03.899713648Z" level=info msg="PullImage \"ghcr.io/flatcar/nginx:latest\"" May 13 00:04:04.080675 kubelet[1775]: E0513 00:04:04.080528 1775 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 13 00:04:04.087116 update_engine[1452]: I20250513 00:04:04.087049 1452 update_attempter.cc:509] Updating boot flags... 
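The ipam entries above show node 10.0.0.131 reusing its affine block 192.168.18.0/26 and claiming 192.168.18.4 for test-pod-1 (the provisioner pod already holds 192.168.18.3). A minimal sketch, assuming nothing beyond the CIDR and address printed in those entries, that checks the claimed address falls inside the affine block:

package main

import (
	"fmt"
	"net/netip"
)

func main() {
	// Block and address taken from the ipam/ipam.go lines above.
	block := netip.MustParsePrefix("192.168.18.0/26") // covers 192.168.18.0 - 192.168.18.63
	addr := netip.MustParseAddr("192.168.18.4")
	fmt.Println(block.Contains(addr)) // true
}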
May 13 00:04:04.108431 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 42 scanned by (udev-worker) (3483) May 13 00:04:04.138214 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 42 scanned by (udev-worker) (3490) May 13 00:04:04.205924 containerd[1467]: time="2025-05-13T00:04:04.205876789Z" level=info msg="ImageUpdate event name:\"ghcr.io/flatcar/nginx:latest\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 13 00:04:04.206613 containerd[1467]: time="2025-05-13T00:04:04.206475212Z" level=info msg="stop pulling image ghcr.io/flatcar/nginx:latest: active requests=0, bytes read=61" May 13 00:04:04.209922 containerd[1467]: time="2025-05-13T00:04:04.209881467Z" level=info msg="Pulled image \"ghcr.io/flatcar/nginx:latest\" with image id \"sha256:e8b1cb61bd96acc3ff3c695318c9cc691213d532eee3731d038af92816fcb5f4\", repo tag \"ghcr.io/flatcar/nginx:latest\", repo digest \"ghcr.io/flatcar/nginx@sha256:beabce8f1782671ba500ddff99dd260fbf9c5ec85fb9c3162e35a3c40bafd023\", size \"69948737\" in 310.129357ms" May 13 00:04:04.209922 containerd[1467]: time="2025-05-13T00:04:04.209920970Z" level=info msg="PullImage \"ghcr.io/flatcar/nginx:latest\" returns image reference \"sha256:e8b1cb61bd96acc3ff3c695318c9cc691213d532eee3731d038af92816fcb5f4\"" May 13 00:04:04.212131 containerd[1467]: time="2025-05-13T00:04:04.212081761Z" level=info msg="CreateContainer within sandbox \"ee997dd1d544ab64cfcc666bbd772c8df6b695947d4e55fe311cdd3b60b623f0\" for container &ContainerMetadata{Name:test,Attempt:0,}" May 13 00:04:04.227911 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2149686066.mount: Deactivated successfully. May 13 00:04:04.241406 containerd[1467]: time="2025-05-13T00:04:04.241360292Z" level=info msg="CreateContainer within sandbox \"ee997dd1d544ab64cfcc666bbd772c8df6b695947d4e55fe311cdd3b60b623f0\" for &ContainerMetadata{Name:test,Attempt:0,} returns container id \"88a5bc3e9cad72b5f737126c100f2d409f649a947930bf397e928321ae333c35\"" May 13 00:04:04.241964 containerd[1467]: time="2025-05-13T00:04:04.241844924Z" level=info msg="StartContainer for \"88a5bc3e9cad72b5f737126c100f2d409f649a947930bf397e928321ae333c35\"" May 13 00:04:04.269405 systemd[1]: Started cri-containerd-88a5bc3e9cad72b5f737126c100f2d409f649a947930bf397e928321ae333c35.scope - libcontainer container 88a5bc3e9cad72b5f737126c100f2d409f649a947930bf397e928321ae333c35. 
May 13 00:04:04.296001 containerd[1467]: time="2025-05-13T00:04:04.295889926Z" level=info msg="StartContainer for \"88a5bc3e9cad72b5f737126c100f2d409f649a947930bf397e928321ae333c35\" returns successfully" May 13 00:04:05.081637 kubelet[1775]: E0513 00:04:05.081583 1775 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 13 00:04:05.813359 systemd-networkd[1394]: cali5ec59c6bf6e: Gained IPv6LL May 13 00:04:06.082056 kubelet[1775]: E0513 00:04:06.081937 1775 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 13 00:04:07.083085 kubelet[1775]: E0513 00:04:07.083032 1775 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 13 00:04:08.084224 kubelet[1775]: E0513 00:04:08.084182 1775 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 13 00:04:09.054907 kubelet[1775]: E0513 00:04:09.054862 1775 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 13 00:04:09.085127 kubelet[1775]: E0513 00:04:09.085089 1775 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 13 00:04:10.085260 kubelet[1775]: E0513 00:04:10.085217 1775 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 13 00:04:11.086116 kubelet[1775]: E0513 00:04:11.086064 1775 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"