Feb 13 19:17:52.923592 kernel: Booting Linux on physical CPU 0x0000000000 [0x413fd0c1]
Feb 13 19:17:52.923615 kernel: Linux version 6.6.74-flatcar (build@pony-truck.infra.kinvolk.io) (aarch64-cros-linux-gnu-gcc (Gentoo Hardened 14.2.1_p20241221 p7) 14.2.1 20241221, GNU ld (Gentoo 2.43 p3) 2.43.1) #1 SMP PREEMPT Thu Feb 13 17:39:57 -00 2025
Feb 13 19:17:52.923626 kernel: KASLR enabled
Feb 13 19:17:52.923631 kernel: efi: EFI v2.7 by EDK II
Feb 13 19:17:52.923637 kernel: efi: SMBIOS 3.0=0xdced0000 MEMATTR=0xdbbae018 ACPI 2.0=0xd9b43018 RNG=0xd9b43a18 MEMRESERVE=0xd9b40218
Feb 13 19:17:52.923643 kernel: random: crng init done
Feb 13 19:17:52.923650 kernel: secureboot: Secure boot disabled
Feb 13 19:17:52.923656 kernel: ACPI: Early table checksum verification disabled
Feb 13 19:17:52.923662 kernel: ACPI: RSDP 0x00000000D9B43018 000024 (v02 BOCHS )
Feb 13 19:17:52.923669 kernel: ACPI: XSDT 0x00000000D9B43F18 000064 (v01 BOCHS BXPC 00000001 01000013)
Feb 13 19:17:52.923675 kernel: ACPI: FACP 0x00000000D9B43B18 000114 (v06 BOCHS BXPC 00000001 BXPC 00000001)
Feb 13 19:17:52.923681 kernel: ACPI: DSDT 0x00000000D9B41018 0014A2 (v02 BOCHS BXPC 00000001 BXPC 00000001)
Feb 13 19:17:52.923687 kernel: ACPI: APIC 0x00000000D9B43C98 0001A8 (v04 BOCHS BXPC 00000001 BXPC 00000001)
Feb 13 19:17:52.923692 kernel: ACPI: PPTT 0x00000000D9B43098 00009C (v02 BOCHS BXPC 00000001 BXPC 00000001)
Feb 13 19:17:52.923699 kernel: ACPI: GTDT 0x00000000D9B43818 000060 (v02 BOCHS BXPC 00000001 BXPC 00000001)
Feb 13 19:17:52.923713 kernel: ACPI: MCFG 0x00000000D9B43A98 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001)
Feb 13 19:17:52.923719 kernel: ACPI: SPCR 0x00000000D9B43918 000050 (v02 BOCHS BXPC 00000001 BXPC 00000001)
Feb 13 19:17:52.923725 kernel: ACPI: DBG2 0x00000000D9B43998 000057 (v00 BOCHS BXPC 00000001 BXPC 00000001)
Feb 13 19:17:52.923732 kernel: ACPI: IORT 0x00000000D9B43198 000080 (v03 BOCHS BXPC 00000001 BXPC 00000001)
Feb 13 19:17:52.923738 kernel: ACPI: SPCR: console: pl011,mmio,0x9000000,9600
Feb 13 19:17:52.923744 kernel: NUMA: Failed to initialise from firmware
Feb 13 19:17:52.923750 kernel: NUMA: Faking a node at [mem 0x0000000040000000-0x00000000dcffffff]
Feb 13 19:17:52.923757 kernel: NUMA: NODE_DATA [mem 0xdc957800-0xdc95cfff]
Feb 13 19:17:52.923763 kernel: Zone ranges:
Feb 13 19:17:52.923769 kernel: DMA [mem 0x0000000040000000-0x00000000dcffffff]
Feb 13 19:17:52.923777 kernel: DMA32 empty
Feb 13 19:17:52.923783 kernel: Normal empty
Feb 13 19:17:52.923789 kernel: Movable zone start for each node
Feb 13 19:17:52.923795 kernel: Early memory node ranges
Feb 13 19:17:52.923802 kernel: node 0: [mem 0x0000000040000000-0x00000000d967ffff]
Feb 13 19:17:52.923808 kernel: node 0: [mem 0x00000000d9680000-0x00000000d968ffff]
Feb 13 19:17:52.923815 kernel: node 0: [mem 0x00000000d9690000-0x00000000d976ffff]
Feb 13 19:17:52.923821 kernel: node 0: [mem 0x00000000d9770000-0x00000000d9b3ffff]
Feb 13 19:17:52.923827 kernel: node 0: [mem 0x00000000d9b40000-0x00000000dce1ffff]
Feb 13 19:17:52.923833 kernel: node 0: [mem 0x00000000dce20000-0x00000000dceaffff]
Feb 13 19:17:52.923840 kernel: node 0: [mem 0x00000000dceb0000-0x00000000dcebffff]
Feb 13 19:17:52.923846 kernel: node 0: [mem 0x00000000dcec0000-0x00000000dcfdffff]
Feb 13 19:17:52.923854 kernel: node 0: [mem 0x00000000dcfe0000-0x00000000dcffffff]
Feb 13 19:17:52.923860 kernel: Initmem setup node 0 [mem 0x0000000040000000-0x00000000dcffffff]
Feb 13 19:17:52.923867 kernel: On node 0, zone DMA: 12288 pages in unavailable ranges
Feb 13 19:17:52.923876 kernel: psci: probing for conduit method from ACPI.
Feb 13 19:17:52.923882 kernel: psci: PSCIv1.1 detected in firmware.
Feb 13 19:17:52.923889 kernel: psci: Using standard PSCI v0.2 function IDs
Feb 13 19:17:52.923897 kernel: psci: Trusted OS migration not required
Feb 13 19:17:52.923904 kernel: psci: SMC Calling Convention v1.1
Feb 13 19:17:52.923911 kernel: smccc: KVM: hypervisor services detected (0x00000000 0x00000000 0x00000000 0x00000003)
Feb 13 19:17:52.923918 kernel: percpu: Embedded 31 pages/cpu s86696 r8192 d32088 u126976
Feb 13 19:17:52.923924 kernel: pcpu-alloc: s86696 r8192 d32088 u126976 alloc=31*4096
Feb 13 19:17:52.923931 kernel: pcpu-alloc: [0] 0 [0] 1 [0] 2 [0] 3
Feb 13 19:17:52.923938 kernel: Detected PIPT I-cache on CPU0
Feb 13 19:17:52.923945 kernel: CPU features: detected: GIC system register CPU interface
Feb 13 19:17:52.923951 kernel: CPU features: detected: Hardware dirty bit management
Feb 13 19:17:52.923958 kernel: CPU features: detected: Spectre-v4
Feb 13 19:17:52.923966 kernel: CPU features: detected: Spectre-BHB
Feb 13 19:17:52.923973 kernel: CPU features: kernel page table isolation forced ON by KASLR
Feb 13 19:17:52.923980 kernel: CPU features: detected: Kernel page table isolation (KPTI)
Feb 13 19:17:52.923987 kernel: CPU features: detected: ARM erratum 1418040
Feb 13 19:17:52.923993 kernel: CPU features: detected: SSBS not fully self-synchronizing
Feb 13 19:17:52.924000 kernel: alternatives: applying boot alternatives
Feb 13 19:17:52.924007 kernel: Kernel command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected acpi=force verity.usrhash=f06bad36699a22ae88c1968cd72b62b3503d97da521712e50a4b744320b1ba33
Feb 13 19:17:52.924014 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Feb 13 19:17:52.924021 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Feb 13 19:17:52.924028 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Feb 13 19:17:52.924035 kernel: Fallback order for Node 0: 0
Feb 13 19:17:52.924043 kernel: Built 1 zonelists, mobility grouping on. Total pages: 633024
Feb 13 19:17:52.924049 kernel: Policy zone: DMA
Feb 13 19:17:52.924056 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Feb 13 19:17:52.924062 kernel: software IO TLB: area num 4.
Feb 13 19:17:52.924069 kernel: software IO TLB: mapped [mem 0x00000000d2e00000-0x00000000d6e00000] (64MB)
Feb 13 19:17:52.924076 kernel: Memory: 2387536K/2572288K available (10304K kernel code, 2186K rwdata, 8092K rodata, 38336K init, 897K bss, 184752K reserved, 0K cma-reserved)
Feb 13 19:17:52.924082 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1
Feb 13 19:17:52.924089 kernel: rcu: Preemptible hierarchical RCU implementation.
Feb 13 19:17:52.924096 kernel: rcu: RCU event tracing is enabled.
Feb 13 19:17:52.924103 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=4.
Feb 13 19:17:52.924110 kernel: Trampoline variant of Tasks RCU enabled.
Feb 13 19:17:52.924116 kernel: Tracing variant of Tasks RCU enabled.
Feb 13 19:17:52.924125 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Feb 13 19:17:52.924131 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4
Feb 13 19:17:52.924138 kernel: NR_IRQS: 64, nr_irqs: 64, preallocated irqs: 0
Feb 13 19:17:52.924145 kernel: GICv3: 256 SPIs implemented
Feb 13 19:17:52.924151 kernel: GICv3: 0 Extended SPIs implemented
Feb 13 19:17:52.924158 kernel: Root IRQ handler: gic_handle_irq
Feb 13 19:17:52.924164 kernel: GICv3: GICv3 features: 16 PPIs, DirectLPI
Feb 13 19:17:52.924171 kernel: GICv3: CPU0: found redistributor 0 region 0:0x00000000080a0000
Feb 13 19:17:52.924177 kernel: ITS [mem 0x08080000-0x0809ffff]
Feb 13 19:17:52.924184 kernel: ITS@0x0000000008080000: allocated 8192 Devices @400c0000 (indirect, esz 8, psz 64K, shr 1)
Feb 13 19:17:52.924190 kernel: ITS@0x0000000008080000: allocated 8192 Interrupt Collections @400d0000 (flat, esz 8, psz 64K, shr 1)
Feb 13 19:17:52.924199 kernel: GICv3: using LPI property table @0x00000000400f0000
Feb 13 19:17:52.924205 kernel: GICv3: CPU0: using allocated LPI pending table @0x0000000040100000
Feb 13 19:17:52.924212 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Feb 13 19:17:52.924219 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Feb 13 19:17:52.924226 kernel: arch_timer: cp15 timer(s) running at 25.00MHz (virt).
Feb 13 19:17:52.924232 kernel: clocksource: arch_sys_counter: mask: 0xffffffffffffff max_cycles: 0x5c40939b5, max_idle_ns: 440795202646 ns
Feb 13 19:17:52.924239 kernel: sched_clock: 56 bits at 25MHz, resolution 40ns, wraps every 4398046511100ns
Feb 13 19:17:52.924246 kernel: arm-pv: using stolen time PV
Feb 13 19:17:52.924253 kernel: Console: colour dummy device 80x25
Feb 13 19:17:52.924260 kernel: ACPI: Core revision 20230628
Feb 13 19:17:52.924267 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 50.00 BogoMIPS (lpj=25000)
Feb 13 19:17:52.924305 kernel: pid_max: default: 32768 minimum: 301
Feb 13 19:17:52.924313 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity
Feb 13 19:17:52.924320 kernel: landlock: Up and running.
Feb 13 19:17:52.924327 kernel: SELinux: Initializing.
Feb 13 19:17:52.924333 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Feb 13 19:17:52.924340 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Feb 13 19:17:52.924347 kernel: RCU Tasks: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Feb 13 19:17:52.924355 kernel: RCU Tasks Trace: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Feb 13 19:17:52.924361 kernel: rcu: Hierarchical SRCU implementation.
Feb 13 19:17:52.924371 kernel: rcu: Max phase no-delay instances is 400.
Feb 13 19:17:52.924378 kernel: Platform MSI: ITS@0x8080000 domain created
Feb 13 19:17:52.924387 kernel: PCI/MSI: ITS@0x8080000 domain created
Feb 13 19:17:52.924400 kernel: Remapping and enabling EFI services.
Feb 13 19:17:52.924407 kernel: smp: Bringing up secondary CPUs ...
Feb 13 19:17:52.924414 kernel: Detected PIPT I-cache on CPU1
Feb 13 19:17:52.924421 kernel: GICv3: CPU1: found redistributor 1 region 0:0x00000000080c0000
Feb 13 19:17:52.924428 kernel: GICv3: CPU1: using allocated LPI pending table @0x0000000040110000
Feb 13 19:17:52.924435 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Feb 13 19:17:52.924443 kernel: CPU1: Booted secondary processor 0x0000000001 [0x413fd0c1]
Feb 13 19:17:52.924450 kernel: Detected PIPT I-cache on CPU2
Feb 13 19:17:52.924462 kernel: GICv3: CPU2: found redistributor 2 region 0:0x00000000080e0000
Feb 13 19:17:52.924471 kernel: GICv3: CPU2: using allocated LPI pending table @0x0000000040120000
Feb 13 19:17:52.924478 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Feb 13 19:17:52.924485 kernel: CPU2: Booted secondary processor 0x0000000002 [0x413fd0c1]
Feb 13 19:17:52.924492 kernel: Detected PIPT I-cache on CPU3
Feb 13 19:17:52.924499 kernel: GICv3: CPU3: found redistributor 3 region 0:0x0000000008100000
Feb 13 19:17:52.924507 kernel: GICv3: CPU3: using allocated LPI pending table @0x0000000040130000
Feb 13 19:17:52.924515 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Feb 13 19:17:52.924522 kernel: CPU3: Booted secondary processor 0x0000000003 [0x413fd0c1]
Feb 13 19:17:52.924529 kernel: smp: Brought up 1 node, 4 CPUs
Feb 13 19:17:52.924536 kernel: SMP: Total of 4 processors activated.
Feb 13 19:17:52.924544 kernel: CPU features: detected: 32-bit EL0 Support
Feb 13 19:17:52.924557 kernel: CPU features: detected: Data cache clean to the PoU not required for I/D coherence
Feb 13 19:17:52.924564 kernel: CPU features: detected: Common not Private translations
Feb 13 19:17:52.924573 kernel: CPU features: detected: CRC32 instructions
Feb 13 19:17:52.924582 kernel: CPU features: detected: Enhanced Virtualization Traps
Feb 13 19:17:52.924589 kernel: CPU features: detected: RCpc load-acquire (LDAPR)
Feb 13 19:17:52.924596 kernel: CPU features: detected: LSE atomic instructions
Feb 13 19:17:52.924603 kernel: CPU features: detected: Privileged Access Never
Feb 13 19:17:52.924610 kernel: CPU features: detected: RAS Extension Support
Feb 13 19:17:52.924617 kernel: CPU features: detected: Speculative Store Bypassing Safe (SSBS)
Feb 13 19:17:52.924625 kernel: CPU: All CPU(s) started at EL1
Feb 13 19:17:52.924632 kernel: alternatives: applying system-wide alternatives
Feb 13 19:17:52.924639 kernel: devtmpfs: initialized
Feb 13 19:17:52.924646 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Feb 13 19:17:52.924655 kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear)
Feb 13 19:17:52.924662 kernel: pinctrl core: initialized pinctrl subsystem
Feb 13 19:17:52.924669 kernel: SMBIOS 3.0.0 present.
Feb 13 19:17:52.924677 kernel: DMI: QEMU KVM Virtual Machine, BIOS unknown 02/02/2022
Feb 13 19:17:52.924684 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Feb 13 19:17:52.924691 kernel: DMA: preallocated 512 KiB GFP_KERNEL pool for atomic allocations
Feb 13 19:17:52.924698 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations
Feb 13 19:17:52.924705 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations
Feb 13 19:17:52.924714 kernel: audit: initializing netlink subsys (disabled)
Feb 13 19:17:52.924721 kernel: audit: type=2000 audit(0.027:1): state=initialized audit_enabled=0 res=1
Feb 13 19:17:52.924729 kernel: thermal_sys: Registered thermal governor 'step_wise'
Feb 13 19:17:52.924736 kernel: cpuidle: using governor menu
Feb 13 19:17:52.924743 kernel: hw-breakpoint: found 6 breakpoint and 4 watchpoint registers.
Feb 13 19:17:52.924750 kernel: ASID allocator initialised with 32768 entries
Feb 13 19:17:52.924757 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Feb 13 19:17:52.924764 kernel: Serial: AMBA PL011 UART driver
Feb 13 19:17:52.924771 kernel: Modules: 2G module region forced by RANDOMIZE_MODULE_REGION_FULL
Feb 13 19:17:52.924780 kernel: Modules: 0 pages in range for non-PLT usage
Feb 13 19:17:52.924787 kernel: Modules: 509280 pages in range for PLT usage
Feb 13 19:17:52.924794 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Feb 13 19:17:52.924801 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 1.00 GiB page
Feb 13 19:17:52.924809 kernel: HugeTLB: registered 32.0 MiB page size, pre-allocated 0 pages
Feb 13 19:17:52.924816 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 32.0 MiB page
Feb 13 19:17:52.924823 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Feb 13 19:17:52.924830 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 2.00 MiB page
Feb 13 19:17:52.924837 kernel: HugeTLB: registered 64.0 KiB page size, pre-allocated 0 pages
Feb 13 19:17:52.924846 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 64.0 KiB page
Feb 13 19:17:52.924853 kernel: ACPI: Added _OSI(Module Device)
Feb 13 19:17:52.924860 kernel: ACPI: Added _OSI(Processor Device)
Feb 13 19:17:52.924867 kernel: ACPI: Added _OSI(3.0 _SCP Extensions)
Feb 13 19:17:52.924874 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Feb 13 19:17:52.924881 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Feb 13 19:17:52.924888 kernel: ACPI: Interpreter enabled
Feb 13 19:17:52.924896 kernel: ACPI: Using GIC for interrupt routing
Feb 13 19:17:52.924903 kernel: ACPI: MCFG table detected, 1 entries
Feb 13 19:17:52.924910 kernel: ARMH0011:00: ttyAMA0 at MMIO 0x9000000 (irq = 12, base_baud = 0) is a SBSA
Feb 13 19:17:52.924918 kernel: printk: console [ttyAMA0] enabled
Feb 13 19:17:52.924925 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Feb 13 19:17:52.925064 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
Feb 13 19:17:52.925137 kernel: acpi PNP0A08:00: _OSC: platform does not support [LTR]
Feb 13 19:17:52.925202 kernel: acpi PNP0A08:00: _OSC: OS now controls [PCIeHotplug PME AER PCIeCapability]
Feb 13 19:17:52.925267 kernel: acpi PNP0A08:00: ECAM area [mem 0x4010000000-0x401fffffff] reserved by PNP0C02:00
Feb 13 19:17:52.925364 kernel: acpi PNP0A08:00: ECAM at [mem 0x4010000000-0x401fffffff] for [bus 00-ff]
Feb 13 19:17:52.925379 kernel: ACPI: Remapped I/O 0x000000003eff0000 to [io 0x0000-0xffff window]
Feb 13 19:17:52.925386 kernel: PCI host bridge to bus 0000:00
Feb 13 19:17:52.925460 kernel: pci_bus 0000:00: root bus resource [mem 0x10000000-0x3efeffff window]
Feb 13 19:17:52.925523 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0xffff window]
Feb 13 19:17:52.925591 kernel: pci_bus 0000:00: root bus resource [mem 0x8000000000-0xffffffffff window]
Feb 13 19:17:52.925652 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Feb 13 19:17:52.925733 kernel: pci 0000:00:00.0: [1b36:0008] type 00 class 0x060000
Feb 13 19:17:52.925816 kernel: pci 0000:00:01.0: [1af4:1005] type 00 class 0x00ff00
Feb 13 19:17:52.925885 kernel: pci 0000:00:01.0: reg 0x10: [io 0x0000-0x001f]
Feb 13 19:17:52.925950 kernel: pci 0000:00:01.0: reg 0x14: [mem 0x10000000-0x10000fff]
Feb 13 19:17:52.926015 kernel: pci 0000:00:01.0: reg 0x20: [mem 0x8000000000-0x8000003fff 64bit pref]
Feb 13 19:17:52.926080 kernel: pci 0000:00:01.0: BAR 4: assigned [mem 0x8000000000-0x8000003fff 64bit pref]
Feb 13 19:17:52.926145 kernel: pci 0000:00:01.0: BAR 1: assigned [mem 0x10000000-0x10000fff]
Feb 13 19:17:52.926214 kernel: pci 0000:00:01.0: BAR 0: assigned [io 0x1000-0x101f]
Feb 13 19:17:52.926347 kernel: pci_bus 0000:00: resource 4 [mem 0x10000000-0x3efeffff window]
Feb 13 19:17:52.926457 kernel: pci_bus 0000:00: resource 5 [io 0x0000-0xffff window]
Feb 13 19:17:52.926555 kernel: pci_bus 0000:00: resource 6 [mem 0x8000000000-0xffffffffff window]
Feb 13 19:17:52.926566 kernel: ACPI: PCI: Interrupt link GSI0 configured for IRQ 35
Feb 13 19:17:52.926574 kernel: ACPI: PCI: Interrupt link GSI1 configured for IRQ 36
Feb 13 19:17:52.926581 kernel: ACPI: PCI: Interrupt link GSI2 configured for IRQ 37
Feb 13 19:17:52.926592 kernel: ACPI: PCI: Interrupt link GSI3 configured for IRQ 38
Feb 13 19:17:52.926608 kernel: iommu: Default domain type: Translated
Feb 13 19:17:52.926618 kernel: iommu: DMA domain TLB invalidation policy: strict mode
Feb 13 19:17:52.926627 kernel: efivars: Registered efivars operations
Feb 13 19:17:52.926637 kernel: vgaarb: loaded
Feb 13 19:17:52.926645 kernel: clocksource: Switched to clocksource arch_sys_counter
Feb 13 19:17:52.926655 kernel: VFS: Disk quotas dquot_6.6.0
Feb 13 19:17:52.926665 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Feb 13 19:17:52.926677 kernel: pnp: PnP ACPI init
Feb 13 19:17:52.926792 kernel: system 00:00: [mem 0x4010000000-0x401fffffff window] could not be reserved
Feb 13 19:17:52.926810 kernel: pnp: PnP ACPI: found 1 devices
Feb 13 19:17:52.926821 kernel: NET: Registered PF_INET protocol family
Feb 13 19:17:52.926829 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
Feb 13 19:17:52.926838 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
Feb 13 19:17:52.926846 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Feb 13 19:17:52.926854 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
Feb 13 19:17:52.926865 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear)
Feb 13 19:17:52.926872 kernel: TCP: Hash tables configured (established 32768 bind 32768)
Feb 13 19:17:52.926884 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
Feb 13 19:17:52.926895 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
Feb 13 19:17:52.926902 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Feb 13 19:17:52.926913 kernel: PCI: CLS 0 bytes, default 64
Feb 13 19:17:52.926921 kernel: kvm [1]: HYP mode not available
Feb 13 19:17:52.926930 kernel: Initialise system trusted keyrings
Feb 13 19:17:52.926941 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
Feb 13 19:17:52.926949 kernel: Key type asymmetric registered
Feb 13 19:17:52.926957 kernel: Asymmetric key parser 'x509' registered
Feb 13 19:17:52.926970 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 250)
Feb 13 19:17:52.926978 kernel: io scheduler mq-deadline registered
Feb 13 19:17:52.926986 kernel: io scheduler kyber registered
Feb 13 19:17:52.926995 kernel: io scheduler bfq registered
Feb 13 19:17:52.927006 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXSYBUS:00/PNP0C0C:00/input/input0
Feb 13 19:17:52.927014 kernel: ACPI: button: Power Button [PWRB]
Feb 13 19:17:52.927023 kernel: ACPI: \_SB_.PCI0.GSI1: Enabled at IRQ 36
Feb 13 19:17:52.927114 kernel: virtio-pci 0000:00:01.0: enabling device (0005 -> 0007)
Feb 13 19:17:52.927126 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Feb 13 19:17:52.927135 kernel: thunder_xcv, ver 1.0
Feb 13 19:17:52.927143 kernel: thunder_bgx, ver 1.0
Feb 13 19:17:52.927150 kernel: nicpf, ver 1.0
Feb 13 19:17:52.927156 kernel: nicvf, ver 1.0
Feb 13 19:17:52.927229 kernel: rtc-efi rtc-efi.0: registered as rtc0
Feb 13 19:17:52.927317 kernel: rtc-efi rtc-efi.0: setting system clock to 2025-02-13T19:17:52 UTC (1739474272)
Feb 13 19:17:52.927328 kernel: hid: raw HID events driver (C) Jiri Kosina
Feb 13 19:17:52.927335 kernel: hw perfevents: enabled with armv8_pmuv3_0 PMU driver, 7 counters available
Feb 13 19:17:52.927342 kernel: watchdog: Delayed init of the lockup detector failed: -19
Feb 13 19:17:52.927352 kernel: watchdog: Hard watchdog permanently disabled
Feb 13 19:17:52.927359 kernel: NET: Registered PF_INET6 protocol family
Feb 13 19:17:52.927366 kernel: Segment Routing with IPv6
Feb 13 19:17:52.927373 kernel: In-situ OAM (IOAM) with IPv6
Feb 13 19:17:52.927380 kernel: NET: Registered PF_PACKET protocol family
Feb 13 19:17:52.927387 kernel: Key type dns_resolver registered
Feb 13 19:17:52.927395 kernel: registered taskstats version 1
Feb 13 19:17:52.927402 kernel: Loading compiled-in X.509 certificates
Feb 13 19:17:52.927409 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.74-flatcar: 58bec1a0c6b8a133d1af4ea745973da0351f7027'
Feb 13 19:17:52.927418 kernel: Key type .fscrypt registered
Feb 13 19:17:52.927425 kernel: Key type fscrypt-provisioning registered
Feb 13 19:17:52.927433 kernel: ima: No TPM chip found, activating TPM-bypass!
Feb 13 19:17:52.927440 kernel: ima: Allocated hash algorithm: sha1
Feb 13 19:17:52.927447 kernel: ima: No architecture policies found
Feb 13 19:17:52.927454 kernel: alg: No test for fips(ansi_cprng) (fips_ansi_cprng)
Feb 13 19:17:52.927461 kernel: clk: Disabling unused clocks
Feb 13 19:17:52.927468 kernel: Freeing unused kernel memory: 38336K
Feb 13 19:17:52.927477 kernel: Run /init as init process
Feb 13 19:17:52.927484 kernel: with arguments:
Feb 13 19:17:52.927491 kernel: /init
Feb 13 19:17:52.927497 kernel: with environment:
Feb 13 19:17:52.927504 kernel: HOME=/
Feb 13 19:17:52.927511 kernel: TERM=linux
Feb 13 19:17:52.927518 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a
Feb 13 19:17:52.927526 systemd[1]: Successfully made /usr/ read-only.
Feb 13 19:17:52.927536 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE)
Feb 13 19:17:52.927552 systemd[1]: Detected virtualization kvm.
Feb 13 19:17:52.927560 systemd[1]: Detected architecture arm64.
Feb 13 19:17:52.927568 systemd[1]: Running in initrd.
Feb 13 19:17:52.927576 systemd[1]: No hostname configured, using default hostname.
Feb 13 19:17:52.927584 systemd[1]: Hostname set to .
Feb 13 19:17:52.927591 systemd[1]: Initializing machine ID from VM UUID.
Feb 13 19:17:52.927599 systemd[1]: Queued start job for default target initrd.target.
Feb 13 19:17:52.927609 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Feb 13 19:17:52.927617 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Feb 13 19:17:52.927625 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Feb 13 19:17:52.927633 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Feb 13 19:17:52.927641 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Feb 13 19:17:52.927650 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Feb 13 19:17:52.927658 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Feb 13 19:17:52.927668 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Feb 13 19:17:52.927675 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Feb 13 19:17:52.927683 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Feb 13 19:17:52.927691 systemd[1]: Reached target paths.target - Path Units.
Feb 13 19:17:52.927699 systemd[1]: Reached target slices.target - Slice Units.
Feb 13 19:17:52.927706 systemd[1]: Reached target swap.target - Swaps.
Feb 13 19:17:52.927714 systemd[1]: Reached target timers.target - Timer Units.
Feb 13 19:17:52.927722 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Feb 13 19:17:52.927729 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Feb 13 19:17:52.927739 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Feb 13 19:17:52.927746 systemd[1]: Listening on systemd-journald.socket - Journal Sockets.
Feb 13 19:17:52.927754 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Feb 13 19:17:52.927762 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Feb 13 19:17:52.927770 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Feb 13 19:17:52.927777 systemd[1]: Reached target sockets.target - Socket Units.
Feb 13 19:17:52.927785 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Feb 13 19:17:52.927793 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Feb 13 19:17:52.927803 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Feb 13 19:17:52.927811 systemd[1]: Starting systemd-fsck-usr.service...
Feb 13 19:17:52.927818 systemd[1]: Starting systemd-journald.service - Journal Service...
Feb 13 19:17:52.927826 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Feb 13 19:17:52.927834 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Feb 13 19:17:52.927842 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Feb 13 19:17:52.927851 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Feb 13 19:17:52.927861 systemd[1]: Finished systemd-fsck-usr.service.
Feb 13 19:17:52.927869 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Feb 13 19:17:52.927877 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Feb 13 19:17:52.927889 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Feb 13 19:17:52.927921 systemd-journald[238]: Collecting audit messages is disabled.
Feb 13 19:17:52.927942 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Feb 13 19:17:52.927951 systemd-journald[238]: Journal started
Feb 13 19:17:52.927974 systemd-journald[238]: Runtime Journal (/run/log/journal/ac3b0616d64849b3b2ccd2ef46bd8729) is 5.9M, max 47.3M, 41.4M free.
Feb 13 19:17:52.907581 systemd-modules-load[239]: Inserted module 'overlay'
Feb 13 19:17:52.934308 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Feb 13 19:17:52.934345 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Feb 13 19:17:52.938308 systemd[1]: Started systemd-journald.service - Journal Service.
Feb 13 19:17:52.938347 kernel: Bridge firewalling registered
Feb 13 19:17:52.938304 systemd-modules-load[239]: Inserted module 'br_netfilter'
Feb 13 19:17:52.939403 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Feb 13 19:17:52.941586 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Feb 13 19:17:52.953692 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Feb 13 19:17:52.955607 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Feb 13 19:17:52.958144 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Feb 13 19:17:52.961658 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Feb 13 19:17:52.963482 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Feb 13 19:17:52.972472 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Feb 13 19:17:52.975317 dracut-cmdline[276]: dracut-dracut-053
Feb 13 19:17:52.980134 dracut-cmdline[276]: Using kernel command line parameters: rd.driver.pre=btrfs BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected acpi=force verity.usrhash=f06bad36699a22ae88c1968cd72b62b3503d97da521712e50a4b744320b1ba33
Feb 13 19:17:52.979453 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Feb 13 19:17:53.013013 systemd-resolved[289]: Positive Trust Anchors:
Feb 13 19:17:53.013032 systemd-resolved[289]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Feb 13 19:17:53.013063 systemd-resolved[289]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Feb 13 19:17:53.017878 systemd-resolved[289]: Defaulting to hostname 'linux'.
Feb 13 19:17:53.018875 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Feb 13 19:17:53.021978 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Feb 13 19:17:53.053312 kernel: SCSI subsystem initialized
Feb 13 19:17:53.058292 kernel: Loading iSCSI transport class v2.0-870.
Feb 13 19:17:53.069313 kernel: iscsi: registered transport (tcp)
Feb 13 19:17:53.083301 kernel: iscsi: registered transport (qla4xxx)
Feb 13 19:17:53.083324 kernel: QLogic iSCSI HBA Driver
Feb 13 19:17:53.122947 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
Feb 13 19:17:53.135703 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
Feb 13 19:17:53.151624 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Feb 13 19:17:53.151675 kernel: device-mapper: uevent: version 1.0.3
Feb 13 19:17:53.152932 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com
Feb 13 19:17:53.200341 kernel: raid6: neonx8 gen() 15795 MB/s
Feb 13 19:17:53.217314 kernel: raid6: neonx4 gen() 15820 MB/s
Feb 13 19:17:53.234317 kernel: raid6: neonx2 gen() 13325 MB/s
Feb 13 19:17:53.251314 kernel: raid6: neonx1 gen() 10557 MB/s
Feb 13 19:17:53.268311 kernel: raid6: int64x8 gen() 6783 MB/s
Feb 13 19:17:53.285317 kernel: raid6: int64x4 gen() 7352 MB/s
Feb 13 19:17:53.302310 kernel: raid6: int64x2 gen() 6114 MB/s
Feb 13 19:17:53.319316 kernel: raid6: int64x1 gen() 5059 MB/s
Feb 13 19:17:53.319361 kernel: raid6: using algorithm neonx4 gen() 15820 MB/s
Feb 13 19:17:53.336318 kernel: raid6: .... xor() 12424 MB/s, rmw enabled
Feb 13 19:17:53.336357 kernel: raid6: using neon recovery algorithm
Feb 13 19:17:53.341306 kernel: xor: measuring software checksum speed
Feb 13 19:17:53.341354 kernel: 8regs : 21653 MB/sec
Feb 13 19:17:53.341365 kernel: 32regs : 20302 MB/sec
Feb 13 19:17:53.342696 kernel: arm64_neon : 27804 MB/sec
Feb 13 19:17:53.342724 kernel: xor: using function: arm64_neon (27804 MB/sec)
Feb 13 19:17:53.392331 kernel: Btrfs loaded, zoned=no, fsverity=no
Feb 13 19:17:53.402429 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
Feb 13 19:17:53.414475 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Feb 13 19:17:53.427484 systemd-udevd[464]: Using default interface naming scheme 'v255'.
Feb 13 19:17:53.431839 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Feb 13 19:17:53.434713 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
Feb 13 19:17:53.451297 dracut-pre-trigger[472]: rd.md=0: removing MD RAID activation
Feb 13 19:17:53.480341 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
Feb 13 19:17:53.489473 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Feb 13 19:17:53.529207 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Feb 13 19:17:53.536479 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
Feb 13 19:17:53.548884 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
Feb 13 19:17:53.549808 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
Feb 13 19:17:53.551358 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Feb 13 19:17:53.554109 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Feb 13 19:17:53.560436 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
Feb 13 19:17:53.571330 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
Feb 13 19:17:53.587733 kernel: virtio_blk virtio1: 1/0/0 default/read/poll queues
Feb 13 19:17:53.600798 kernel: virtio_blk virtio1: [vda] 19775488 512-byte logical blocks (10.1 GB/9.43 GiB)
Feb 13 19:17:53.600926 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
Feb 13 19:17:53.600938 kernel: GPT:9289727 != 19775487
Feb 13 19:17:53.600947 kernel: GPT:Alternate GPT header not at the end of the disk.
Feb 13 19:17:53.600956 kernel: GPT:9289727 != 19775487
Feb 13 19:17:53.600964 kernel: GPT: Use GNU Parted to correct GPT errors.
Feb 13 19:17:53.600973 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Feb 13 19:17:53.594398 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Feb 13 19:17:53.594510 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Feb 13 19:17:53.600761 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Feb 13 19:17:53.601710 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Feb 13 19:17:53.601890 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Feb 13 19:17:53.605292 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Feb 13 19:17:53.615304 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/vda6 scanned by (udev-worker) (523)
Feb 13 19:17:53.616128 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Feb 13 19:17:53.623476 kernel: BTRFS: device fsid 4fff035f-dd55-45d8-9bb7-2a61f21b22d5 devid 1 transid 38 /dev/vda3 scanned by (udev-worker) (522)
Feb 13 19:17:53.626099 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Feb 13 19:17:53.634465 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM.
Feb 13 19:17:53.651029 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT.
Feb 13 19:17:53.658305 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
Feb 13 19:17:53.664162 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132.
Feb 13 19:17:53.665089 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A.
Feb 13 19:17:53.679439 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
Feb 13 19:17:53.681413 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Feb 13 19:17:53.685728 disk-uuid[552]: Primary Header is updated.
Feb 13 19:17:53.685728 disk-uuid[552]: Secondary Entries is updated.
Feb 13 19:17:53.685728 disk-uuid[552]: Secondary Header is updated.
Feb 13 19:17:53.688293 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Feb 13 19:17:53.699694 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Feb 13 19:17:54.699295 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Feb 13 19:17:54.699716 disk-uuid[553]: The operation has completed successfully.
Feb 13 19:17:54.736317 systemd[1]: disk-uuid.service: Deactivated successfully.
Feb 13 19:17:54.736430 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
Feb 13 19:17:54.764478 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr...
Feb 13 19:17:54.769919 sh[575]: Success
Feb 13 19:17:54.781342 kernel: device-mapper: verity: sha256 using implementation "sha256-ce"
Feb 13 19:17:54.833978 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr.
Feb 13 19:17:54.845868 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr...
Feb 13 19:17:54.849327 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr.
Feb 13 19:17:54.858374 kernel: BTRFS info (device dm-0): first mount of filesystem 4fff035f-dd55-45d8-9bb7-2a61f21b22d5
Feb 13 19:17:54.858428 kernel: BTRFS info (device dm-0): using crc32c (crc32c-generic) checksum algorithm
Feb 13 19:17:54.858440 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead
Feb 13 19:17:54.859807 kernel: BTRFS info (device dm-0): disabling log replay at mount time
Feb 13 19:17:54.859823 kernel: BTRFS info (device dm-0): using free space tree
Feb 13 19:17:54.864706 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr.
Feb 13 19:17:54.865944 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
Feb 13 19:17:54.878481 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
Feb 13 19:17:54.879892 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
Feb 13 19:17:54.891793 kernel: BTRFS info (device vda6): first mount of filesystem 843e6c1f-b3c4-44a3-b5c6-7983dd77012d
Feb 13 19:17:54.891846 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm
Feb 13 19:17:54.891857 kernel: BTRFS info (device vda6): using free space tree
Feb 13 19:17:54.896344 kernel: BTRFS info (device vda6): auto enabling async discard
Feb 13 19:17:54.905970 systemd[1]: mnt-oem.mount: Deactivated successfully.
Feb 13 19:17:54.907465 kernel: BTRFS info (device vda6): last unmount of filesystem 843e6c1f-b3c4-44a3-b5c6-7983dd77012d
Feb 13 19:17:54.913693 systemd[1]: Finished ignition-setup.service - Ignition (setup).
Feb 13 19:17:54.919679 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
Feb 13 19:17:54.987223 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Feb 13 19:17:54.999444 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Feb 13 19:17:55.029987 ignition[681]: Ignition 2.20.0
Feb 13 19:17:55.029996 ignition[681]: Stage: fetch-offline
Feb 13 19:17:55.030030 ignition[681]: no configs at "/usr/lib/ignition/base.d"
Feb 13 19:17:55.030039 ignition[681]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Feb 13 19:17:55.030482 ignition[681]: parsed url from cmdline: ""
Feb 13 19:17:55.030485 ignition[681]: no config URL provided
Feb 13 19:17:55.030489 ignition[681]: reading system config file "/usr/lib/ignition/user.ign"
Feb 13 19:17:55.030497 ignition[681]: no config at "/usr/lib/ignition/user.ign"
Feb 13 19:17:55.030519 ignition[681]: op(1): [started] loading QEMU firmware config module
Feb 13 19:17:55.030523 ignition[681]: op(1): executing: "modprobe" "qemu_fw_cfg"
Feb 13 19:17:55.041080 ignition[681]: op(1): [finished] loading QEMU firmware config module
Feb 13 19:17:55.044158 systemd-networkd[766]: lo: Link UP
Feb 13 19:17:55.044168 systemd-networkd[766]: lo: Gained carrier
Feb 13 19:17:55.045113 systemd-networkd[766]: Enumeration completed
Feb 13 19:17:55.045586 systemd-networkd[766]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Feb 13 19:17:55.045589 systemd-networkd[766]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Feb 13 19:17:55.046363 systemd-networkd[766]: eth0: Link UP
Feb 13 19:17:55.046366 systemd-networkd[766]: eth0: Gained carrier
Feb 13 19:17:55.046373 systemd-networkd[766]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Feb 13 19:17:55.053377 ignition[681]: parsing config with SHA512: 29c2cb85a9c3fd2bd2626b959ff2456538d994d007c8747f497fe55f0050819eef03c613526d6995ba383d4ead962de24d4416b20a05bb1d70964c196935f7d1
Feb 13 19:17:55.047170 systemd[1]: Started systemd-networkd.service - Network Configuration.
Feb 13 19:17:55.050405 systemd[1]: Reached target network.target - Network.
Feb 13 19:17:55.056985 unknown[681]: fetched base config from "system"
Feb 13 19:17:55.056992 unknown[681]: fetched user config from "qemu"
Feb 13 19:17:55.057255 ignition[681]: fetch-offline: fetch-offline passed
Feb 13 19:17:55.059181 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
Feb 13 19:17:55.057376 ignition[681]: Ignition finished successfully
Feb 13 19:17:55.060429 systemd[1]: ignition-fetch.service - Ignition (fetch) was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json).
Feb 13 19:17:55.065364 systemd-networkd[766]: eth0: DHCPv4 address 10.0.0.114/16, gateway 10.0.0.1 acquired from 10.0.0.1
Feb 13 19:17:55.071462 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
Feb 13 19:17:55.083935 ignition[773]: Ignition 2.20.0
Feb 13 19:17:55.083945 ignition[773]: Stage: kargs
Feb 13 19:17:55.084130 ignition[773]: no configs at "/usr/lib/ignition/base.d"
Feb 13 19:17:55.084140 ignition[773]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Feb 13 19:17:55.087056 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
Feb 13 19:17:55.084885 ignition[773]: kargs: kargs passed
Feb 13 19:17:55.084933 ignition[773]: Ignition finished successfully
Feb 13 19:17:55.098501 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
Feb 13 19:17:55.108197 ignition[783]: Ignition 2.20.0
Feb 13 19:17:55.108208 ignition[783]: Stage: disks
Feb 13 19:17:55.108383 ignition[783]: no configs at "/usr/lib/ignition/base.d"
Feb 13 19:17:55.108392 ignition[783]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Feb 13 19:17:55.109052 ignition[783]: disks: disks passed
Feb 13 19:17:55.109094 ignition[783]: Ignition finished successfully
Feb 13 19:17:55.112310 systemd[1]: Finished ignition-disks.service - Ignition (disks).
Feb 13 19:17:55.113386 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
Feb 13 19:17:55.114758 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Feb 13 19:17:55.116592 systemd[1]: Reached target local-fs.target - Local File Systems.
Feb 13 19:17:55.118290 systemd[1]: Reached target sysinit.target - System Initialization.
Feb 13 19:17:55.120007 systemd[1]: Reached target basic.target - Basic System.
Feb 13 19:17:55.128457 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
Feb 13 19:17:55.144034 systemd-fsck[793]: ROOT: clean, 14/553520 files, 52654/553472 blocks
Feb 13 19:17:55.148230 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
Feb 13 19:17:55.155541 systemd[1]: Mounting sysroot.mount - /sysroot...
Feb 13 19:17:55.206301 kernel: EXT4-fs (vda9): mounted filesystem 24882d04-b1a5-4a27-95f1-925956e69b18 r/w with ordered data mode. Quota mode: none.
Feb 13 19:17:55.206414 systemd[1]: Mounted sysroot.mount - /sysroot.
Feb 13 19:17:55.207481 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
Feb 13 19:17:55.223390 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Feb 13 19:17:55.225117 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
Feb 13 19:17:55.226438 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met.
Feb 13 19:17:55.226494 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Feb 13 19:17:55.233261 kernel: BTRFS: device label OEM devid 1 transid 15 /dev/vda6 scanned by mount (801)
Feb 13 19:17:55.233300 kernel: BTRFS info (device vda6): first mount of filesystem 843e6c1f-b3c4-44a3-b5c6-7983dd77012d
Feb 13 19:17:55.226523 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
Feb 13 19:17:55.237141 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm
Feb 13 19:17:55.237162 kernel: BTRFS info (device vda6): using free space tree
Feb 13 19:17:55.233222 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
Feb 13 19:17:55.235816 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
Feb 13 19:17:55.240730 kernel: BTRFS info (device vda6): auto enabling async discard
Feb 13 19:17:55.241334 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Feb 13 19:17:55.289850 initrd-setup-root[825]: cut: /sysroot/etc/passwd: No such file or directory
Feb 13 19:17:55.294306 initrd-setup-root[832]: cut: /sysroot/etc/group: No such file or directory
Feb 13 19:17:55.298904 initrd-setup-root[839]: cut: /sysroot/etc/shadow: No such file or directory
Feb 13 19:17:55.301783 initrd-setup-root[846]: cut: /sysroot/etc/gshadow: No such file or directory
Feb 13 19:17:55.386433 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
Feb 13 19:17:55.397380 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
Feb 13 19:17:55.399647 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
Feb 13 19:17:55.404312 kernel: BTRFS info (device vda6): last unmount of filesystem 843e6c1f-b3c4-44a3-b5c6-7983dd77012d
Feb 13 19:17:55.424260 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
Feb 13 19:17:55.429216 ignition[914]: INFO : Ignition 2.20.0
Feb 13 19:17:55.429216 ignition[914]: INFO : Stage: mount
Feb 13 19:17:55.430671 ignition[914]: INFO : no configs at "/usr/lib/ignition/base.d"
Feb 13 19:17:55.430671 ignition[914]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Feb 13 19:17:55.430671 ignition[914]: INFO : mount: mount passed
Feb 13 19:17:55.430671 ignition[914]: INFO : Ignition finished successfully
Feb 13 19:17:55.434326 systemd[1]: Finished ignition-mount.service - Ignition (mount).
Feb 13 19:17:55.437045 systemd[1]: Starting ignition-files.service - Ignition (files)...
Feb 13 19:17:55.897937 systemd[1]: sysroot-oem.mount: Deactivated successfully.
Feb 13 19:17:55.910480 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Feb 13 19:17:55.916286 kernel: BTRFS: device label OEM devid 1 transid 16 /dev/vda6 scanned by mount (928)
Feb 13 19:17:55.918533 kernel: BTRFS info (device vda6): first mount of filesystem 843e6c1f-b3c4-44a3-b5c6-7983dd77012d
Feb 13 19:17:55.918548 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm
Feb 13 19:17:55.918557 kernel: BTRFS info (device vda6): using free space tree
Feb 13 19:17:55.920286 kernel: BTRFS info (device vda6): auto enabling async discard
Feb 13 19:17:55.921484 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Feb 13 19:17:55.940303 ignition[945]: INFO : Ignition 2.20.0
Feb 13 19:17:55.940303 ignition[945]: INFO : Stage: files
Feb 13 19:17:55.940303 ignition[945]: INFO : no configs at "/usr/lib/ignition/base.d"
Feb 13 19:17:55.940303 ignition[945]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Feb 13 19:17:55.943398 ignition[945]: DEBUG : files: compiled without relabeling support, skipping
Feb 13 19:17:55.943398 ignition[945]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Feb 13 19:17:55.943398 ignition[945]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Feb 13 19:17:55.946344 ignition[945]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Feb 13 19:17:55.946344 ignition[945]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Feb 13 19:17:55.946344 ignition[945]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Feb 13 19:17:55.946344 ignition[945]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/home/core/install.sh"
Feb 13 19:17:55.946344 ignition[945]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/home/core/install.sh"
Feb 13 19:17:55.946344 ignition[945]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/etc/flatcar/update.conf"
Feb 13 19:17:55.946344 ignition[945]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Feb 13 19:17:55.946344 ignition[945]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.0-arm64.raw"
Feb 13 19:17:55.946344 ignition[945]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.0-arm64.raw"
Feb 13 19:17:55.946344 ignition[945]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.0-arm64.raw"
Feb 13 19:17:55.946344 ignition[945]: INFO : files: createFilesystemsFiles: createFiles: op(6): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.32.0-arm64.raw: attempt #1
Feb 13 19:17:55.943969 unknown[945]: wrote ssh authorized keys file for user: core
Feb 13 19:17:56.117416 ignition[945]: INFO : files: createFilesystemsFiles: createFiles: op(6): GET result: OK
Feb 13 19:17:56.321743 systemd-networkd[766]: eth0: Gained IPv6LL
Feb 13 19:17:56.384695 ignition[945]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.0-arm64.raw"
Feb 13 19:17:56.384695 ignition[945]: INFO : files: op(7): [started] processing unit "coreos-metadata.service"
Feb 13 19:17:56.387422 ignition[945]: INFO : files: op(7): op(8): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
Feb 13 19:17:56.387422 ignition[945]: INFO : files: op(7): op(8): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
Feb 13 19:17:56.387422 ignition[945]: INFO : files: op(7): [finished] processing unit "coreos-metadata.service"
Feb 13 19:17:56.387422 ignition[945]: INFO : files: op(9): [started] setting preset to disabled for "coreos-metadata.service"
Feb 13 19:17:56.401217 ignition[945]: INFO : files: op(9): op(a): [started] removing enablement symlink(s) for "coreos-metadata.service"
Feb 13 19:17:56.404564 ignition[945]: INFO : files: op(9): op(a): [finished] removing enablement symlink(s) for "coreos-metadata.service"
Feb 13 19:17:56.404564 ignition[945]: INFO : files: op(9): [finished] setting preset to disabled for "coreos-metadata.service"
Feb 13 19:17:56.407269 ignition[945]: INFO : files: createResultFile: createFiles: op(b): [started] writing file "/sysroot/etc/.ignition-result.json"
Feb 13 19:17:56.407269 ignition[945]: INFO : files: createResultFile: createFiles: op(b): [finished] writing file "/sysroot/etc/.ignition-result.json"
Feb 13 19:17:56.407269 ignition[945]: INFO : files: files passed
Feb 13 19:17:56.407269 ignition[945]: INFO : Ignition finished successfully
Feb 13 19:17:56.407666 systemd[1]: Finished ignition-files.service - Ignition (files).
Feb 13 19:17:56.417428 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
Feb 13 19:17:56.419400 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
Feb 13 19:17:56.421662 systemd[1]: ignition-quench.service: Deactivated successfully.
Feb 13 19:17:56.421779 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
Feb 13 19:17:56.426984 initrd-setup-root-after-ignition[975]: grep: /sysroot/oem/oem-release: No such file or directory
Feb 13 19:17:56.429247 initrd-setup-root-after-ignition[977]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Feb 13 19:17:56.429247 initrd-setup-root-after-ignition[977]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
Feb 13 19:17:56.431612 initrd-setup-root-after-ignition[981]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Feb 13 19:17:56.433758 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
Feb 13 19:17:56.434867 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
Feb 13 19:17:56.445469 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
Feb 13 19:17:56.463291 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Feb 13 19:17:56.463478 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
Feb 13 19:17:56.465073 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
Feb 13 19:17:56.480681 systemd[1]: Reached target initrd.target - Initrd Default Target.
Feb 13 19:17:56.482248 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
Feb 13 19:17:56.490429 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
Feb 13 19:17:56.505506 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Feb 13 19:17:56.507935 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
Feb 13 19:17:56.518742 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
Feb 13 19:17:56.519965 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
Feb 13 19:17:56.521759 systemd[1]: Stopped target timers.target - Timer Units.
Feb 13 19:17:56.523233 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Feb 13 19:17:56.523375 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Feb 13 19:17:56.525449 systemd[1]: Stopped target initrd.target - Initrd Default Target.
Feb 13 19:17:56.527194 systemd[1]: Stopped target basic.target - Basic System.
Feb 13 19:17:56.528629 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
Feb 13 19:17:56.530108 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup.
Feb 13 19:17:56.531809 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
Feb 13 19:17:56.533477 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
Feb 13 19:17:56.535146 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems.
Feb 13 19:17:56.536861 systemd[1]: Stopped target sysinit.target - System Initialization.
Feb 13 19:17:56.538500 systemd[1]: Stopped target local-fs.target - Local File Systems.
Feb 13 19:17:56.539969 systemd[1]: Stopped target swap.target - Swaps.
Feb 13 19:17:56.541331 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Feb 13 19:17:56.541466 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook.
Feb 13 19:17:56.543417 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes.
Feb 13 19:17:56.545168 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Feb 13 19:17:56.546904 systemd[1]: clevis-luks-askpass.path: Deactivated successfully.
Feb 13 19:17:56.551345 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Feb 13 19:17:56.553311 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Feb 13 19:17:56.553441 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook.
Feb 13 19:17:56.555557 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
Feb 13 19:17:56.555670 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline).
Feb 13 19:17:56.557203 systemd[1]: Stopped target paths.target - Path Units.
Feb 13 19:17:56.558458 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Feb 13 19:17:56.559872 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Feb 13 19:17:56.560870 systemd[1]: Stopped target slices.target - Slice Units.
Feb 13 19:17:56.562558 systemd[1]: Stopped target sockets.target - Socket Units.
Feb 13 19:17:56.563747 systemd[1]: iscsid.socket: Deactivated successfully.
Feb 13 19:17:56.563831 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket.
Feb 13 19:17:56.564993 systemd[1]: iscsiuio.socket: Deactivated successfully.
Feb 13 19:17:56.565071 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Feb 13 19:17:56.566198 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
Feb 13 19:17:56.566327 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion.
Feb 13 19:17:56.567619 systemd[1]: ignition-files.service: Deactivated successfully.
Feb 13 19:17:56.567721 systemd[1]: Stopped ignition-files.service - Ignition (files).
Feb 13 19:17:56.579499 systemd[1]: Stopping ignition-mount.service - Ignition (mount)...
Feb 13 19:17:56.580222 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Feb 13 19:17:56.580370 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes.
Feb 13 19:17:56.583084 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot...
Feb 13 19:17:56.583922 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Feb 13 19:17:56.584035 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices.
Feb 13 19:17:56.585599 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Feb 13 19:17:56.585725 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook.
Feb 13 19:17:56.591305 ignition[1002]: INFO : Ignition 2.20.0
Feb 13 19:17:56.591305 ignition[1002]: INFO : Stage: umount
Feb 13 19:17:56.593939 ignition[1002]: INFO : no configs at "/usr/lib/ignition/base.d"
Feb 13 19:17:56.593939 ignition[1002]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Feb 13 19:17:56.593939 ignition[1002]: INFO : umount: umount passed
Feb 13 19:17:56.593939 ignition[1002]: INFO : Ignition finished successfully
Feb 13 19:17:56.594929 systemd[1]: sysroot-boot.mount: Deactivated successfully.
Feb 13 19:17:56.595449 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Feb 13 19:17:56.595548 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons.
Feb 13 19:17:56.597840 systemd[1]: ignition-mount.service: Deactivated successfully.
Feb 13 19:17:56.597928 systemd[1]: Stopped ignition-mount.service - Ignition (mount).
Feb 13 19:17:56.601020 systemd[1]: Stopped target network.target - Network.
Feb 13 19:17:56.602216 systemd[1]: ignition-disks.service: Deactivated successfully.
Feb 13 19:17:56.602293 systemd[1]: Stopped ignition-disks.service - Ignition (disks).
Feb 13 19:17:56.603617 systemd[1]: ignition-kargs.service: Deactivated successfully.
Feb 13 19:17:56.603655 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs).
Feb 13 19:17:56.604989 systemd[1]: ignition-setup.service: Deactivated successfully.
Feb 13 19:17:56.605033 systemd[1]: Stopped ignition-setup.service - Ignition (setup).
Feb 13 19:17:56.606247 systemd[1]: ignition-setup-pre.service: Deactivated successfully.
Feb 13 19:17:56.606308 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup.
Feb 13 19:17:56.607815 systemd[1]: Stopping systemd-networkd.service - Network Configuration...
Feb 13 19:17:56.609128 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution...
Feb 13 19:17:56.612614 systemd[1]: systemd-resolved.service: Deactivated successfully.
Feb 13 19:17:56.612746 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution.
Feb 13 19:17:56.615395 systemd[1]: run-credentials-systemd\x2dresolved.service.mount: Deactivated successfully.
Feb 13 19:17:56.615671 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
Feb 13 19:17:56.615711 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories.
Feb 13 19:17:56.619097 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup.service.mount: Deactivated successfully.
Feb 13 19:17:56.619339 systemd[1]: systemd-networkd.service: Deactivated successfully.
Feb 13 19:17:56.619428 systemd[1]: Stopped systemd-networkd.service - Network Configuration.
Feb 13 19:17:56.622144 systemd[1]: systemd-networkd.socket: Deactivated successfully.
Feb 13 19:17:56.622205 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket.
Feb 13 19:17:56.630440 systemd[1]: Stopping network-cleanup.service - Network Cleanup...
Feb 13 19:17:56.631761 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
Feb 13 19:17:56.631823 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Feb 13 19:17:56.633351 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Feb 13 19:17:56.633390 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Feb 13 19:17:56.635897 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Feb 13 19:17:56.635938 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules.
Feb 13 19:17:56.636920 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files...
Feb 13 19:17:56.641947 systemd[1]: sysroot-boot.service: Deactivated successfully.
Feb 13 19:17:56.642087 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot.
Feb 13 19:17:56.643376 systemd[1]: initrd-setup-root.service: Deactivated successfully.
Feb 13 19:17:56.643416 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup.
Feb 13 19:17:56.645511 systemd[1]: network-cleanup.service: Deactivated successfully.
Feb 13 19:17:56.645626 systemd[1]: Stopped network-cleanup.service - Network Cleanup.
Feb 13 19:17:56.653964 systemd[1]: systemd-udevd.service: Deactivated successfully.
Feb 13 19:17:56.654111 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files.
Feb 13 19:17:56.655927 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Feb 13 19:17:56.655966 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket.
Feb 13 19:17:56.657110 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Feb 13 19:17:56.657142 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket.
Feb 13 19:17:56.658494 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Feb 13 19:17:56.658558 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook.
Feb 13 19:17:56.660758 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Feb 13 19:17:56.660803 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook.
Feb 13 19:17:56.662683 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Feb 13 19:17:56.662723 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Feb 13 19:17:56.677475 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database...
Feb 13 19:17:56.678265 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Feb 13 19:17:56.678340 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Feb 13 19:17:56.680123 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Feb 13 19:17:56.680167 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Feb 13 19:17:56.683553 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Feb 13 19:17:56.683635 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database.
Feb 13 19:17:56.685413 systemd[1]: Reached target initrd-switch-root.target - Switch Root.
Feb 13 19:17:56.688238 systemd[1]: Starting initrd-switch-root.service - Switch Root...
Feb 13 19:17:56.697311 systemd[1]: Switching root.
Feb 13 19:17:56.725270 systemd-journald[238]: Journal stopped
Feb 13 19:17:57.397177 systemd-journald[238]: Received SIGTERM from PID 1 (systemd).
Feb 13 19:17:57.397237 kernel: SELinux: policy capability network_peer_controls=1
Feb 13 19:17:57.397249 kernel: SELinux: policy capability open_perms=1
Feb 13 19:17:57.397258 kernel: SELinux: policy capability extended_socket_class=1
Feb 13 19:17:57.397290 kernel: SELinux: policy capability always_check_network=0
Feb 13 19:17:57.397301 kernel: SELinux: policy capability cgroup_seclabel=1
Feb 13 19:17:57.397310 kernel: SELinux: policy capability nnp_nosuid_transition=1
Feb 13 19:17:57.397319 kernel: SELinux: policy capability genfs_seclabel_symlinks=0
Feb 13 19:17:57.397329 kernel: SELinux: policy capability ioctl_skip_cloexec=0
Feb 13 19:17:57.397338 kernel: audit: type=1403 audit(1739474276.833:2): auid=4294967295 ses=4294967295 lsm=selinux res=1
Feb 13 19:17:57.397351 systemd[1]: Successfully loaded SELinux policy in 32.540ms.
Feb 13 19:17:57.397367 systemd[1]: Relabeled /dev/, /dev/shm/, /run/ in 9.583ms.
Feb 13 19:17:57.397379 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE)
Feb 13 19:17:57.397390 systemd[1]: Detected virtualization kvm.
Feb 13 19:17:57.397399 systemd[1]: Detected architecture arm64. Feb 13 19:17:57.397410 systemd[1]: Detected first boot. Feb 13 19:17:57.397419 systemd[1]: Initializing machine ID from VM UUID. Feb 13 19:17:57.397434 zram_generator::config[1051]: No configuration found. Feb 13 19:17:57.397445 kernel: NET: Registered PF_VSOCK protocol family Feb 13 19:17:57.397455 systemd[1]: Populated /etc with preset unit settings. Feb 13 19:17:57.397465 systemd[1]: run-credentials-systemd\x2djournald.service.mount: Deactivated successfully. Feb 13 19:17:57.397479 systemd[1]: initrd-switch-root.service: Deactivated successfully. Feb 13 19:17:57.397489 systemd[1]: Stopped initrd-switch-root.service - Switch Root. Feb 13 19:17:57.397499 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. Feb 13 19:17:57.397519 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. Feb 13 19:17:57.397532 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. Feb 13 19:17:57.397544 systemd[1]: Created slice system-getty.slice - Slice /system/getty. Feb 13 19:17:57.397558 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. Feb 13 19:17:57.397568 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. Feb 13 19:17:57.397579 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. Feb 13 19:17:57.397589 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. Feb 13 19:17:57.397600 systemd[1]: Created slice user.slice - User and Session Slice. Feb 13 19:17:57.397612 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Feb 13 19:17:57.397623 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. 
Feb 13 19:17:57.397633 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. Feb 13 19:17:57.397643 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. Feb 13 19:17:57.397654 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. Feb 13 19:17:57.397664 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Feb 13 19:17:57.397676 systemd[1]: Expecting device dev-ttyAMA0.device - /dev/ttyAMA0... Feb 13 19:17:57.397686 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Feb 13 19:17:57.397696 systemd[1]: Stopped target initrd-switch-root.target - Switch Root. Feb 13 19:17:57.397707 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems. Feb 13 19:17:57.397722 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System. Feb 13 19:17:57.397732 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. Feb 13 19:17:57.397742 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Feb 13 19:17:57.397753 systemd[1]: Reached target remote-fs.target - Remote File Systems. Feb 13 19:17:57.397763 systemd[1]: Reached target slices.target - Slice Units. Feb 13 19:17:57.397773 systemd[1]: Reached target swap.target - Swaps. Feb 13 19:17:57.397783 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. Feb 13 19:17:57.397795 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. Feb 13 19:17:57.397806 systemd[1]: Listening on systemd-creds.socket - Credential Encryption/Decryption. Feb 13 19:17:57.397816 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Feb 13 19:17:57.397826 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. 
Feb 13 19:17:57.397837 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Feb 13 19:17:57.397847 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. Feb 13 19:17:57.397856 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... Feb 13 19:17:57.397867 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... Feb 13 19:17:57.397877 systemd[1]: Mounting media.mount - External Media Directory... Feb 13 19:17:57.397889 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... Feb 13 19:17:57.397900 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... Feb 13 19:17:57.397910 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... Feb 13 19:17:57.397920 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Feb 13 19:17:57.397931 systemd[1]: Reached target machines.target - Containers. Feb 13 19:17:57.397940 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... Feb 13 19:17:57.397951 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Feb 13 19:17:57.397961 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Feb 13 19:17:57.397971 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... Feb 13 19:17:57.397983 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Feb 13 19:17:57.397993 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Feb 13 19:17:57.398003 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Feb 13 19:17:57.398013 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... Feb 13 19:17:57.398023 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... 
Feb 13 19:17:57.398033 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Feb 13 19:17:57.398043 systemd[1]: systemd-fsck-root.service: Deactivated successfully. Feb 13 19:17:57.398053 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device. Feb 13 19:17:57.398065 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. Feb 13 19:17:57.398075 systemd[1]: Stopped systemd-fsck-usr.service. Feb 13 19:17:57.398085 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Feb 13 19:17:57.398095 kernel: fuse: init (API version 7.39) Feb 13 19:17:57.398105 systemd[1]: Starting systemd-journald.service - Journal Service... Feb 13 19:17:57.398115 kernel: loop: module loaded Feb 13 19:17:57.398125 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Feb 13 19:17:57.398136 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Feb 13 19:17:57.398145 kernel: ACPI: bus type drm_connector registered Feb 13 19:17:57.398157 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... Feb 13 19:17:57.398167 systemd[1]: Starting systemd-udev-load-credentials.service - Load udev Rules from Credentials... Feb 13 19:17:57.398177 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Feb 13 19:17:57.398187 systemd[1]: verity-setup.service: Deactivated successfully. Feb 13 19:17:57.398198 systemd[1]: Stopped verity-setup.service. Feb 13 19:17:57.398208 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. Feb 13 19:17:57.398218 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. 
Feb 13 19:17:57.398228 systemd[1]: Mounted media.mount - External Media Directory. Feb 13 19:17:57.398238 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. Feb 13 19:17:57.398248 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. Feb 13 19:17:57.398258 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. Feb 13 19:17:57.398268 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. Feb 13 19:17:57.398306 systemd-journald[1121]: Collecting audit messages is disabled. Feb 13 19:17:57.398330 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Feb 13 19:17:57.398341 systemd-journald[1121]: Journal started Feb 13 19:17:57.398360 systemd-journald[1121]: Runtime Journal (/run/log/journal/ac3b0616d64849b3b2ccd2ef46bd8729) is 5.9M, max 47.3M, 41.4M free. Feb 13 19:17:57.216963 systemd[1]: Queued start job for default target multi-user.target. Feb 13 19:17:57.227227 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6. Feb 13 19:17:57.227602 systemd[1]: systemd-journald.service: Deactivated successfully. Feb 13 19:17:57.403418 systemd[1]: Started systemd-journald.service - Journal Service. Feb 13 19:17:57.402457 systemd[1]: modprobe@configfs.service: Deactivated successfully. Feb 13 19:17:57.402627 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. Feb 13 19:17:57.403903 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Feb 13 19:17:57.404149 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Feb 13 19:17:57.405467 systemd[1]: modprobe@drm.service: Deactivated successfully. Feb 13 19:17:57.405721 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Feb 13 19:17:57.406794 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Feb 13 19:17:57.407014 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. 
Feb 13 19:17:57.408237 systemd[1]: modprobe@fuse.service: Deactivated successfully. Feb 13 19:17:57.408506 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. Feb 13 19:17:57.409784 systemd[1]: modprobe@loop.service: Deactivated successfully. Feb 13 19:17:57.410015 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Feb 13 19:17:57.411184 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Feb 13 19:17:57.412531 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Feb 13 19:17:57.413799 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. Feb 13 19:17:57.415066 systemd[1]: Finished systemd-udev-load-credentials.service - Load udev Rules from Credentials. Feb 13 19:17:57.426803 systemd[1]: Reached target network-pre.target - Preparation for Network. Feb 13 19:17:57.432355 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... Feb 13 19:17:57.434092 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... Feb 13 19:17:57.434972 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Feb 13 19:17:57.435003 systemd[1]: Reached target local-fs.target - Local File Systems. Feb 13 19:17:57.436673 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management. Feb 13 19:17:57.438526 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... Feb 13 19:17:57.440289 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... Feb 13 19:17:57.441128 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Feb 13 19:17:57.442222 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... 
Feb 13 19:17:57.444011 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... Feb 13 19:17:57.444989 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Feb 13 19:17:57.446138 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... Feb 13 19:17:57.447066 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Feb 13 19:17:57.451479 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Feb 13 19:17:57.455533 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... Feb 13 19:17:57.457297 systemd-journald[1121]: Time spent on flushing to /var/log/journal/ac3b0616d64849b3b2ccd2ef46bd8729 is 13.571ms for 848 entries. Feb 13 19:17:57.457297 systemd-journald[1121]: System Journal (/var/log/journal/ac3b0616d64849b3b2ccd2ef46bd8729) is 8M, max 195.6M, 187.6M free. Feb 13 19:17:57.478135 systemd-journald[1121]: Received client request to flush runtime journal. Feb 13 19:17:57.457720 systemd[1]: Starting systemd-sysusers.service - Create System Users... Feb 13 19:17:57.465343 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Feb 13 19:17:57.466458 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. Feb 13 19:17:57.467548 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. Feb 13 19:17:57.468918 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Feb 13 19:17:57.470146 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. Feb 13 19:17:57.474212 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. 
Feb 13 19:17:57.480464 kernel: loop0: detected capacity change from 0 to 123192 Feb 13 19:17:57.487538 systemd[1]: Starting systemd-machine-id-commit.service - Save Transient machine-id to Disk... Feb 13 19:17:57.490455 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization... Feb 13 19:17:57.491896 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. Feb 13 19:17:57.497298 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Feb 13 19:17:57.493363 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Feb 13 19:17:57.494428 systemd[1]: Finished systemd-sysusers.service - Create System Users. Feb 13 19:17:57.499092 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Feb 13 19:17:57.506451 udevadm[1181]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation.service, lvm2-activation-early.service not to pull it in. Feb 13 19:17:57.521705 systemd[1]: Finished systemd-machine-id-commit.service - Save Transient machine-id to Disk. Feb 13 19:17:57.535297 kernel: loop1: detected capacity change from 0 to 113512 Feb 13 19:17:57.537236 systemd-tmpfiles[1187]: ACLs are not supported, ignoring. Feb 13 19:17:57.537253 systemd-tmpfiles[1187]: ACLs are not supported, ignoring. Feb 13 19:17:57.541682 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Feb 13 19:17:57.588498 kernel: loop2: detected capacity change from 0 to 201592 Feb 13 19:17:57.624398 kernel: loop3: detected capacity change from 0 to 123192 Feb 13 19:17:57.631325 kernel: loop4: detected capacity change from 0 to 113512 Feb 13 19:17:57.637316 kernel: loop5: detected capacity change from 0 to 201592 Feb 13 19:17:57.644119 (sd-merge)[1194]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes'. Feb 13 19:17:57.644667 (sd-merge)[1194]: Merged extensions into '/usr'. 
Feb 13 19:17:57.650037 systemd[1]: Reload requested from client PID 1168 ('systemd-sysext') (unit systemd-sysext.service)... Feb 13 19:17:57.650058 systemd[1]: Reloading... Feb 13 19:17:57.719875 zram_generator::config[1222]: No configuration found. Feb 13 19:17:57.720269 ldconfig[1163]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Feb 13 19:17:57.808636 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Feb 13 19:17:57.858490 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Feb 13 19:17:57.858703 systemd[1]: Reloading finished in 208 ms. Feb 13 19:17:57.880837 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. Feb 13 19:17:57.882095 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. Feb 13 19:17:57.901130 systemd[1]: Starting ensure-sysext.service... Feb 13 19:17:57.903016 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Feb 13 19:17:57.913405 systemd[1]: Reload requested from client PID 1256 ('systemctl') (unit ensure-sysext.service)... Feb 13 19:17:57.913421 systemd[1]: Reloading... Feb 13 19:17:57.919768 systemd-tmpfiles[1257]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Feb 13 19:17:57.920343 systemd-tmpfiles[1257]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. Feb 13 19:17:57.921074 systemd-tmpfiles[1257]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Feb 13 19:17:57.921403 systemd-tmpfiles[1257]: ACLs are not supported, ignoring. Feb 13 19:17:57.921553 systemd-tmpfiles[1257]: ACLs are not supported, ignoring. 
Feb 13 19:17:57.924180 systemd-tmpfiles[1257]: Detected autofs mount point /boot during canonicalization of boot. Feb 13 19:17:57.924301 systemd-tmpfiles[1257]: Skipping /boot Feb 13 19:17:57.932930 systemd-tmpfiles[1257]: Detected autofs mount point /boot during canonicalization of boot. Feb 13 19:17:57.933048 systemd-tmpfiles[1257]: Skipping /boot Feb 13 19:17:57.957350 zram_generator::config[1282]: No configuration found. Feb 13 19:17:58.042061 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Feb 13 19:17:58.091517 systemd[1]: Reloading finished in 177 ms. Feb 13 19:17:58.104812 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. Feb 13 19:17:58.124310 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Feb 13 19:17:58.132000 systemd[1]: Starting audit-rules.service - Load Audit Rules... Feb 13 19:17:58.134393 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... Feb 13 19:17:58.136566 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... Feb 13 19:17:58.140656 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Feb 13 19:17:58.148609 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Feb 13 19:17:58.151407 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... Feb 13 19:17:58.155193 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Feb 13 19:17:58.158546 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Feb 13 19:17:58.160611 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... 
Feb 13 19:17:58.165805 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Feb 13 19:17:58.166811 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Feb 13 19:17:58.167070 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Feb 13 19:17:58.171056 systemd[1]: Starting systemd-userdbd.service - User Database Manager... Feb 13 19:17:58.174463 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Feb 13 19:17:58.175860 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Feb 13 19:17:58.176033 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Feb 13 19:17:58.177552 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Feb 13 19:17:58.177702 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Feb 13 19:17:58.182729 systemd[1]: modprobe@loop.service: Deactivated successfully. Feb 13 19:17:58.182916 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Feb 13 19:17:58.188325 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Feb 13 19:17:58.200639 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Feb 13 19:17:58.204541 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Feb 13 19:17:58.204638 systemd-udevd[1327]: Using default interface naming scheme 'v255'. Feb 13 19:17:58.206994 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Feb 13 19:17:58.207926 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. 
Feb 13 19:17:58.208086 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Feb 13 19:17:58.211911 systemd[1]: Starting systemd-update-done.service - Update is Completed... Feb 13 19:17:58.214861 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Feb 13 19:17:58.215347 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Feb 13 19:17:58.217954 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Feb 13 19:17:58.219036 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Feb 13 19:17:58.221221 systemd[1]: Started systemd-userdbd.service - User Database Manager. Feb 13 19:17:58.222766 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Feb 13 19:17:58.224377 systemd[1]: modprobe@loop.service: Deactivated successfully. Feb 13 19:17:58.224554 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Feb 13 19:17:58.225914 systemd[1]: Finished systemd-update-done.service - Update is Completed. Feb 13 19:17:58.231151 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. Feb 13 19:17:58.238527 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Feb 13 19:17:58.246207 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Feb 13 19:17:58.253823 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Feb 13 19:17:58.255899 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Feb 13 19:17:58.258062 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Feb 13 19:17:58.262180 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... 
Feb 13 19:17:58.263576 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Feb 13 19:17:58.263699 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Feb 13 19:17:58.267045 systemd[1]: Starting systemd-networkd.service - Network Configuration... Feb 13 19:17:58.268328 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Feb 13 19:17:58.269375 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Feb 13 19:17:58.271317 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Feb 13 19:17:58.272569 augenrules[1390]: No rules Feb 13 19:17:58.273402 systemd[1]: audit-rules.service: Deactivated successfully. Feb 13 19:17:58.273719 systemd[1]: Finished audit-rules.service - Load Audit Rules. Feb 13 19:17:58.275135 systemd[1]: modprobe@drm.service: Deactivated successfully. Feb 13 19:17:58.275433 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Feb 13 19:17:58.276714 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Feb 13 19:17:58.276880 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Feb 13 19:17:58.279078 systemd[1]: modprobe@loop.service: Deactivated successfully. Feb 13 19:17:58.279259 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Feb 13 19:17:58.283341 systemd[1]: Finished ensure-sysext.service. Feb 13 19:17:58.295389 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). 
Feb 13 19:17:58.295450 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Feb 13 19:17:58.298524 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization... Feb 13 19:17:58.301664 systemd-resolved[1325]: Positive Trust Anchors: Feb 13 19:17:58.303461 systemd-resolved[1325]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Feb 13 19:17:58.303496 systemd-resolved[1325]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Feb 13 19:17:58.310842 systemd-resolved[1325]: Defaulting to hostname 'linux'. Feb 13 19:17:58.319354 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Feb 13 19:17:58.320564 systemd[1]: Condition check resulted in dev-ttyAMA0.device - /dev/ttyAMA0 being skipped. Feb 13 19:17:58.320596 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Feb 13 19:17:58.358990 systemd-networkd[1396]: lo: Link UP Feb 13 19:17:58.359035 systemd-networkd[1396]: lo: Gained carrier Feb 13 19:17:58.360625 systemd-networkd[1396]: Enumeration completed Feb 13 19:17:58.360722 systemd[1]: Started systemd-networkd.service - Network Configuration. Feb 13 19:17:58.361285 systemd-networkd[1396]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Feb 13 19:17:58.361292 systemd-networkd[1396]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. 
Feb 13 19:17:58.362028 systemd-networkd[1396]: eth0: Link UP Feb 13 19:17:58.362035 systemd-networkd[1396]: eth0: Gained carrier Feb 13 19:17:58.362048 systemd-networkd[1396]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Feb 13 19:17:58.362377 systemd[1]: Reached target network.target - Network. Feb 13 19:17:58.371325 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 38 scanned by (udev-worker) (1373) Feb 13 19:17:58.376556 systemd[1]: Starting systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd... Feb 13 19:17:58.379350 systemd-networkd[1396]: eth0: DHCPv4 address 10.0.0.114/16, gateway 10.0.0.1 acquired from 10.0.0.1 Feb 13 19:17:58.379484 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Feb 13 19:17:58.380422 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization. Feb 13 19:17:58.380610 systemd-timesyncd[1405]: Network configuration changed, trying to establish connection. Feb 13 19:17:58.383164 systemd-timesyncd[1405]: Contacted time server 10.0.0.1:123 (10.0.0.1). Feb 13 19:17:58.383204 systemd-timesyncd[1405]: Initial clock synchronization to Thu 2025-02-13 19:17:58.623115 UTC. Feb 13 19:17:58.390598 systemd[1]: Reached target time-set.target - System Time Set. Feb 13 19:17:58.400758 systemd[1]: Finished systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd. Feb 13 19:17:58.406623 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. Feb 13 19:17:58.412487 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... Feb 13 19:17:58.424017 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. 
Feb 13 19:17:58.438571 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Feb 13 19:17:58.451451 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization. Feb 13 19:17:58.460481 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes... Feb 13 19:17:58.470737 lvm[1427]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Feb 13 19:17:58.475447 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Feb 13 19:17:58.502911 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes. Feb 13 19:17:58.504080 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Feb 13 19:17:58.506379 systemd[1]: Reached target sysinit.target - System Initialization. Feb 13 19:17:58.507199 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Feb 13 19:17:58.508136 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Feb 13 19:17:58.509255 systemd[1]: Started logrotate.timer - Daily rotation of log files. Feb 13 19:17:58.510126 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Feb 13 19:17:58.511090 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Feb 13 19:17:58.512147 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Feb 13 19:17:58.512184 systemd[1]: Reached target paths.target - Path Units. Feb 13 19:17:58.512874 systemd[1]: Reached target timers.target - Timer Units. Feb 13 19:17:58.514086 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Feb 13 19:17:58.516225 systemd[1]: Starting docker.socket - Docker Socket for the API... 
Feb 13 19:17:58.519176 systemd[1]: Listening on sshd-unix-local.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_UNIX Local). Feb 13 19:17:58.520377 systemd[1]: Listening on sshd-vsock.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_VSOCK). Feb 13 19:17:58.521301 systemd[1]: Reached target ssh-access.target - SSH Access Available. Feb 13 19:17:58.529257 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Feb 13 19:17:58.530815 systemd[1]: Listening on systemd-hostnamed.socket - Hostname Service Socket. Feb 13 19:17:58.533011 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes... Feb 13 19:17:58.534437 systemd[1]: Listening on docker.socket - Docker Socket for the API. Feb 13 19:17:58.535330 systemd[1]: Reached target sockets.target - Socket Units. Feb 13 19:17:58.536032 systemd[1]: Reached target basic.target - Basic System. Feb 13 19:17:58.536747 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Feb 13 19:17:58.536780 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Feb 13 19:17:58.537713 systemd[1]: Starting containerd.service - containerd container runtime... Feb 13 19:17:58.539468 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Feb 13 19:17:58.542351 lvm[1434]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Feb 13 19:17:58.542826 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Feb 13 19:17:58.547517 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Feb 13 19:17:58.548612 jq[1437]: false Feb 13 19:17:58.548266 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Feb 13 19:17:58.549311 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... 
Feb 13 19:17:58.553326 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Feb 13 19:17:58.557521 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Feb 13 19:17:58.562327 extend-filesystems[1438]: Found loop3 Feb 13 19:17:58.562327 extend-filesystems[1438]: Found loop4 Feb 13 19:17:58.562327 extend-filesystems[1438]: Found loop5 Feb 13 19:17:58.562327 extend-filesystems[1438]: Found vda Feb 13 19:17:58.562327 extend-filesystems[1438]: Found vda1 Feb 13 19:17:58.562327 extend-filesystems[1438]: Found vda2 Feb 13 19:17:58.562327 extend-filesystems[1438]: Found vda3 Feb 13 19:17:58.562327 extend-filesystems[1438]: Found usr Feb 13 19:17:58.562327 extend-filesystems[1438]: Found vda4 Feb 13 19:17:58.562327 extend-filesystems[1438]: Found vda6 Feb 13 19:17:58.562327 extend-filesystems[1438]: Found vda7 Feb 13 19:17:58.562327 extend-filesystems[1438]: Found vda9 Feb 13 19:17:58.562327 extend-filesystems[1438]: Checking size of /dev/vda9 Feb 13 19:17:58.563211 systemd[1]: Starting systemd-logind.service - User Login Management... Feb 13 19:17:58.564099 dbus-daemon[1436]: [system] SELinux support is enabled Feb 13 19:17:58.565692 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Feb 13 19:17:58.566159 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Feb 13 19:17:58.567131 systemd[1]: Starting update-engine.service - Update Engine... Feb 13 19:17:58.576456 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Feb 13 19:17:58.578097 systemd[1]: Started dbus.service - D-Bus System Message Bus. 
Feb 13 19:17:58.579395 extend-filesystems[1438]: Resized partition /dev/vda9 Feb 13 19:17:58.586369 jq[1454]: true Feb 13 19:17:58.581713 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes. Feb 13 19:17:58.588080 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Feb 13 19:17:58.588252 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Feb 13 19:17:58.588663 systemd[1]: motdgen.service: Deactivated successfully. Feb 13 19:17:58.588826 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Feb 13 19:17:58.589993 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Feb 13 19:17:58.590162 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. Feb 13 19:17:58.596119 extend-filesystems[1458]: resize2fs 1.47.1 (20-May-2024) Feb 13 19:17:58.604385 kernel: EXT4-fs (vda9): resizing filesystem from 553472 to 1864699 blocks Feb 13 19:17:58.604245 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Feb 13 19:17:58.605057 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Feb 13 19:17:58.605978 (ntainerd)[1465]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Feb 13 19:17:58.606332 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Feb 13 19:17:58.606354 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. 
Feb 13 19:17:58.611028 jq[1460]: true Feb 13 19:17:58.616404 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 38 scanned by (udev-worker) (1377) Feb 13 19:17:58.616473 update_engine[1450]: I20250213 19:17:58.615207 1450 main.cc:92] Flatcar Update Engine starting Feb 13 19:17:58.620842 update_engine[1450]: I20250213 19:17:58.620787 1450 update_check_scheduler.cc:74] Next update check in 7m30s Feb 13 19:17:58.627093 systemd[1]: Started update-engine.service - Update Engine. Feb 13 19:17:58.633477 systemd[1]: Started locksmithd.service - Cluster reboot manager. Feb 13 19:17:58.637308 kernel: EXT4-fs (vda9): resized filesystem to 1864699 Feb 13 19:17:58.658131 extend-filesystems[1458]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required Feb 13 19:17:58.658131 extend-filesystems[1458]: old_desc_blocks = 1, new_desc_blocks = 1 Feb 13 19:17:58.658131 extend-filesystems[1458]: The filesystem on /dev/vda9 is now 1864699 (4k) blocks long. Feb 13 19:17:58.660918 extend-filesystems[1438]: Resized filesystem in /dev/vda9 Feb 13 19:17:58.660492 systemd[1]: extend-filesystems.service: Deactivated successfully. Feb 13 19:17:58.660726 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Feb 13 19:17:58.671023 systemd-logind[1445]: Watching system buttons on /dev/input/event0 (Power Button) Feb 13 19:17:58.673037 systemd-logind[1445]: New seat seat0. Feb 13 19:17:58.677358 systemd[1]: Started systemd-logind.service - User Login Management. Feb 13 19:17:58.679992 bash[1486]: Updated "/home/core/.ssh/authorized_keys" Feb 13 19:17:58.682395 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Feb 13 19:17:58.684355 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met. 
Feb 13 19:17:58.687977 locksmithd[1471]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot"
Feb 13 19:17:58.787488 containerd[1465]: time="2025-02-13T19:17:58.787404040Z" level=info msg="starting containerd" revision=9b2ad7760328148397346d10c7b2004271249db4 version=v1.7.23
Feb 13 19:17:58.813770 containerd[1465]: time="2025-02-13T19:17:58.813670200Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
Feb 13 19:17:58.815102 containerd[1465]: time="2025-02-13T19:17:58.815068160Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.74-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1
Feb 13 19:17:58.815102 containerd[1465]: time="2025-02-13T19:17:58.815100120Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
Feb 13 19:17:58.815147 containerd[1465]: time="2025-02-13T19:17:58.815115920Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
Feb 13 19:17:58.815306 containerd[1465]: time="2025-02-13T19:17:58.815269880Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
Feb 13 19:17:58.815341 containerd[1465]: time="2025-02-13T19:17:58.815308960Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
Feb 13 19:17:58.815383 containerd[1465]: time="2025-02-13T19:17:58.815364960Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
Feb 13 19:17:58.815383 containerd[1465]: time="2025-02-13T19:17:58.815379560Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
Feb 13 19:17:58.815597 containerd[1465]: time="2025-02-13T19:17:58.815575360Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
Feb 13 19:17:58.815597 containerd[1465]: time="2025-02-13T19:17:58.815595480Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
Feb 13 19:17:58.815637 containerd[1465]: time="2025-02-13T19:17:58.815608200Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1
Feb 13 19:17:58.815637 containerd[1465]: time="2025-02-13T19:17:58.815617640Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
Feb 13 19:17:58.815704 containerd[1465]: time="2025-02-13T19:17:58.815687800Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
Feb 13 19:17:58.815985 containerd[1465]: time="2025-02-13T19:17:58.815962520Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
Feb 13 19:17:58.816112 containerd[1465]: time="2025-02-13T19:17:58.816091720Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
Feb 13 19:17:58.816112 containerd[1465]: time="2025-02-13T19:17:58.816110000Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
Feb 13 19:17:58.816199 containerd[1465]: time="2025-02-13T19:17:58.816182680Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
Feb 13 19:17:58.816241 containerd[1465]: time="2025-02-13T19:17:58.816227320Z" level=info msg="metadata content store policy set" policy=shared
Feb 13 19:17:58.820526 containerd[1465]: time="2025-02-13T19:17:58.820490440Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
Feb 13 19:17:58.820573 containerd[1465]: time="2025-02-13T19:17:58.820544560Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
Feb 13 19:17:58.820591 containerd[1465]: time="2025-02-13T19:17:58.820571800Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
Feb 13 19:17:58.820591 containerd[1465]: time="2025-02-13T19:17:58.820587400Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
Feb 13 19:17:58.820642 containerd[1465]: time="2025-02-13T19:17:58.820600720Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
Feb 13 19:17:58.821306 containerd[1465]: time="2025-02-13T19:17:58.820752560Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
Feb 13 19:17:58.821306 containerd[1465]: time="2025-02-13T19:17:58.820972480Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
Feb 13 19:17:58.821306 containerd[1465]: time="2025-02-13T19:17:58.821058960Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
Feb 13 19:17:58.821306 containerd[1465]: time="2025-02-13T19:17:58.821073960Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
Feb 13 19:17:58.821306 containerd[1465]: time="2025-02-13T19:17:58.821088440Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
Feb 13 19:17:58.821306 containerd[1465]: time="2025-02-13T19:17:58.821101480Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
Feb 13 19:17:58.821306 containerd[1465]: time="2025-02-13T19:17:58.821113480Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
Feb 13 19:17:58.821306 containerd[1465]: time="2025-02-13T19:17:58.821126480Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
Feb 13 19:17:58.821306 containerd[1465]: time="2025-02-13T19:17:58.821138720Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
Feb 13 19:17:58.821306 containerd[1465]: time="2025-02-13T19:17:58.821151960Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
Feb 13 19:17:58.821306 containerd[1465]: time="2025-02-13T19:17:58.821163440Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
Feb 13 19:17:58.821306 containerd[1465]: time="2025-02-13T19:17:58.821174680Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
Feb 13 19:17:58.821306 containerd[1465]: time="2025-02-13T19:17:58.821185760Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
Feb 13 19:17:58.821306 containerd[1465]: time="2025-02-13T19:17:58.821210000Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
Feb 13 19:17:58.821618 containerd[1465]: time="2025-02-13T19:17:58.821224080Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
Feb 13 19:17:58.821618 containerd[1465]: time="2025-02-13T19:17:58.821236040Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
Feb 13 19:17:58.821618 containerd[1465]: time="2025-02-13T19:17:58.821248040Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
Feb 13 19:17:58.821618 containerd[1465]: time="2025-02-13T19:17:58.821259680Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
Feb 13 19:17:58.821618 containerd[1465]: time="2025-02-13T19:17:58.821288480Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
Feb 13 19:17:58.821618 containerd[1465]: time="2025-02-13T19:17:58.821301200Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
Feb 13 19:17:58.821618 containerd[1465]: time="2025-02-13T19:17:58.821313960Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
Feb 13 19:17:58.821618 containerd[1465]: time="2025-02-13T19:17:58.821327240Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
Feb 13 19:17:58.821618 containerd[1465]: time="2025-02-13T19:17:58.821340920Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
Feb 13 19:17:58.821618 containerd[1465]: time="2025-02-13T19:17:58.821352960Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
Feb 13 19:17:58.821618 containerd[1465]: time="2025-02-13T19:17:58.821367840Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
Feb 13 19:17:58.821618 containerd[1465]: time="2025-02-13T19:17:58.821379960Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
Feb 13 19:17:58.821618 containerd[1465]: time="2025-02-13T19:17:58.821393560Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
Feb 13 19:17:58.821618 containerd[1465]: time="2025-02-13T19:17:58.821413200Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
Feb 13 19:17:58.821618 containerd[1465]: time="2025-02-13T19:17:58.821438880Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
Feb 13 19:17:58.821860 containerd[1465]: time="2025-02-13T19:17:58.821452560Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
Feb 13 19:17:58.821860 containerd[1465]: time="2025-02-13T19:17:58.821624920Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
Feb 13 19:17:58.821860 containerd[1465]: time="2025-02-13T19:17:58.821642880Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1
Feb 13 19:17:58.821860 containerd[1465]: time="2025-02-13T19:17:58.821652440Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
Feb 13 19:17:58.821860 containerd[1465]: time="2025-02-13T19:17:58.821663880Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1
Feb 13 19:17:58.821860 containerd[1465]: time="2025-02-13T19:17:58.821672400Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
Feb 13 19:17:58.821860 containerd[1465]: time="2025-02-13T19:17:58.821683600Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
Feb 13 19:17:58.821860 containerd[1465]: time="2025-02-13T19:17:58.821693960Z" level=info msg="NRI interface is disabled by configuration."
Feb 13 19:17:58.821860 containerd[1465]: time="2025-02-13T19:17:58.821708080Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1
Feb 13 19:17:58.822088 containerd[1465]: time="2025-02-13T19:17:58.822019320Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}"
Feb 13 19:17:58.822088 containerd[1465]: time="2025-02-13T19:17:58.822073040Z" level=info msg="Connect containerd service"
Feb 13 19:17:58.822211 containerd[1465]: time="2025-02-13T19:17:58.822109280Z" level=info msg="using legacy CRI server"
Feb 13 19:17:58.822211 containerd[1465]: time="2025-02-13T19:17:58.822116480Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this"
Feb 13 19:17:58.822392 containerd[1465]: time="2025-02-13T19:17:58.822357520Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\""
Feb 13 19:17:58.823115 containerd[1465]: time="2025-02-13T19:17:58.823071600Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
Feb 13 19:17:58.823346 containerd[1465]: time="2025-02-13T19:17:58.823306400Z" level=info msg="Start subscribing containerd event"
Feb 13 19:17:58.823376 containerd[1465]: time="2025-02-13T19:17:58.823368160Z" level=info msg="Start recovering state"
Feb 13 19:17:58.823540 containerd[1465]: time="2025-02-13T19:17:58.823433840Z" level=info msg="Start event monitor"
Feb 13 19:17:58.823540 containerd[1465]: time="2025-02-13T19:17:58.823449160Z" level=info msg="Start snapshots syncer"
Feb 13 19:17:58.823540 containerd[1465]: time="2025-02-13T19:17:58.823459080Z" level=info msg="Start cni network conf syncer for default"
Feb 13 19:17:58.823540 containerd[1465]: time="2025-02-13T19:17:58.823466240Z" level=info msg="Start streaming server"
Feb 13 19:17:58.823681 containerd[1465]: time="2025-02-13T19:17:58.823660160Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc
Feb 13 19:17:58.823709 containerd[1465]: time="2025-02-13T19:17:58.823703760Z" level=info msg=serving... address=/run/containerd/containerd.sock
Feb 13 19:17:58.823833 systemd[1]: Started containerd.service - containerd container runtime.
Feb 13 19:17:58.824980 containerd[1465]: time="2025-02-13T19:17:58.824947720Z" level=info msg="containerd successfully booted in 0.038378s"
Feb 13 19:17:58.837208 sshd_keygen[1455]: ssh-keygen: generating new host keys: RSA ECDSA ED25519
Feb 13 19:17:58.855308 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys.
Feb 13 19:17:58.866528 systemd[1]: Starting issuegen.service - Generate /run/issue...
Feb 13 19:17:58.871147 systemd[1]: issuegen.service: Deactivated successfully.
Feb 13 19:17:58.871376 systemd[1]: Finished issuegen.service - Generate /run/issue.
Feb 13 19:17:58.874143 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions...
Feb 13 19:17:58.887335 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions.
Feb 13 19:17:58.889795 systemd[1]: Started getty@tty1.service - Getty on tty1. Feb 13 19:17:58.891572 systemd[1]: Started serial-getty@ttyAMA0.service - Serial Getty on ttyAMA0. Feb 13 19:17:58.892597 systemd[1]: Reached target getty.target - Login Prompts. Feb 13 19:17:59.842048 systemd-networkd[1396]: eth0: Gained IPv6LL Feb 13 19:17:59.844931 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Feb 13 19:17:59.846926 systemd[1]: Reached target network-online.target - Network is Online. Feb 13 19:17:59.856663 systemd[1]: Starting coreos-metadata.service - QEMU metadata agent... Feb 13 19:17:59.858915 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Feb 13 19:17:59.860924 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Feb 13 19:17:59.908576 systemd[1]: coreos-metadata.service: Deactivated successfully. Feb 13 19:17:59.908801 systemd[1]: Finished coreos-metadata.service - QEMU metadata agent. Feb 13 19:17:59.911739 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Feb 13 19:17:59.913563 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Feb 13 19:18:00.471251 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Feb 13 19:18:00.472619 systemd[1]: Reached target multi-user.target - Multi-User System. Feb 13 19:18:00.473546 systemd[1]: Startup finished in 599ms (kernel) + 4.134s (initrd) + 3.673s (userspace) = 8.407s. 
Feb 13 19:18:00.475907 (kubelet)[1541]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Feb 13 19:18:00.940376 kubelet[1541]: E0213 19:18:00.940216 1541 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Feb 13 19:18:00.942811 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Feb 13 19:18:00.942960 systemd[1]: kubelet.service: Failed with result 'exit-code'. Feb 13 19:18:00.943312 systemd[1]: kubelet.service: Consumed 783ms CPU time, 248.3M memory peak. Feb 13 19:18:05.336879 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Feb 13 19:18:05.338250 systemd[1]: Started sshd@0-10.0.0.114:22-10.0.0.1:51356.service - OpenSSH per-connection server daemon (10.0.0.1:51356). Feb 13 19:18:05.406796 sshd[1554]: Accepted publickey for core from 10.0.0.1 port 51356 ssh2: RSA SHA256:QyQQN4NlJHXH6/vW7NxDLOKgT/2dxBjCkGLAHoHnd3w Feb 13 19:18:05.410385 sshd-session[1554]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 19:18:05.421685 systemd-logind[1445]: New session 1 of user core. Feb 13 19:18:05.422639 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Feb 13 19:18:05.433526 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Feb 13 19:18:05.441899 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Feb 13 19:18:05.443945 systemd[1]: Starting user@500.service - User Manager for UID 500... 
Feb 13 19:18:05.450434 (systemd)[1558]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Feb 13 19:18:05.452404 systemd-logind[1445]: New session c1 of user core. Feb 13 19:18:05.564635 systemd[1558]: Queued start job for default target default.target. Feb 13 19:18:05.576268 systemd[1558]: Created slice app.slice - User Application Slice. Feb 13 19:18:05.576317 systemd[1558]: Reached target paths.target - Paths. Feb 13 19:18:05.576362 systemd[1558]: Reached target timers.target - Timers. Feb 13 19:18:05.577697 systemd[1558]: Starting dbus.socket - D-Bus User Message Bus Socket... Feb 13 19:18:05.591518 systemd[1558]: Listening on dbus.socket - D-Bus User Message Bus Socket. Feb 13 19:18:05.591634 systemd[1558]: Reached target sockets.target - Sockets. Feb 13 19:18:05.591671 systemd[1558]: Reached target basic.target - Basic System. Feb 13 19:18:05.591716 systemd[1558]: Reached target default.target - Main User Target. Feb 13 19:18:05.591745 systemd[1558]: Startup finished in 134ms. Feb 13 19:18:05.592601 systemd[1]: Started user@500.service - User Manager for UID 500. Feb 13 19:18:05.595430 systemd[1]: Started session-1.scope - Session 1 of User core. Feb 13 19:18:05.658480 systemd[1]: Started sshd@1-10.0.0.114:22-10.0.0.1:51360.service - OpenSSH per-connection server daemon (10.0.0.1:51360). Feb 13 19:18:05.712473 sshd[1569]: Accepted publickey for core from 10.0.0.1 port 51360 ssh2: RSA SHA256:QyQQN4NlJHXH6/vW7NxDLOKgT/2dxBjCkGLAHoHnd3w Feb 13 19:18:05.713730 sshd-session[1569]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 19:18:05.718326 systemd-logind[1445]: New session 2 of user core. Feb 13 19:18:05.728518 systemd[1]: Started session-2.scope - Session 2 of User core. 
Feb 13 19:18:05.780851 sshd[1571]: Connection closed by 10.0.0.1 port 51360 Feb 13 19:18:05.782015 sshd-session[1569]: pam_unix(sshd:session): session closed for user core Feb 13 19:18:05.793255 systemd[1]: Started sshd@2-10.0.0.114:22-10.0.0.1:51366.service - OpenSSH per-connection server daemon (10.0.0.1:51366). Feb 13 19:18:05.793740 systemd[1]: sshd@1-10.0.0.114:22-10.0.0.1:51360.service: Deactivated successfully. Feb 13 19:18:05.796742 systemd[1]: session-2.scope: Deactivated successfully. Feb 13 19:18:05.798004 systemd-logind[1445]: Session 2 logged out. Waiting for processes to exit. Feb 13 19:18:05.799092 systemd-logind[1445]: Removed session 2. Feb 13 19:18:05.842375 sshd[1574]: Accepted publickey for core from 10.0.0.1 port 51366 ssh2: RSA SHA256:QyQQN4NlJHXH6/vW7NxDLOKgT/2dxBjCkGLAHoHnd3w Feb 13 19:18:05.843735 sshd-session[1574]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 19:18:05.848553 systemd-logind[1445]: New session 3 of user core. Feb 13 19:18:05.859506 systemd[1]: Started session-3.scope - Session 3 of User core. Feb 13 19:18:05.908352 sshd[1579]: Connection closed by 10.0.0.1 port 51366 Feb 13 19:18:05.909047 sshd-session[1574]: pam_unix(sshd:session): session closed for user core Feb 13 19:18:05.919777 systemd[1]: sshd@2-10.0.0.114:22-10.0.0.1:51366.service: Deactivated successfully. Feb 13 19:18:05.922682 systemd[1]: session-3.scope: Deactivated successfully. Feb 13 19:18:05.923333 systemd-logind[1445]: Session 3 logged out. Waiting for processes to exit. Feb 13 19:18:05.935596 systemd[1]: Started sshd@3-10.0.0.114:22-10.0.0.1:51378.service - OpenSSH per-connection server daemon (10.0.0.1:51378). Feb 13 19:18:05.936709 systemd-logind[1445]: Removed session 3. 
Feb 13 19:18:05.971787 sshd[1584]: Accepted publickey for core from 10.0.0.1 port 51378 ssh2: RSA SHA256:QyQQN4NlJHXH6/vW7NxDLOKgT/2dxBjCkGLAHoHnd3w Feb 13 19:18:05.973018 sshd-session[1584]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 19:18:05.976997 systemd-logind[1445]: New session 4 of user core. Feb 13 19:18:05.989487 systemd[1]: Started session-4.scope - Session 4 of User core. Feb 13 19:18:06.040744 sshd[1587]: Connection closed by 10.0.0.1 port 51378 Feb 13 19:18:06.041298 sshd-session[1584]: pam_unix(sshd:session): session closed for user core Feb 13 19:18:06.057140 systemd[1]: sshd@3-10.0.0.114:22-10.0.0.1:51378.service: Deactivated successfully. Feb 13 19:18:06.059126 systemd[1]: session-4.scope: Deactivated successfully. Feb 13 19:18:06.059986 systemd-logind[1445]: Session 4 logged out. Waiting for processes to exit. Feb 13 19:18:06.070614 systemd[1]: Started sshd@4-10.0.0.114:22-10.0.0.1:51388.service - OpenSSH per-connection server daemon (10.0.0.1:51388). Feb 13 19:18:06.071756 systemd-logind[1445]: Removed session 4. Feb 13 19:18:06.107702 sshd[1592]: Accepted publickey for core from 10.0.0.1 port 51388 ssh2: RSA SHA256:QyQQN4NlJHXH6/vW7NxDLOKgT/2dxBjCkGLAHoHnd3w Feb 13 19:18:06.108863 sshd-session[1592]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 19:18:06.114341 systemd-logind[1445]: New session 5 of user core. Feb 13 19:18:06.125797 systemd[1]: Started session-5.scope - Session 5 of User core. 
Feb 13 19:18:06.197017 sudo[1596]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Feb 13 19:18:06.197342 sudo[1596]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Feb 13 19:18:06.216167 sudo[1596]: pam_unix(sudo:session): session closed for user root Feb 13 19:18:06.217654 sshd[1595]: Connection closed by 10.0.0.1 port 51388 Feb 13 19:18:06.218028 sshd-session[1592]: pam_unix(sshd:session): session closed for user core Feb 13 19:18:06.233398 systemd[1]: sshd@4-10.0.0.114:22-10.0.0.1:51388.service: Deactivated successfully. Feb 13 19:18:06.235453 systemd[1]: session-5.scope: Deactivated successfully. Feb 13 19:18:06.236304 systemd-logind[1445]: Session 5 logged out. Waiting for processes to exit. Feb 13 19:18:06.238643 systemd[1]: Started sshd@5-10.0.0.114:22-10.0.0.1:51400.service - OpenSSH per-connection server daemon (10.0.0.1:51400). Feb 13 19:18:06.239602 systemd-logind[1445]: Removed session 5. Feb 13 19:18:06.278919 sshd[1601]: Accepted publickey for core from 10.0.0.1 port 51400 ssh2: RSA SHA256:QyQQN4NlJHXH6/vW7NxDLOKgT/2dxBjCkGLAHoHnd3w Feb 13 19:18:06.280178 sshd-session[1601]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 19:18:06.284351 systemd-logind[1445]: New session 6 of user core. Feb 13 19:18:06.293434 systemd[1]: Started session-6.scope - Session 6 of User core. 
Feb 13 19:18:06.345812 sudo[1606]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Feb 13 19:18:06.346088 sudo[1606]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Feb 13 19:18:06.349085 sudo[1606]: pam_unix(sudo:session): session closed for user root Feb 13 19:18:06.354214 sudo[1605]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules Feb 13 19:18:06.354560 sudo[1605]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Feb 13 19:18:06.374708 systemd[1]: Starting audit-rules.service - Load Audit Rules... Feb 13 19:18:06.400277 augenrules[1628]: No rules Feb 13 19:18:06.401578 systemd[1]: audit-rules.service: Deactivated successfully. Feb 13 19:18:06.402388 systemd[1]: Finished audit-rules.service - Load Audit Rules. Feb 13 19:18:06.403341 sudo[1605]: pam_unix(sudo:session): session closed for user root Feb 13 19:18:06.405766 sshd[1604]: Connection closed by 10.0.0.1 port 51400 Feb 13 19:18:06.405633 sshd-session[1601]: pam_unix(sshd:session): session closed for user core Feb 13 19:18:06.422614 systemd[1]: Started sshd@6-10.0.0.114:22-10.0.0.1:51404.service - OpenSSH per-connection server daemon (10.0.0.1:51404). Feb 13 19:18:06.423068 systemd[1]: sshd@5-10.0.0.114:22-10.0.0.1:51400.service: Deactivated successfully. Feb 13 19:18:06.424715 systemd[1]: session-6.scope: Deactivated successfully. Feb 13 19:18:06.427802 systemd-logind[1445]: Session 6 logged out. Waiting for processes to exit. Feb 13 19:18:06.429009 systemd-logind[1445]: Removed session 6. Feb 13 19:18:06.468175 sshd[1634]: Accepted publickey for core from 10.0.0.1 port 51404 ssh2: RSA SHA256:QyQQN4NlJHXH6/vW7NxDLOKgT/2dxBjCkGLAHoHnd3w Feb 13 19:18:06.469418 sshd-session[1634]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 19:18:06.474495 systemd-logind[1445]: New session 7 of user core. 
Feb 13 19:18:06.480554 systemd[1]: Started session-7.scope - Session 7 of User core. Feb 13 19:18:06.534430 sudo[1640]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Feb 13 19:18:06.535017 sudo[1640]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Feb 13 19:18:06.559667 systemd[1]: Starting coreos-metadata.service - QEMU metadata agent... Feb 13 19:18:06.579593 systemd[1]: coreos-metadata.service: Deactivated successfully. Feb 13 19:18:06.579801 systemd[1]: Finished coreos-metadata.service - QEMU metadata agent. Feb 13 19:18:07.077874 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Feb 13 19:18:07.078028 systemd[1]: kubelet.service: Consumed 783ms CPU time, 248.3M memory peak. Feb 13 19:18:07.088549 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Feb 13 19:18:07.108701 systemd[1]: Reload requested from client PID 1681 ('systemctl') (unit session-7.scope)... Feb 13 19:18:07.108718 systemd[1]: Reloading... Feb 13 19:18:07.185339 zram_generator::config[1727]: No configuration found. Feb 13 19:18:07.388799 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Feb 13 19:18:07.464133 systemd[1]: Reloading finished in 355 ms. Feb 13 19:18:07.510596 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Feb 13 19:18:07.513521 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Feb 13 19:18:07.514388 systemd[1]: kubelet.service: Deactivated successfully. Feb 13 19:18:07.515415 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Feb 13 19:18:07.515472 systemd[1]: kubelet.service: Consumed 87ms CPU time, 90.2M memory peak. Feb 13 19:18:07.517135 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... 
Feb 13 19:18:07.614430 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Feb 13 19:18:07.618235 (kubelet)[1771]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Feb 13 19:18:07.655632 kubelet[1771]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Feb 13 19:18:07.655632 kubelet[1771]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Feb 13 19:18:07.655632 kubelet[1771]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Feb 13 19:18:07.656341 kubelet[1771]: I0213 19:18:07.655650 1771 server.go:215] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Feb 13 19:18:08.378066 kubelet[1771]: I0213 19:18:08.377840 1771 server.go:520] "Kubelet version" kubeletVersion="v1.32.0" Feb 13 19:18:08.378066 kubelet[1771]: I0213 19:18:08.377876 1771 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Feb 13 19:18:08.378350 kubelet[1771]: I0213 19:18:08.378170 1771 server.go:954] "Client rotation is on, will bootstrap in background" Feb 13 19:18:08.439855 kubelet[1771]: I0213 19:18:08.439813 1771 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Feb 13 19:18:08.452229 kubelet[1771]: E0213 19:18:08.452168 1771 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" 
Feb 13 19:18:08.452229 kubelet[1771]: I0213 19:18:08.452219 1771 server.go:1421] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." Feb 13 19:18:08.456641 kubelet[1771]: I0213 19:18:08.456536 1771 server.go:772] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" Feb 13 19:18:08.457385 kubelet[1771]: I0213 19:18:08.457338 1771 container_manager_linux.go:268] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Feb 13 19:18:08.458102 kubelet[1771]: I0213 19:18:08.457471 1771 container_manager_linux.go:273] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"10.0.0.114","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerR
econcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Feb 13 19:18:08.458102 kubelet[1771]: I0213 19:18:08.457765 1771 topology_manager.go:138] "Creating topology manager with none policy" Feb 13 19:18:08.458102 kubelet[1771]: I0213 19:18:08.457773 1771 container_manager_linux.go:304] "Creating device plugin manager" Feb 13 19:18:08.458102 kubelet[1771]: I0213 19:18:08.458008 1771 state_mem.go:36] "Initialized new in-memory state store" Feb 13 19:18:08.470242 kubelet[1771]: I0213 19:18:08.470197 1771 kubelet.go:446] "Attempting to sync node with API server" Feb 13 19:18:08.470242 kubelet[1771]: I0213 19:18:08.470234 1771 kubelet.go:341] "Adding static pod path" path="/etc/kubernetes/manifests" Feb 13 19:18:08.470242 kubelet[1771]: I0213 19:18:08.470257 1771 kubelet.go:352] "Adding apiserver pod source" Feb 13 19:18:08.470435 kubelet[1771]: I0213 19:18:08.470268 1771 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Feb 13 19:18:08.470942 kubelet[1771]: E0213 19:18:08.470911 1771 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 19:18:08.471707 kubelet[1771]: E0213 19:18:08.471670 1771 file.go:98] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 19:18:08.475613 kubelet[1771]: I0213 19:18:08.475566 1771 kuberuntime_manager.go:269] "Container runtime initialized" containerRuntime="containerd" version="v1.7.23" apiVersion="v1" Feb 13 19:18:08.476278 kubelet[1771]: I0213 19:18:08.476249 1771 kubelet.go:890] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Feb 13 19:18:08.476414 kubelet[1771]: W0213 19:18:08.476399 1771 probe.go:272] Flexvolume plugin directory at 
/opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Feb 13 19:18:08.477537 kubelet[1771]: I0213 19:18:08.477516 1771 watchdog_linux.go:99] "Systemd watchdog is not enabled" Feb 13 19:18:08.477603 kubelet[1771]: I0213 19:18:08.477559 1771 server.go:1287] "Started kubelet" Feb 13 19:18:08.478325 kubelet[1771]: I0213 19:18:08.477681 1771 server.go:169] "Starting to listen" address="0.0.0.0" port=10250 Feb 13 19:18:08.478735 kubelet[1771]: I0213 19:18:08.478703 1771 server.go:490] "Adding debug handlers to kubelet server" Feb 13 19:18:08.482225 kubelet[1771]: I0213 19:18:08.481881 1771 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Feb 13 19:18:08.482752 kubelet[1771]: I0213 19:18:08.482561 1771 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Feb 13 19:18:08.482752 kubelet[1771]: I0213 19:18:08.481706 1771 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Feb 13 19:18:08.483087 kubelet[1771]: I0213 19:18:08.482946 1771 server.go:243] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Feb 13 19:18:08.484743 kubelet[1771]: I0213 19:18:08.483599 1771 volume_manager.go:297] "Starting Kubelet Volume Manager" Feb 13 19:18:08.484743 kubelet[1771]: I0213 19:18:08.483721 1771 desired_state_of_world_populator.go:149] "Desired state populator starts to run" Feb 13 19:18:08.484743 kubelet[1771]: I0213 19:18:08.483774 1771 reconciler.go:26] "Reconciler: start to sync state" Feb 13 19:18:08.484743 kubelet[1771]: E0213 19:18:08.483812 1771 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"10.0.0.114\" not found" Feb 13 19:18:08.484964 kubelet[1771]: I0213 19:18:08.484894 1771 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix 
/var/run/crio/crio.sock: connect: no such file or directory Feb 13 19:18:08.486481 kubelet[1771]: E0213 19:18:08.486397 1771 kubelet.go:1561] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Feb 13 19:18:08.488265 kubelet[1771]: I0213 19:18:08.487095 1771 factory.go:221] Registration of the containerd container factory successfully Feb 13 19:18:08.488265 kubelet[1771]: I0213 19:18:08.487115 1771 factory.go:221] Registration of the systemd container factory successfully Feb 13 19:18:08.501358 kubelet[1771]: W0213 19:18:08.500226 1771 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:anonymous" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope Feb 13 19:18:08.501358 kubelet[1771]: E0213 19:18:08.500307 1771 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:anonymous\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" Feb 13 19:18:08.501358 kubelet[1771]: E0213 19:18:08.500357 1771 controller.go:145] "Failed to ensure lease exists, will retry" err="leases.coordination.k8s.io \"10.0.0.114\" is forbidden: User \"system:anonymous\" cannot get resource \"leases\" in API group \"coordination.k8s.io\" in the namespace \"kube-node-lease\"" interval="200ms" Feb 13 19:18:08.501358 kubelet[1771]: W0213 19:18:08.500398 1771 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:anonymous" cannot list resource "services" in API group "" at the cluster scope Feb 13 19:18:08.501358 kubelet[1771]: E0213 19:18:08.500420 1771 reflector.go:166] "Unhandled Error" 
err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User \"system:anonymous\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" Feb 13 19:18:08.501358 kubelet[1771]: W0213 19:18:08.500463 1771 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes "10.0.0.114" is forbidden: User "system:anonymous" cannot list resource "nodes" in API group "" at the cluster scope Feb 13 19:18:08.501358 kubelet[1771]: E0213 19:18:08.500479 1771 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes \"10.0.0.114\" is forbidden: User \"system:anonymous\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" Feb 13 19:18:08.501631 kubelet[1771]: E0213 19:18:08.500601 1771 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{10.0.0.114.1823daa634a0f4a4 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:10.0.0.114,UID:10.0.0.114,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:10.0.0.114,},FirstTimestamp:2025-02-13 19:18:08.477533348 +0000 UTC m=+0.855410344,LastTimestamp:2025-02-13 19:18:08.477533348 +0000 UTC m=+0.855410344,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:10.0.0.114,}" Feb 13 19:18:08.502151 kubelet[1771]: E0213 19:18:08.502042 1771 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the 
namespace \"default\"" event="&Event{ObjectMeta:{10.0.0.114.1823daa63527ff41 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:10.0.0.114,UID:10.0.0.114,APIVersion:,ResourceVersion:,FieldPath:,},Reason:InvalidDiskCapacity,Message:invalid capacity 0 on image filesystem,Source:EventSource{Component:kubelet,Host:10.0.0.114,},FirstTimestamp:2025-02-13 19:18:08.486383425 +0000 UTC m=+0.864260420,LastTimestamp:2025-02-13 19:18:08.486383425 +0000 UTC m=+0.864260420,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:10.0.0.114,}" Feb 13 19:18:08.503665 kubelet[1771]: I0213 19:18:08.502726 1771 cpu_manager.go:221] "Starting CPU manager" policy="none" Feb 13 19:18:08.503665 kubelet[1771]: I0213 19:18:08.502745 1771 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Feb 13 19:18:08.503665 kubelet[1771]: I0213 19:18:08.502764 1771 state_mem.go:36] "Initialized new in-memory state store" Feb 13 19:18:08.509536 kubelet[1771]: E0213 19:18:08.509442 1771 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{10.0.0.114.1823daa63609e832 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:10.0.0.114,UID:10.0.0.114,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientMemory,Message:Node 10.0.0.114 status is now: NodeHasSufficientMemory,Source:EventSource{Component:kubelet,Host:10.0.0.114,},FirstTimestamp:2025-02-13 19:18:08.501188658 +0000 UTC m=+0.879065613,LastTimestamp:2025-02-13 19:18:08.501188658 +0000 UTC m=+0.879065613,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:10.0.0.114,}" Feb 13 
19:18:08.512768 kubelet[1771]: E0213 19:18:08.512627 1771 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{10.0.0.114.1823daa6360a1158 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:10.0.0.114,UID:10.0.0.114,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasNoDiskPressure,Message:Node 10.0.0.114 status is now: NodeHasNoDiskPressure,Source:EventSource{Component:kubelet,Host:10.0.0.114,},FirstTimestamp:2025-02-13 19:18:08.501199192 +0000 UTC m=+0.879076188,LastTimestamp:2025-02-13 19:18:08.501199192 +0000 UTC m=+0.879076188,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:10.0.0.114,}" Feb 13 19:18:08.518237 kubelet[1771]: E0213 19:18:08.518144 1771 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{10.0.0.114.1823daa6360a2301 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:10.0.0.114,UID:10.0.0.114,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientPID,Message:Node 10.0.0.114 status is now: NodeHasSufficientPID,Source:EventSource{Component:kubelet,Host:10.0.0.114,},FirstTimestamp:2025-02-13 19:18:08.501203713 +0000 UTC m=+0.879080708,LastTimestamp:2025-02-13 19:18:08.501203713 +0000 UTC m=+0.879080708,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:10.0.0.114,}" Feb 13 19:18:08.573710 kubelet[1771]: I0213 19:18:08.573669 1771 policy_none.go:49] "None policy: Start" Feb 13 19:18:08.573710 kubelet[1771]: 
I0213 19:18:08.573713 1771 memory_manager.go:186] "Starting memorymanager" policy="None" Feb 13 19:18:08.573846 kubelet[1771]: I0213 19:18:08.573726 1771 state_mem.go:35] "Initializing new in-memory state store" Feb 13 19:18:08.580610 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Feb 13 19:18:08.583980 kubelet[1771]: E0213 19:18:08.583945 1771 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"10.0.0.114\" not found" Feb 13 19:18:08.590578 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Feb 13 19:18:08.594030 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. Feb 13 19:18:08.598971 kubelet[1771]: I0213 19:18:08.598923 1771 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Feb 13 19:18:08.600077 kubelet[1771]: I0213 19:18:08.599900 1771 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Feb 13 19:18:08.600077 kubelet[1771]: I0213 19:18:08.599927 1771 status_manager.go:227] "Starting to sync pod status with apiserver" Feb 13 19:18:08.600077 kubelet[1771]: I0213 19:18:08.599950 1771 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." 
Feb 13 19:18:08.600077 kubelet[1771]: I0213 19:18:08.599960 1771 kubelet.go:2388] "Starting kubelet main sync loop" Feb 13 19:18:08.600077 kubelet[1771]: E0213 19:18:08.600003 1771 kubelet.go:2412] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Feb 13 19:18:08.602770 kubelet[1771]: I0213 19:18:08.602730 1771 manager.go:519] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Feb 13 19:18:08.603723 kubelet[1771]: I0213 19:18:08.602947 1771 eviction_manager.go:189] "Eviction manager: starting control loop" Feb 13 19:18:08.603723 kubelet[1771]: I0213 19:18:08.602966 1771 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Feb 13 19:18:08.603723 kubelet[1771]: W0213 19:18:08.603561 1771 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: runtimeclasses.node.k8s.io is forbidden: User "system:anonymous" cannot list resource "runtimeclasses" in API group "node.k8s.io" at the cluster scope Feb 13 19:18:08.603723 kubelet[1771]: E0213 19:18:08.603592 1771 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: runtimeclasses.node.k8s.io is forbidden: User \"system:anonymous\" cannot list resource \"runtimeclasses\" in API group \"node.k8s.io\" at the cluster scope" logger="UnhandledError" Feb 13 19:18:08.603943 kubelet[1771]: I0213 19:18:08.603922 1771 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Feb 13 19:18:08.604138 kubelet[1771]: E0213 19:18:08.604120 1771 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." 
err="no imagefs label for configured runtime" Feb 13 19:18:08.604238 kubelet[1771]: E0213 19:18:08.604224 1771 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"10.0.0.114\" not found" Feb 13 19:18:08.704199 kubelet[1771]: I0213 19:18:08.704096 1771 kubelet_node_status.go:76] "Attempting to register node" node="10.0.0.114" Feb 13 19:18:08.710793 kubelet[1771]: E0213 19:18:08.710759 1771 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"10.0.0.114\" not found" node="10.0.0.114" Feb 13 19:18:08.711209 kubelet[1771]: I0213 19:18:08.711156 1771 kubelet_node_status.go:79] "Successfully registered node" node="10.0.0.114" Feb 13 19:18:08.711209 kubelet[1771]: E0213 19:18:08.711171 1771 kubelet_node_status.go:549] "Error updating node status, will retry" err="error getting node \"10.0.0.114\": node \"10.0.0.114\" not found" Feb 13 19:18:08.715951 kubelet[1771]: E0213 19:18:08.715919 1771 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"10.0.0.114\" not found" Feb 13 19:18:08.816661 kubelet[1771]: E0213 19:18:08.816622 1771 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"10.0.0.114\" not found" Feb 13 19:18:08.916805 kubelet[1771]: E0213 19:18:08.916756 1771 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"10.0.0.114\" not found" Feb 13 19:18:09.017344 kubelet[1771]: E0213 19:18:09.017207 1771 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"10.0.0.114\" not found" Feb 13 19:18:09.117505 kubelet[1771]: E0213 19:18:09.117434 1771 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"10.0.0.114\" not found" Feb 13 19:18:09.142877 sudo[1640]: pam_unix(sudo:session): session closed for user root Feb 13 19:18:09.144340 sshd[1639]: Connection closed by 10.0.0.1 port 51404 Feb 13 19:18:09.144756 
sshd-session[1634]: pam_unix(sshd:session): session closed for user core Feb 13 19:18:09.147985 systemd[1]: sshd@6-10.0.0.114:22-10.0.0.1:51404.service: Deactivated successfully. Feb 13 19:18:09.149808 systemd[1]: session-7.scope: Deactivated successfully. Feb 13 19:18:09.150585 systemd[1]: session-7.scope: Consumed 470ms CPU time, 74.8M memory peak. Feb 13 19:18:09.151688 systemd-logind[1445]: Session 7 logged out. Waiting for processes to exit. Feb 13 19:18:09.152499 systemd-logind[1445]: Removed session 7. Feb 13 19:18:09.218174 kubelet[1771]: E0213 19:18:09.218109 1771 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"10.0.0.114\" not found" Feb 13 19:18:09.319172 kubelet[1771]: E0213 19:18:09.319032 1771 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"10.0.0.114\" not found" Feb 13 19:18:09.380726 kubelet[1771]: I0213 19:18:09.380653 1771 transport.go:147] "Certificate rotation detected, shutting down client connections to start using new credentials" Feb 13 19:18:09.419581 kubelet[1771]: E0213 19:18:09.419516 1771 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"10.0.0.114\" not found" Feb 13 19:18:09.471144 kubelet[1771]: E0213 19:18:09.471083 1771 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 19:18:09.519784 kubelet[1771]: E0213 19:18:09.519720 1771 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"10.0.0.114\" not found" Feb 13 19:18:09.620508 kubelet[1771]: E0213 19:18:09.620388 1771 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"10.0.0.114\" not found" Feb 13 19:18:09.720769 kubelet[1771]: E0213 19:18:09.720687 1771 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"10.0.0.114\" not found" Feb 13 19:18:09.821888 kubelet[1771]: E0213 19:18:09.821821 1771 
kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"10.0.0.114\" not found" Feb 13 19:18:09.922168 kubelet[1771]: E0213 19:18:09.922016 1771 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"10.0.0.114\" not found" Feb 13 19:18:10.022238 kubelet[1771]: E0213 19:18:10.022186 1771 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"10.0.0.114\" not found" Feb 13 19:18:10.123640 kubelet[1771]: I0213 19:18:10.123611 1771 kuberuntime_manager.go:1702] "Updating runtime config through cri with podcidr" CIDR="192.168.1.0/24" Feb 13 19:18:10.124056 containerd[1465]: time="2025-02-13T19:18:10.123958935Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Feb 13 19:18:10.124404 kubelet[1771]: I0213 19:18:10.124137 1771 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.1.0/24" Feb 13 19:18:10.471997 kubelet[1771]: I0213 19:18:10.471926 1771 apiserver.go:52] "Watching apiserver" Feb 13 19:18:10.471997 kubelet[1771]: E0213 19:18:10.471998 1771 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 19:18:10.477192 kubelet[1771]: E0213 19:18:10.477140 1771 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-czrzd" podUID="2cd1dfd2-2e34-43a6-ae60-791be44ed577" Feb 13 19:18:10.484016 systemd[1]: Created slice kubepods-besteffort-podd50d3918_d346_4f85_a385_6a1e73bc9631.slice - libcontainer container kubepods-besteffort-podd50d3918_d346_4f85_a385_6a1e73bc9631.slice. 
Feb 13 19:18:10.484356 kubelet[1771]: I0213 19:18:10.484241 1771 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world"
Feb 13 19:18:10.495782 kubelet[1771]: I0213 19:18:10.495680 1771 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/d50d3918-d346-4f85-a385-6a1e73bc9631-policysync\") pod \"calico-node-5crj6\" (UID: \"d50d3918-d346-4f85-a385-6a1e73bc9631\") " pod="calico-system/calico-node-5crj6"
Feb 13 19:18:10.495782 kubelet[1771]: I0213 19:18:10.495722 1771 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/2cd1dfd2-2e34-43a6-ae60-791be44ed577-registration-dir\") pod \"csi-node-driver-czrzd\" (UID: \"2cd1dfd2-2e34-43a6-ae60-791be44ed577\") " pod="calico-system/csi-node-driver-czrzd"
Feb 13 19:18:10.495782 kubelet[1771]: I0213 19:18:10.495744 1771 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4g9w8\" (UniqueName: \"kubernetes.io/projected/2cd1dfd2-2e34-43a6-ae60-791be44ed577-kube-api-access-4g9w8\") pod \"csi-node-driver-czrzd\" (UID: \"2cd1dfd2-2e34-43a6-ae60-791be44ed577\") " pod="calico-system/csi-node-driver-czrzd"
Feb 13 19:18:10.495782 kubelet[1771]: I0213 19:18:10.495760 1771 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/d50d3918-d346-4f85-a385-6a1e73bc9631-lib-modules\") pod \"calico-node-5crj6\" (UID: \"d50d3918-d346-4f85-a385-6a1e73bc9631\") " pod="calico-system/calico-node-5crj6"
Feb 13 19:18:10.495782 kubelet[1771]: I0213 19:18:10.495776 1771 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/d50d3918-d346-4f85-a385-6a1e73bc9631-cni-net-dir\") pod \"calico-node-5crj6\" (UID: \"d50d3918-d346-4f85-a385-6a1e73bc9631\") " pod="calico-system/calico-node-5crj6"
Feb 13 19:18:10.496082 kubelet[1771]: I0213 19:18:10.495806 1771 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/d50d3918-d346-4f85-a385-6a1e73bc9631-flexvol-driver-host\") pod \"calico-node-5crj6\" (UID: \"d50d3918-d346-4f85-a385-6a1e73bc9631\") " pod="calico-system/calico-node-5crj6"
Feb 13 19:18:10.496082 kubelet[1771]: I0213 19:18:10.495833 1771 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kn2th\" (UniqueName: \"kubernetes.io/projected/d50d3918-d346-4f85-a385-6a1e73bc9631-kube-api-access-kn2th\") pod \"calico-node-5crj6\" (UID: \"d50d3918-d346-4f85-a385-6a1e73bc9631\") " pod="calico-system/calico-node-5crj6"
Feb 13 19:18:10.496082 kubelet[1771]: I0213 19:18:10.495850 1771 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/e89d1f70-7062-4a27-b8bf-c11b132a1d84-lib-modules\") pod \"kube-proxy-8hk4q\" (UID: \"e89d1f70-7062-4a27-b8bf-c11b132a1d84\") " pod="kube-system/kube-proxy-8hk4q"
Feb 13 19:18:10.496082 kubelet[1771]: I0213 19:18:10.495866 1771 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-28cp6\" (UniqueName: \"kubernetes.io/projected/e89d1f70-7062-4a27-b8bf-c11b132a1d84-kube-api-access-28cp6\") pod \"kube-proxy-8hk4q\" (UID: \"e89d1f70-7062-4a27-b8bf-c11b132a1d84\") " pod="kube-system/kube-proxy-8hk4q"
Feb 13 19:18:10.496082 kubelet[1771]: I0213 19:18:10.495892 1771 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/d50d3918-d346-4f85-a385-6a1e73bc9631-tigera-ca-bundle\") pod \"calico-node-5crj6\" (UID: \"d50d3918-d346-4f85-a385-6a1e73bc9631\") " pod="calico-system/calico-node-5crj6"
Feb 13 19:18:10.496195 kubelet[1771]: I0213 19:18:10.495916 1771 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/d50d3918-d346-4f85-a385-6a1e73bc9631-var-lib-calico\") pod \"calico-node-5crj6\" (UID: \"d50d3918-d346-4f85-a385-6a1e73bc9631\") " pod="calico-system/calico-node-5crj6"
Feb 13 19:18:10.496195 kubelet[1771]: I0213 19:18:10.495936 1771 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/d50d3918-d346-4f85-a385-6a1e73bc9631-cni-bin-dir\") pod \"calico-node-5crj6\" (UID: \"d50d3918-d346-4f85-a385-6a1e73bc9631\") " pod="calico-system/calico-node-5crj6"
Feb 13 19:18:10.496195 kubelet[1771]: I0213 19:18:10.495951 1771 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/2cd1dfd2-2e34-43a6-ae60-791be44ed577-kubelet-dir\") pod \"csi-node-driver-czrzd\" (UID: \"2cd1dfd2-2e34-43a6-ae60-791be44ed577\") " pod="calico-system/csi-node-driver-czrzd"
Feb 13 19:18:10.496195 kubelet[1771]: I0213 19:18:10.496016 1771 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/e89d1f70-7062-4a27-b8bf-c11b132a1d84-kube-proxy\") pod \"kube-proxy-8hk4q\" (UID: \"e89d1f70-7062-4a27-b8bf-c11b132a1d84\") " pod="kube-system/kube-proxy-8hk4q"
Feb 13 19:18:10.496195 kubelet[1771]: I0213 19:18:10.496048 1771 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/e89d1f70-7062-4a27-b8bf-c11b132a1d84-xtables-lock\") pod \"kube-proxy-8hk4q\" (UID: \"e89d1f70-7062-4a27-b8bf-c11b132a1d84\") " pod="kube-system/kube-proxy-8hk4q"
Feb 13 19:18:10.496415 kubelet[1771]: I0213 19:18:10.496064 1771 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/d50d3918-d346-4f85-a385-6a1e73bc9631-xtables-lock\") pod \"calico-node-5crj6\" (UID: \"d50d3918-d346-4f85-a385-6a1e73bc9631\") " pod="calico-system/calico-node-5crj6"
Feb 13 19:18:10.496415 kubelet[1771]: I0213 19:18:10.496084 1771 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/d50d3918-d346-4f85-a385-6a1e73bc9631-node-certs\") pod \"calico-node-5crj6\" (UID: \"d50d3918-d346-4f85-a385-6a1e73bc9631\") " pod="calico-system/calico-node-5crj6"
Feb 13 19:18:10.496415 kubelet[1771]: I0213 19:18:10.496101 1771 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/d50d3918-d346-4f85-a385-6a1e73bc9631-var-run-calico\") pod \"calico-node-5crj6\" (UID: \"d50d3918-d346-4f85-a385-6a1e73bc9631\") " pod="calico-system/calico-node-5crj6"
Feb 13 19:18:10.496415 kubelet[1771]: I0213 19:18:10.496127 1771 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/d50d3918-d346-4f85-a385-6a1e73bc9631-cni-log-dir\") pod \"calico-node-5crj6\" (UID: \"d50d3918-d346-4f85-a385-6a1e73bc9631\") " pod="calico-system/calico-node-5crj6"
Feb 13 19:18:10.496415 kubelet[1771]: I0213 19:18:10.496149 1771 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/2cd1dfd2-2e34-43a6-ae60-791be44ed577-varrun\") pod \"csi-node-driver-czrzd\" (UID: \"2cd1dfd2-2e34-43a6-ae60-791be44ed577\") " pod="calico-system/csi-node-driver-czrzd"
Feb 13 19:18:10.496571 kubelet[1771]: I0213 19:18:10.496169 1771 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/2cd1dfd2-2e34-43a6-ae60-791be44ed577-socket-dir\") pod \"csi-node-driver-czrzd\" (UID: \"2cd1dfd2-2e34-43a6-ae60-791be44ed577\") " pod="calico-system/csi-node-driver-czrzd"
Feb 13 19:18:10.505791 systemd[1]: Created slice kubepods-besteffort-pode89d1f70_7062_4a27_b8bf_c11b132a1d84.slice - libcontainer container kubepods-besteffort-pode89d1f70_7062_4a27_b8bf_c11b132a1d84.slice.
Feb 13 19:18:10.597508 kubelet[1771]: E0213 19:18:10.597477 1771 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Feb 13 19:18:10.597881 kubelet[1771]: W0213 19:18:10.597749 1771 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Feb 13 19:18:10.597881 kubelet[1771]: E0213 19:18:10.597788 1771 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Feb 13 19:18:10.598130 kubelet[1771]: E0213 19:18:10.597983 1771 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Feb 13 19:18:10.598130 kubelet[1771]: W0213 19:18:10.597994 1771 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Feb 13 19:18:10.598130 kubelet[1771]: E0213 19:18:10.598014 1771 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input"
Feb 13 19:18:10.609762 kubelet[1771]: E0213 19:18:10.609429 1771 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 13 19:18:10.610001 kubelet[1771]: E0213 19:18:10.609933 1771 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 19:18:10.610001 kubelet[1771]: W0213 19:18:10.609946 1771 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 19:18:10.610001 kubelet[1771]: E0213 19:18:10.609984 1771 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 19:18:10.610528 kubelet[1771]: E0213 19:18:10.610511 1771 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 19:18:10.610673 kubelet[1771]: W0213 19:18:10.610611 1771 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 19:18:10.610673 kubelet[1771]: E0213 19:18:10.610639 1771 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 13 19:18:10.614419 kubelet[1771]: E0213 19:18:10.614302 1771 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 19:18:10.614419 kubelet[1771]: W0213 19:18:10.614323 1771 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 19:18:10.614419 kubelet[1771]: E0213 19:18:10.614346 1771 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 19:18:10.614625 kubelet[1771]: E0213 19:18:10.614595 1771 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 19:18:10.614625 kubelet[1771]: W0213 19:18:10.614612 1771 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 19:18:10.614712 kubelet[1771]: E0213 19:18:10.614626 1771 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 13 19:18:10.623761 kubelet[1771]: E0213 19:18:10.623721 1771 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 19:18:10.623761 kubelet[1771]: W0213 19:18:10.623746 1771 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 19:18:10.623761 kubelet[1771]: E0213 19:18:10.623766 1771 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 19:18:10.807455 containerd[1465]: time="2025-02-13T19:18:10.806657457Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-5crj6,Uid:d50d3918-d346-4f85-a385-6a1e73bc9631,Namespace:calico-system,Attempt:0,}" Feb 13 19:18:10.811435 containerd[1465]: time="2025-02-13T19:18:10.811293334Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-8hk4q,Uid:e89d1f70-7062-4a27-b8bf-c11b132a1d84,Namespace:kube-system,Attempt:0,}" Feb 13 19:18:11.292184 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4181422571.mount: Deactivated successfully. 
Feb 13 19:18:11.298474 containerd[1465]: time="2025-02-13T19:18:11.298423796Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Feb 13 19:18:11.299688 containerd[1465]: time="2025-02-13T19:18:11.299656319Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=269175" Feb 13 19:18:11.300270 containerd[1465]: time="2025-02-13T19:18:11.300214029Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Feb 13 19:18:11.301517 containerd[1465]: time="2025-02-13T19:18:11.301481884Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Feb 13 19:18:11.302155 containerd[1465]: time="2025-02-13T19:18:11.302100762Z" level=info msg="ImageCreate event name:\"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Feb 13 19:18:11.305705 containerd[1465]: time="2025-02-13T19:18:11.305636439Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Feb 13 19:18:11.306704 containerd[1465]: time="2025-02-13T19:18:11.306497652Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 499.745915ms" Feb 13 19:18:11.307398 containerd[1465]: 
time="2025-02-13T19:18:11.307368563Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 495.984258ms" Feb 13 19:18:11.430432 containerd[1465]: time="2025-02-13T19:18:11.430323020Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 13 19:18:11.430432 containerd[1465]: time="2025-02-13T19:18:11.430392517Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 13 19:18:11.430432 containerd[1465]: time="2025-02-13T19:18:11.430404268Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 19:18:11.430432 containerd[1465]: time="2025-02-13T19:18:11.430482618Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 19:18:11.431458 containerd[1465]: time="2025-02-13T19:18:11.431382584Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 13 19:18:11.431578 containerd[1465]: time="2025-02-13T19:18:11.431441820Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 13 19:18:11.433358 containerd[1465]: time="2025-02-13T19:18:11.433305695Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 19:18:11.433659 containerd[1465]: time="2025-02-13T19:18:11.433579661Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 19:18:11.472639 kubelet[1771]: E0213 19:18:11.472594 1771 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 19:18:11.512532 systemd[1]: Started cri-containerd-9278e3cb36fdbc7ff33d28acccee8576ce74c6195747527f35702fa6dd8df2e0.scope - libcontainer container 9278e3cb36fdbc7ff33d28acccee8576ce74c6195747527f35702fa6dd8df2e0. Feb 13 19:18:11.513804 systemd[1]: Started cri-containerd-e772e4c36ab8e998c1e8ae5c5a1baa6470131b2e80fde55a26bb286ff8cf0576.scope - libcontainer container e772e4c36ab8e998c1e8ae5c5a1baa6470131b2e80fde55a26bb286ff8cf0576. Feb 13 19:18:11.535916 containerd[1465]: time="2025-02-13T19:18:11.535804443Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-8hk4q,Uid:e89d1f70-7062-4a27-b8bf-c11b132a1d84,Namespace:kube-system,Attempt:0,} returns sandbox id \"9278e3cb36fdbc7ff33d28acccee8576ce74c6195747527f35702fa6dd8df2e0\"" Feb 13 19:18:11.538055 containerd[1465]: time="2025-02-13T19:18:11.538024297Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-5crj6,Uid:d50d3918-d346-4f85-a385-6a1e73bc9631,Namespace:calico-system,Attempt:0,} returns sandbox id \"e772e4c36ab8e998c1e8ae5c5a1baa6470131b2e80fde55a26bb286ff8cf0576\"" Feb 13 19:18:11.541939 containerd[1465]: time="2025-02-13T19:18:11.541714784Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.2\"" Feb 13 19:18:11.600607 kubelet[1771]: E0213 19:18:11.600501 1771 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-czrzd" podUID="2cd1dfd2-2e34-43a6-ae60-791be44ed577" Feb 13 19:18:12.473139 kubelet[1771]: E0213 19:18:12.473094 1771 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" 
path="/etc/kubernetes/manifests" Feb 13 19:18:12.487791 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2414675551.mount: Deactivated successfully. Feb 13 19:18:12.708755 containerd[1465]: time="2025-02-13T19:18:12.708703209Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.32.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:18:12.709722 containerd[1465]: time="2025-02-13T19:18:12.709680268Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.32.2: active requests=0, bytes read=27363384" Feb 13 19:18:12.710614 containerd[1465]: time="2025-02-13T19:18:12.710549560Z" level=info msg="ImageCreate event name:\"sha256:e5aac5df76d9b8dc899ab8c4db25a7648e7fb25cafe7a155066247883c78f062\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:18:12.712527 containerd[1465]: time="2025-02-13T19:18:12.712478627Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:83c025f0faa6799fab6645102a98138e39a9a7db2be3bc792c79d72659b1805d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:18:12.713374 containerd[1465]: time="2025-02-13T19:18:12.713344099Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.32.2\" with image id \"sha256:e5aac5df76d9b8dc899ab8c4db25a7648e7fb25cafe7a155066247883c78f062\", repo tag \"registry.k8s.io/kube-proxy:v1.32.2\", repo digest \"registry.k8s.io/kube-proxy@sha256:83c025f0faa6799fab6645102a98138e39a9a7db2be3bc792c79d72659b1805d\", size \"27362401\" in 1.171583327s" Feb 13 19:18:12.713556 containerd[1465]: time="2025-02-13T19:18:12.713439682Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.2\" returns image reference \"sha256:e5aac5df76d9b8dc899ab8c4db25a7648e7fb25cafe7a155066247883c78f062\"" Feb 13 19:18:12.715254 containerd[1465]: time="2025-02-13T19:18:12.715211601Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\"" Feb 13 19:18:12.716595 containerd[1465]: 
time="2025-02-13T19:18:12.716373312Z" level=info msg="CreateContainer within sandbox \"9278e3cb36fdbc7ff33d28acccee8576ce74c6195747527f35702fa6dd8df2e0\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Feb 13 19:18:12.730008 containerd[1465]: time="2025-02-13T19:18:12.729892097Z" level=info msg="CreateContainer within sandbox \"9278e3cb36fdbc7ff33d28acccee8576ce74c6195747527f35702fa6dd8df2e0\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"3fcbf07558d55fccc88d989fab5524ba49ffedf83a66b7af346cf1108fa554f6\"" Feb 13 19:18:12.730921 containerd[1465]: time="2025-02-13T19:18:12.730881460Z" level=info msg="StartContainer for \"3fcbf07558d55fccc88d989fab5524ba49ffedf83a66b7af346cf1108fa554f6\"" Feb 13 19:18:12.759480 systemd[1]: Started cri-containerd-3fcbf07558d55fccc88d989fab5524ba49ffedf83a66b7af346cf1108fa554f6.scope - libcontainer container 3fcbf07558d55fccc88d989fab5524ba49ffedf83a66b7af346cf1108fa554f6. Feb 13 19:18:12.783578 containerd[1465]: time="2025-02-13T19:18:12.782317037Z" level=info msg="StartContainer for \"3fcbf07558d55fccc88d989fab5524ba49ffedf83a66b7af346cf1108fa554f6\" returns successfully" Feb 13 19:18:13.474164 kubelet[1771]: E0213 19:18:13.474107 1771 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 19:18:13.600333 kubelet[1771]: E0213 19:18:13.600261 1771 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-czrzd" podUID="2cd1dfd2-2e34-43a6-ae60-791be44ed577" Feb 13 19:18:13.701958 kubelet[1771]: E0213 19:18:13.701906 1771 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 19:18:13.701958 kubelet[1771]: W0213 19:18:13.701941 1771 driver-call.go:149] 
FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 19:18:13.701958 kubelet[1771]: E0213 19:18:13.701961 1771 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 19:18:13.702198 kubelet[1771]: E0213 19:18:13.702167 1771 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 19:18:13.702233 kubelet[1771]: W0213 19:18:13.702179 1771 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 19:18:13.702233 kubelet[1771]: E0213 19:18:13.702212 1771 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 19:18:13.702390 kubelet[1771]: E0213 19:18:13.702362 1771 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 19:18:13.702390 kubelet[1771]: W0213 19:18:13.702377 1771 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 19:18:13.702390 kubelet[1771]: E0213 19:18:13.702385 1771 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 13 19:18:13.702550 kubelet[1771]: E0213 19:18:13.702521 1771 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 19:18:13.702550 kubelet[1771]: W0213 19:18:13.702537 1771 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 19:18:13.702550 kubelet[1771]: E0213 19:18:13.702545 1771 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 19:18:13.702695 kubelet[1771]: E0213 19:18:13.702680 1771 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 19:18:13.702695 kubelet[1771]: W0213 19:18:13.702693 1771 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 19:18:13.702740 kubelet[1771]: E0213 19:18:13.702701 1771 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 13 19:18:13.702834 kubelet[1771]: E0213 19:18:13.702824 1771 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 19:18:13.702857 kubelet[1771]: W0213 19:18:13.702838 1771 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 19:18:13.702857 kubelet[1771]: E0213 19:18:13.702845 1771 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 19:18:13.702991 kubelet[1771]: E0213 19:18:13.702980 1771 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 19:18:13.703016 kubelet[1771]: W0213 19:18:13.702996 1771 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 19:18:13.703016 kubelet[1771]: E0213 19:18:13.703004 1771 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 13 19:18:13.703148 kubelet[1771]: E0213 19:18:13.703136 1771 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 19:18:13.703174 kubelet[1771]: W0213 19:18:13.703149 1771 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 19:18:13.703174 kubelet[1771]: E0213 19:18:13.703157 1771 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 19:18:13.703307 kubelet[1771]: E0213 19:18:13.703297 1771 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 19:18:13.703337 kubelet[1771]: W0213 19:18:13.703307 1771 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 19:18:13.703337 kubelet[1771]: E0213 19:18:13.703315 1771 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 13 19:18:13.703447 kubelet[1771]: E0213 19:18:13.703435 1771 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 19:18:13.703473 kubelet[1771]: W0213 19:18:13.703451 1771 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 19:18:13.703473 kubelet[1771]: E0213 19:18:13.703458 1771 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 19:18:13.703585 kubelet[1771]: E0213 19:18:13.703575 1771 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 19:18:13.703610 kubelet[1771]: W0213 19:18:13.703588 1771 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 19:18:13.703610 kubelet[1771]: E0213 19:18:13.703596 1771 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 13 19:18:13.703903 kubelet[1771]: E0213 19:18:13.703869 1771 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 19:18:13.703903 kubelet[1771]: W0213 19:18:13.703886 1771 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 19:18:13.703903 kubelet[1771]: E0213 19:18:13.703899 1771 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 19:18:13.704128 kubelet[1771]: E0213 19:18:13.704112 1771 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 19:18:13.704128 kubelet[1771]: W0213 19:18:13.704124 1771 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 19:18:13.704128 kubelet[1771]: E0213 19:18:13.704133 1771 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 13 19:18:13.704324 kubelet[1771]: E0213 19:18:13.704307 1771 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 19:18:13.704324 kubelet[1771]: W0213 19:18:13.704317 1771 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 19:18:13.704375 kubelet[1771]: E0213 19:18:13.704330 1771 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 19:18:13.704472 kubelet[1771]: E0213 19:18:13.704456 1771 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 19:18:13.704496 kubelet[1771]: W0213 19:18:13.704471 1771 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 19:18:13.704496 kubelet[1771]: E0213 19:18:13.704479 1771 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 13 19:18:13.704659 kubelet[1771]: E0213 19:18:13.704645 1771 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 19:18:13.704659 kubelet[1771]: W0213 19:18:13.704656 1771 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 19:18:13.704714 kubelet[1771]: E0213 19:18:13.704663 1771 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 19:18:13.704809 kubelet[1771]: E0213 19:18:13.704797 1771 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 19:18:13.704834 kubelet[1771]: W0213 19:18:13.704810 1771 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 19:18:13.704834 kubelet[1771]: E0213 19:18:13.704818 1771 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 13 19:18:13.704969 kubelet[1771]: E0213 19:18:13.704955 1771 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 19:18:13.704995 kubelet[1771]: W0213 19:18:13.704969 1771 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 19:18:13.704995 kubelet[1771]: E0213 19:18:13.704976 1771 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 19:18:13.705129 kubelet[1771]: E0213 19:18:13.705116 1771 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 19:18:13.705129 kubelet[1771]: W0213 19:18:13.705128 1771 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 19:18:13.705177 kubelet[1771]: E0213 19:18:13.705136 1771 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 13 19:18:13.705268 kubelet[1771]: E0213 19:18:13.705257 1771 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 19:18:13.705305 kubelet[1771]: W0213 19:18:13.705282 1771 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 19:18:13.705305 kubelet[1771]: E0213 19:18:13.705293 1771 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 19:18:13.719693 kubelet[1771]: E0213 19:18:13.719659 1771 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 19:18:13.719693 kubelet[1771]: W0213 19:18:13.719680 1771 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 19:18:13.719693 kubelet[1771]: E0213 19:18:13.719694 1771 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 13 19:18:13.719935 kubelet[1771]: E0213 19:18:13.719908 1771 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 19:18:13.719935 kubelet[1771]: W0213 19:18:13.719921 1771 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 19:18:13.719992 kubelet[1771]: E0213 19:18:13.719937 1771 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 19:18:13.720208 kubelet[1771]: E0213 19:18:13.720184 1771 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 19:18:13.720208 kubelet[1771]: W0213 19:18:13.720197 1771 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 19:18:13.720268 kubelet[1771]: E0213 19:18:13.720211 1771 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 13 19:18:13.720397 kubelet[1771]: E0213 19:18:13.720385 1771 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 19:18:13.720397 kubelet[1771]: W0213 19:18:13.720395 1771 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 19:18:13.720458 kubelet[1771]: E0213 19:18:13.720408 1771 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 19:18:13.720638 kubelet[1771]: E0213 19:18:13.720610 1771 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 19:18:13.720638 kubelet[1771]: W0213 19:18:13.720622 1771 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 19:18:13.720638 kubelet[1771]: E0213 19:18:13.720635 1771 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 13 19:18:13.720828 kubelet[1771]: E0213 19:18:13.720817 1771 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 19:18:13.720877 kubelet[1771]: W0213 19:18:13.720828 1771 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 19:18:13.720877 kubelet[1771]: E0213 19:18:13.720845 1771 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 19:18:13.721118 kubelet[1771]: E0213 19:18:13.721086 1771 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 19:18:13.721118 kubelet[1771]: W0213 19:18:13.721109 1771 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 19:18:13.721185 kubelet[1771]: E0213 19:18:13.721129 1771 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 13 19:18:13.721326 kubelet[1771]: E0213 19:18:13.721314 1771 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 19:18:13.721369 kubelet[1771]: W0213 19:18:13.721327 1771 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 19:18:13.721369 kubelet[1771]: E0213 19:18:13.721344 1771 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 19:18:13.721553 kubelet[1771]: E0213 19:18:13.721543 1771 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 19:18:13.721574 kubelet[1771]: W0213 19:18:13.721553 1771 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 19:18:13.721574 kubelet[1771]: E0213 19:18:13.721566 1771 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 13 19:18:13.721744 kubelet[1771]: E0213 19:18:13.721733 1771 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 19:18:13.721778 kubelet[1771]: W0213 19:18:13.721744 1771 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 19:18:13.721778 kubelet[1771]: E0213 19:18:13.721757 1771 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 19:18:13.721999 kubelet[1771]: E0213 19:18:13.721981 1771 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 19:18:13.722025 kubelet[1771]: W0213 19:18:13.722001 1771 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 19:18:13.722055 kubelet[1771]: E0213 19:18:13.722021 1771 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 13 19:18:13.722227 kubelet[1771]: E0213 19:18:13.722217 1771 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 19:18:13.722254 kubelet[1771]: W0213 19:18:13.722227 1771 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 19:18:13.722254 kubelet[1771]: E0213 19:18:13.722235 1771 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 19:18:13.766208 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2179585305.mount: Deactivated successfully. Feb 13 19:18:13.830860 containerd[1465]: time="2025-02-13T19:18:13.830801776Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:18:13.831596 containerd[1465]: time="2025-02-13T19:18:13.831541302Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1: active requests=0, bytes read=6487603" Feb 13 19:18:13.832187 containerd[1465]: time="2025-02-13T19:18:13.832159188Z" level=info msg="ImageCreate event name:\"sha256:ece9bca32e64e726de8bbfc9e175a3ca91e0881cd40352bfcd1d107411f4f348\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:18:13.834696 containerd[1465]: time="2025-02-13T19:18:13.834650139Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:a63f8b4ff531912d12d143664eb263fdbc6cd7b3ff4aa777dfb6e318a090462c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:18:13.835307 containerd[1465]: time="2025-02-13T19:18:13.835142004Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\" with 
image id \"sha256:ece9bca32e64e726de8bbfc9e175a3ca91e0881cd40352bfcd1d107411f4f348\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:a63f8b4ff531912d12d143664eb263fdbc6cd7b3ff4aa777dfb6e318a090462c\", size \"6487425\" in 1.119897636s" Feb 13 19:18:13.835307 containerd[1465]: time="2025-02-13T19:18:13.835170375Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\" returns image reference \"sha256:ece9bca32e64e726de8bbfc9e175a3ca91e0881cd40352bfcd1d107411f4f348\"" Feb 13 19:18:13.836954 containerd[1465]: time="2025-02-13T19:18:13.836928190Z" level=info msg="CreateContainer within sandbox \"e772e4c36ab8e998c1e8ae5c5a1baa6470131b2e80fde55a26bb286ff8cf0576\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}" Feb 13 19:18:13.848621 containerd[1465]: time="2025-02-13T19:18:13.848469541Z" level=info msg="CreateContainer within sandbox \"e772e4c36ab8e998c1e8ae5c5a1baa6470131b2e80fde55a26bb286ff8cf0576\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"a1b8a7af6f95fdab6d79ec1e4c339222368dc3fbfda373cb03adc4e6ae5c8293\"" Feb 13 19:18:13.849078 containerd[1465]: time="2025-02-13T19:18:13.849004847Z" level=info msg="StartContainer for \"a1b8a7af6f95fdab6d79ec1e4c339222368dc3fbfda373cb03adc4e6ae5c8293\"" Feb 13 19:18:13.883502 systemd[1]: Started cri-containerd-a1b8a7af6f95fdab6d79ec1e4c339222368dc3fbfda373cb03adc4e6ae5c8293.scope - libcontainer container a1b8a7af6f95fdab6d79ec1e4c339222368dc3fbfda373cb03adc4e6ae5c8293. Feb 13 19:18:13.910134 containerd[1465]: time="2025-02-13T19:18:13.908883484Z" level=info msg="StartContainer for \"a1b8a7af6f95fdab6d79ec1e4c339222368dc3fbfda373cb03adc4e6ae5c8293\" returns successfully" Feb 13 19:18:13.950074 systemd[1]: cri-containerd-a1b8a7af6f95fdab6d79ec1e4c339222368dc3fbfda373cb03adc4e6ae5c8293.scope: Deactivated successfully. 
Feb 13 19:18:14.121783 containerd[1465]: time="2025-02-13T19:18:14.121586297Z" level=info msg="shim disconnected" id=a1b8a7af6f95fdab6d79ec1e4c339222368dc3fbfda373cb03adc4e6ae5c8293 namespace=k8s.io Feb 13 19:18:14.121783 containerd[1465]: time="2025-02-13T19:18:14.121643407Z" level=warning msg="cleaning up after shim disconnected" id=a1b8a7af6f95fdab6d79ec1e4c339222368dc3fbfda373cb03adc4e6ae5c8293 namespace=k8s.io Feb 13 19:18:14.121783 containerd[1465]: time="2025-02-13T19:18:14.121651681Z" level=info msg="cleaning up dead shim" namespace=k8s.io Feb 13 19:18:14.474926 kubelet[1771]: E0213 19:18:14.474810 1771 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 19:18:14.618486 containerd[1465]: time="2025-02-13T19:18:14.618434920Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.29.1\"" Feb 13 19:18:14.637971 kubelet[1771]: I0213 19:18:14.637668 1771 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-8hk4q" podStartSLOduration=5.46442359 podStartE2EDuration="6.637649472s" podCreationTimestamp="2025-02-13 19:18:08 +0000 UTC" firstStartedPulling="2025-02-13 19:18:11.541219288 +0000 UTC m=+3.919096283" lastFinishedPulling="2025-02-13 19:18:12.71444517 +0000 UTC m=+5.092322165" observedRunningTime="2025-02-13 19:18:13.625003298 +0000 UTC m=+6.002880293" watchObservedRunningTime="2025-02-13 19:18:14.637649472 +0000 UTC m=+7.015526467" Feb 13 19:18:14.748034 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-a1b8a7af6f95fdab6d79ec1e4c339222368dc3fbfda373cb03adc4e6ae5c8293-rootfs.mount: Deactivated successfully. 
Feb 13 19:18:15.475244 kubelet[1771]: E0213 19:18:15.475194 1771 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 19:18:15.600868 kubelet[1771]: E0213 19:18:15.600782 1771 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-czrzd" podUID="2cd1dfd2-2e34-43a6-ae60-791be44ed577" Feb 13 19:18:16.476207 kubelet[1771]: E0213 19:18:16.476171 1771 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 19:18:16.607779 containerd[1465]: time="2025-02-13T19:18:16.607725501Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:18:16.608417 containerd[1465]: time="2025-02-13T19:18:16.608380485Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.29.1: active requests=0, bytes read=89703123" Feb 13 19:18:16.609215 containerd[1465]: time="2025-02-13T19:18:16.609172010Z" level=info msg="ImageCreate event name:\"sha256:e5ca62af4ff61b88f55fe4e0d7723151103d3f6a470fd4ebb311a2de27a9597f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:18:16.611162 containerd[1465]: time="2025-02-13T19:18:16.611103458Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:21e759d51c90dfb34fc1397dc180dd3a3fb564c2b0580d2f61ffe108f2a3c94b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:18:16.611994 containerd[1465]: time="2025-02-13T19:18:16.611961630Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.29.1\" with image id \"sha256:e5ca62af4ff61b88f55fe4e0d7723151103d3f6a470fd4ebb311a2de27a9597f\", repo tag \"ghcr.io/flatcar/calico/cni:v3.29.1\", repo digest 
\"ghcr.io/flatcar/calico/cni@sha256:21e759d51c90dfb34fc1397dc180dd3a3fb564c2b0580d2f61ffe108f2a3c94b\", size \"91072777\" in 1.993475551s" Feb 13 19:18:16.612052 containerd[1465]: time="2025-02-13T19:18:16.611996498Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.29.1\" returns image reference \"sha256:e5ca62af4ff61b88f55fe4e0d7723151103d3f6a470fd4ebb311a2de27a9597f\"" Feb 13 19:18:16.613986 containerd[1465]: time="2025-02-13T19:18:16.613949291Z" level=info msg="CreateContainer within sandbox \"e772e4c36ab8e998c1e8ae5c5a1baa6470131b2e80fde55a26bb286ff8cf0576\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" Feb 13 19:18:16.625306 containerd[1465]: time="2025-02-13T19:18:16.625241262Z" level=info msg="CreateContainer within sandbox \"e772e4c36ab8e998c1e8ae5c5a1baa6470131b2e80fde55a26bb286ff8cf0576\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"7204062b463f2614c80335acbfbff72ec3e85e7c0ea7fc40ee70958c8bfe624c\"" Feb 13 19:18:16.625794 containerd[1465]: time="2025-02-13T19:18:16.625748389Z" level=info msg="StartContainer for \"7204062b463f2614c80335acbfbff72ec3e85e7c0ea7fc40ee70958c8bfe624c\"" Feb 13 19:18:16.654478 systemd[1]: Started cri-containerd-7204062b463f2614c80335acbfbff72ec3e85e7c0ea7fc40ee70958c8bfe624c.scope - libcontainer container 7204062b463f2614c80335acbfbff72ec3e85e7c0ea7fc40ee70958c8bfe624c. 
Feb 13 19:18:16.681143 containerd[1465]: time="2025-02-13T19:18:16.681098333Z" level=info msg="StartContainer for \"7204062b463f2614c80335acbfbff72ec3e85e7c0ea7fc40ee70958c8bfe624c\" returns successfully" Feb 13 19:18:17.113721 containerd[1465]: time="2025-02-13T19:18:17.113609268Z" level=error msg="failed to reload cni configuration after receiving fs change event(WRITE \"/etc/cni/net.d/calico-kubeconfig\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Feb 13 19:18:17.115322 systemd[1]: cri-containerd-7204062b463f2614c80335acbfbff72ec3e85e7c0ea7fc40ee70958c8bfe624c.scope: Deactivated successfully. Feb 13 19:18:17.115599 systemd[1]: cri-containerd-7204062b463f2614c80335acbfbff72ec3e85e7c0ea7fc40ee70958c8bfe624c.scope: Consumed 442ms CPU time, 169.2M memory peak, 147.4M written to disk. Feb 13 19:18:17.131853 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-7204062b463f2614c80335acbfbff72ec3e85e7c0ea7fc40ee70958c8bfe624c-rootfs.mount: Deactivated successfully. 
Feb 13 19:18:17.162373 kubelet[1771]: I0213 19:18:17.161313 1771 kubelet_node_status.go:502] "Fast updating node status as it just became ready" Feb 13 19:18:17.367167 containerd[1465]: time="2025-02-13T19:18:17.367006788Z" level=info msg="shim disconnected" id=7204062b463f2614c80335acbfbff72ec3e85e7c0ea7fc40ee70958c8bfe624c namespace=k8s.io Feb 13 19:18:17.367167 containerd[1465]: time="2025-02-13T19:18:17.367081991Z" level=warning msg="cleaning up after shim disconnected" id=7204062b463f2614c80335acbfbff72ec3e85e7c0ea7fc40ee70958c8bfe624c namespace=k8s.io Feb 13 19:18:17.367167 containerd[1465]: time="2025-02-13T19:18:17.367094224Z" level=info msg="cleaning up dead shim" namespace=k8s.io Feb 13 19:18:17.476649 kubelet[1771]: E0213 19:18:17.476602 1771 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 19:18:17.605138 systemd[1]: Created slice kubepods-besteffort-pod2cd1dfd2_2e34_43a6_ae60_791be44ed577.slice - libcontainer container kubepods-besteffort-pod2cd1dfd2_2e34_43a6_ae60_791be44ed577.slice. 
Feb 13 19:18:17.607157 containerd[1465]: time="2025-02-13T19:18:17.607061460Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-czrzd,Uid:2cd1dfd2-2e34-43a6-ae60-791be44ed577,Namespace:calico-system,Attempt:0,}" Feb 13 19:18:17.643596 containerd[1465]: time="2025-02-13T19:18:17.643313029Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.29.1\"" Feb 13 19:18:17.739247 containerd[1465]: time="2025-02-13T19:18:17.739188421Z" level=error msg="Failed to destroy network for sandbox \"45ab9cccb488abacad3e42855cae223711aca5bb23b67bff6f07962d22e78b51\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:18:17.739642 containerd[1465]: time="2025-02-13T19:18:17.739604466Z" level=error msg="encountered an error cleaning up failed sandbox \"45ab9cccb488abacad3e42855cae223711aca5bb23b67bff6f07962d22e78b51\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:18:17.739703 containerd[1465]: time="2025-02-13T19:18:17.739683881Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-czrzd,Uid:2cd1dfd2-2e34-43a6-ae60-791be44ed577,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"45ab9cccb488abacad3e42855cae223711aca5bb23b67bff6f07962d22e78b51\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:18:17.740727 kubelet[1771]: E0213 19:18:17.740362 1771 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox 
\"45ab9cccb488abacad3e42855cae223711aca5bb23b67bff6f07962d22e78b51\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:18:17.740727 kubelet[1771]: E0213 19:18:17.740431 1771 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"45ab9cccb488abacad3e42855cae223711aca5bb23b67bff6f07962d22e78b51\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-czrzd" Feb 13 19:18:17.740727 kubelet[1771]: E0213 19:18:17.740452 1771 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"45ab9cccb488abacad3e42855cae223711aca5bb23b67bff6f07962d22e78b51\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-czrzd" Feb 13 19:18:17.740856 kubelet[1771]: E0213 19:18:17.740492 1771 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-czrzd_calico-system(2cd1dfd2-2e34-43a6-ae60-791be44ed577)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-czrzd_calico-system(2cd1dfd2-2e34-43a6-ae60-791be44ed577)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"45ab9cccb488abacad3e42855cae223711aca5bb23b67bff6f07962d22e78b51\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-czrzd" 
podUID="2cd1dfd2-2e34-43a6-ae60-791be44ed577" Feb 13 19:18:17.741215 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-45ab9cccb488abacad3e42855cae223711aca5bb23b67bff6f07962d22e78b51-shm.mount: Deactivated successfully. Feb 13 19:18:18.477730 kubelet[1771]: E0213 19:18:18.477678 1771 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 19:18:18.647959 kubelet[1771]: I0213 19:18:18.647790 1771 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="45ab9cccb488abacad3e42855cae223711aca5bb23b67bff6f07962d22e78b51" Feb 13 19:18:18.649136 containerd[1465]: time="2025-02-13T19:18:18.648399566Z" level=info msg="StopPodSandbox for \"45ab9cccb488abacad3e42855cae223711aca5bb23b67bff6f07962d22e78b51\"" Feb 13 19:18:18.649136 containerd[1465]: time="2025-02-13T19:18:18.648582600Z" level=info msg="Ensure that sandbox 45ab9cccb488abacad3e42855cae223711aca5bb23b67bff6f07962d22e78b51 in task-service has been cleanup successfully" Feb 13 19:18:18.649136 containerd[1465]: time="2025-02-13T19:18:18.649104395Z" level=info msg="TearDown network for sandbox \"45ab9cccb488abacad3e42855cae223711aca5bb23b67bff6f07962d22e78b51\" successfully" Feb 13 19:18:18.649136 containerd[1465]: time="2025-02-13T19:18:18.649121676Z" level=info msg="StopPodSandbox for \"45ab9cccb488abacad3e42855cae223711aca5bb23b67bff6f07962d22e78b51\" returns successfully" Feb 13 19:18:18.650551 systemd[1]: run-netns-cni\x2dd7358aef\x2dcb3f\x2d5f5b\x2dd8d6\x2d4415e576f99f.mount: Deactivated successfully. 
Feb 13 19:18:18.651584 containerd[1465]: time="2025-02-13T19:18:18.651166677Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-czrzd,Uid:2cd1dfd2-2e34-43a6-ae60-791be44ed577,Namespace:calico-system,Attempt:1,}" Feb 13 19:18:18.741298 containerd[1465]: time="2025-02-13T19:18:18.741160368Z" level=error msg="Failed to destroy network for sandbox \"8b6e675cc13909f1d0d29b130b38a3a6976bb1a8e0042cb4831441094ee222ba\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:18:18.742820 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-8b6e675cc13909f1d0d29b130b38a3a6976bb1a8e0042cb4831441094ee222ba-shm.mount: Deactivated successfully. Feb 13 19:18:18.744030 containerd[1465]: time="2025-02-13T19:18:18.743862846Z" level=error msg="encountered an error cleaning up failed sandbox \"8b6e675cc13909f1d0d29b130b38a3a6976bb1a8e0042cb4831441094ee222ba\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:18:18.744030 containerd[1465]: time="2025-02-13T19:18:18.743936420Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-czrzd,Uid:2cd1dfd2-2e34-43a6-ae60-791be44ed577,Namespace:calico-system,Attempt:1,} failed, error" error="failed to setup network for sandbox \"8b6e675cc13909f1d0d29b130b38a3a6976bb1a8e0042cb4831441094ee222ba\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:18:18.744666 kubelet[1771]: E0213 19:18:18.744169 1771 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox 
\"8b6e675cc13909f1d0d29b130b38a3a6976bb1a8e0042cb4831441094ee222ba\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:18:18.744666 kubelet[1771]: E0213 19:18:18.744235 1771 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8b6e675cc13909f1d0d29b130b38a3a6976bb1a8e0042cb4831441094ee222ba\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-czrzd" Feb 13 19:18:18.744666 kubelet[1771]: E0213 19:18:18.744264 1771 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8b6e675cc13909f1d0d29b130b38a3a6976bb1a8e0042cb4831441094ee222ba\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-czrzd" Feb 13 19:18:18.746136 kubelet[1771]: E0213 19:18:18.744333 1771 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-czrzd_calico-system(2cd1dfd2-2e34-43a6-ae60-791be44ed577)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-czrzd_calico-system(2cd1dfd2-2e34-43a6-ae60-791be44ed577)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"8b6e675cc13909f1d0d29b130b38a3a6976bb1a8e0042cb4831441094ee222ba\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-czrzd" 
podUID="2cd1dfd2-2e34-43a6-ae60-791be44ed577" Feb 13 19:18:19.477955 kubelet[1771]: E0213 19:18:19.477837 1771 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 19:18:19.655843 kubelet[1771]: I0213 19:18:19.655199 1771 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="8b6e675cc13909f1d0d29b130b38a3a6976bb1a8e0042cb4831441094ee222ba" Feb 13 19:18:19.655971 containerd[1465]: time="2025-02-13T19:18:19.655770818Z" level=info msg="StopPodSandbox for \"8b6e675cc13909f1d0d29b130b38a3a6976bb1a8e0042cb4831441094ee222ba\"" Feb 13 19:18:19.655971 containerd[1465]: time="2025-02-13T19:18:19.655935800Z" level=info msg="Ensure that sandbox 8b6e675cc13909f1d0d29b130b38a3a6976bb1a8e0042cb4831441094ee222ba in task-service has been cleanup successfully" Feb 13 19:18:19.657606 systemd[1]: run-netns-cni\x2d7d46401a\x2d49c2\x2d6b66\x2d6327\x2d61896b8f202f.mount: Deactivated successfully. Feb 13 19:18:19.657724 containerd[1465]: time="2025-02-13T19:18:19.657609468Z" level=info msg="TearDown network for sandbox \"8b6e675cc13909f1d0d29b130b38a3a6976bb1a8e0042cb4831441094ee222ba\" successfully" Feb 13 19:18:19.657724 containerd[1465]: time="2025-02-13T19:18:19.657635201Z" level=info msg="StopPodSandbox for \"8b6e675cc13909f1d0d29b130b38a3a6976bb1a8e0042cb4831441094ee222ba\" returns successfully" Feb 13 19:18:19.658016 containerd[1465]: time="2025-02-13T19:18:19.657979715Z" level=info msg="StopPodSandbox for \"45ab9cccb488abacad3e42855cae223711aca5bb23b67bff6f07962d22e78b51\"" Feb 13 19:18:19.658076 containerd[1465]: time="2025-02-13T19:18:19.658063929Z" level=info msg="TearDown network for sandbox \"45ab9cccb488abacad3e42855cae223711aca5bb23b67bff6f07962d22e78b51\" successfully" Feb 13 19:18:19.658099 containerd[1465]: time="2025-02-13T19:18:19.658075353Z" level=info msg="StopPodSandbox for \"45ab9cccb488abacad3e42855cae223711aca5bb23b67bff6f07962d22e78b51\" returns successfully" Feb 
13 19:18:19.659418 containerd[1465]: time="2025-02-13T19:18:19.659099836Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-czrzd,Uid:2cd1dfd2-2e34-43a6-ae60-791be44ed577,Namespace:calico-system,Attempt:2,}" Feb 13 19:18:19.718407 containerd[1465]: time="2025-02-13T19:18:19.718362273Z" level=error msg="Failed to destroy network for sandbox \"d33552daeca4b57e0c4e10298f9e2bf2e97558fd34a868c85a4ac2d8ef7ec45c\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:18:19.720392 containerd[1465]: time="2025-02-13T19:18:19.718995064Z" level=error msg="encountered an error cleaning up failed sandbox \"d33552daeca4b57e0c4e10298f9e2bf2e97558fd34a868c85a4ac2d8ef7ec45c\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:18:19.720392 containerd[1465]: time="2025-02-13T19:18:19.719057353Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-czrzd,Uid:2cd1dfd2-2e34-43a6-ae60-791be44ed577,Namespace:calico-system,Attempt:2,} failed, error" error="failed to setup network for sandbox \"d33552daeca4b57e0c4e10298f9e2bf2e97558fd34a868c85a4ac2d8ef7ec45c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:18:19.720258 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-d33552daeca4b57e0c4e10298f9e2bf2e97558fd34a868c85a4ac2d8ef7ec45c-shm.mount: Deactivated successfully. 
Feb 13 19:18:19.720551 kubelet[1771]: E0213 19:18:19.719293 1771 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d33552daeca4b57e0c4e10298f9e2bf2e97558fd34a868c85a4ac2d8ef7ec45c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:18:19.720551 kubelet[1771]: E0213 19:18:19.719357 1771 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d33552daeca4b57e0c4e10298f9e2bf2e97558fd34a868c85a4ac2d8ef7ec45c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-czrzd" Feb 13 19:18:19.720551 kubelet[1771]: E0213 19:18:19.719375 1771 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d33552daeca4b57e0c4e10298f9e2bf2e97558fd34a868c85a4ac2d8ef7ec45c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-czrzd" Feb 13 19:18:19.720649 kubelet[1771]: E0213 19:18:19.719421 1771 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-czrzd_calico-system(2cd1dfd2-2e34-43a6-ae60-791be44ed577)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-czrzd_calico-system(2cd1dfd2-2e34-43a6-ae60-791be44ed577)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"d33552daeca4b57e0c4e10298f9e2bf2e97558fd34a868c85a4ac2d8ef7ec45c\\\": plugin type=\\\"calico\\\" failed (add): stat 
/var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-czrzd" podUID="2cd1dfd2-2e34-43a6-ae60-791be44ed577" Feb 13 19:18:20.449633 systemd[1]: Created slice kubepods-besteffort-pod844c41a5_dcd7_47ee_9f1c_6fdaf5e1c2f7.slice - libcontainer container kubepods-besteffort-pod844c41a5_dcd7_47ee_9f1c_6fdaf5e1c2f7.slice. Feb 13 19:18:20.458266 kubelet[1771]: I0213 19:18:20.458196 1771 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kvhs9\" (UniqueName: \"kubernetes.io/projected/844c41a5-dcd7-47ee-9f1c-6fdaf5e1c2f7-kube-api-access-kvhs9\") pod \"nginx-deployment-7fcdb87857-m5g4l\" (UID: \"844c41a5-dcd7-47ee-9f1c-6fdaf5e1c2f7\") " pod="default/nginx-deployment-7fcdb87857-m5g4l" Feb 13 19:18:20.478719 kubelet[1771]: E0213 19:18:20.478672 1771 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 19:18:20.662653 kubelet[1771]: I0213 19:18:20.662612 1771 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d33552daeca4b57e0c4e10298f9e2bf2e97558fd34a868c85a4ac2d8ef7ec45c" Feb 13 19:18:20.663498 containerd[1465]: time="2025-02-13T19:18:20.663347550Z" level=info msg="StopPodSandbox for \"d33552daeca4b57e0c4e10298f9e2bf2e97558fd34a868c85a4ac2d8ef7ec45c\"" Feb 13 19:18:20.665314 containerd[1465]: time="2025-02-13T19:18:20.664140268Z" level=info msg="Ensure that sandbox d33552daeca4b57e0c4e10298f9e2bf2e97558fd34a868c85a4ac2d8ef7ec45c in task-service has been cleanup successfully" Feb 13 19:18:20.665314 containerd[1465]: time="2025-02-13T19:18:20.664402063Z" level=info msg="TearDown network for sandbox \"d33552daeca4b57e0c4e10298f9e2bf2e97558fd34a868c85a4ac2d8ef7ec45c\" successfully" Feb 13 19:18:20.665314 containerd[1465]: time="2025-02-13T19:18:20.664418933Z" level=info msg="StopPodSandbox for 
\"d33552daeca4b57e0c4e10298f9e2bf2e97558fd34a868c85a4ac2d8ef7ec45c\" returns successfully" Feb 13 19:18:20.666196 systemd[1]: run-netns-cni\x2d7e150c6b\x2dc255\x2dbcbb\x2d31ec\x2df25b0f00336c.mount: Deactivated successfully. Feb 13 19:18:20.666299 containerd[1465]: time="2025-02-13T19:18:20.666219839Z" level=info msg="StopPodSandbox for \"8b6e675cc13909f1d0d29b130b38a3a6976bb1a8e0042cb4831441094ee222ba\"" Feb 13 19:18:20.666335 containerd[1465]: time="2025-02-13T19:18:20.666324469Z" level=info msg="TearDown network for sandbox \"8b6e675cc13909f1d0d29b130b38a3a6976bb1a8e0042cb4831441094ee222ba\" successfully" Feb 13 19:18:20.666360 containerd[1465]: time="2025-02-13T19:18:20.666336651Z" level=info msg="StopPodSandbox for \"8b6e675cc13909f1d0d29b130b38a3a6976bb1a8e0042cb4831441094ee222ba\" returns successfully" Feb 13 19:18:20.667268 containerd[1465]: time="2025-02-13T19:18:20.667065132Z" level=info msg="StopPodSandbox for \"45ab9cccb488abacad3e42855cae223711aca5bb23b67bff6f07962d22e78b51\"" Feb 13 19:18:20.667364 containerd[1465]: time="2025-02-13T19:18:20.667346162Z" level=info msg="TearDown network for sandbox \"45ab9cccb488abacad3e42855cae223711aca5bb23b67bff6f07962d22e78b51\" successfully" Feb 13 19:18:20.667364 containerd[1465]: time="2025-02-13T19:18:20.667362632Z" level=info msg="StopPodSandbox for \"45ab9cccb488abacad3e42855cae223711aca5bb23b67bff6f07962d22e78b51\" returns successfully" Feb 13 19:18:20.667826 containerd[1465]: time="2025-02-13T19:18:20.667801828Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-czrzd,Uid:2cd1dfd2-2e34-43a6-ae60-791be44ed577,Namespace:calico-system,Attempt:3,}" Feb 13 19:18:20.744591 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2889261072.mount: Deactivated successfully. 
Feb 13 19:18:20.752622 containerd[1465]: time="2025-02-13T19:18:20.752588913Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-7fcdb87857-m5g4l,Uid:844c41a5-dcd7-47ee-9f1c-6fdaf5e1c2f7,Namespace:default,Attempt:0,}" Feb 13 19:18:20.922900 containerd[1465]: time="2025-02-13T19:18:20.922786134Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:18:20.927324 containerd[1465]: time="2025-02-13T19:18:20.927203024Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.29.1: active requests=0, bytes read=137671762" Feb 13 19:18:20.932191 containerd[1465]: time="2025-02-13T19:18:20.932145948Z" level=info msg="ImageCreate event name:\"sha256:680b8c280812d12c035ca9f0deedea7c761afe0f1cc65109ea2f96bf63801758\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:18:20.937402 containerd[1465]: time="2025-02-13T19:18:20.937362649Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:99c3917516efe1f807a0cfdf2d14b628b7c5cc6bd8a9ee5a253154f31756bea1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:18:20.939632 containerd[1465]: time="2025-02-13T19:18:20.939498242Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.29.1\" with image id \"sha256:680b8c280812d12c035ca9f0deedea7c761afe0f1cc65109ea2f96bf63801758\", repo tag \"ghcr.io/flatcar/calico/node:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/node@sha256:99c3917516efe1f807a0cfdf2d14b628b7c5cc6bd8a9ee5a253154f31756bea1\", size \"137671624\" in 3.29612049s" Feb 13 19:18:20.939632 containerd[1465]: time="2025-02-13T19:18:20.939532344Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.29.1\" returns image reference \"sha256:680b8c280812d12c035ca9f0deedea7c761afe0f1cc65109ea2f96bf63801758\"" Feb 13 19:18:20.947568 containerd[1465]: time="2025-02-13T19:18:20.947526281Z" level=info msg="CreateContainer within 
sandbox \"e772e4c36ab8e998c1e8ae5c5a1baa6470131b2e80fde55a26bb286ff8cf0576\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Feb 13 19:18:20.979630 containerd[1465]: time="2025-02-13T19:18:20.978680541Z" level=error msg="Failed to destroy network for sandbox \"456dcec9b80186a51b7ae7e8f8787279112f2ecff1ee860817cfb3aff35b48c6\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:18:20.979630 containerd[1465]: time="2025-02-13T19:18:20.979001844Z" level=error msg="encountered an error cleaning up failed sandbox \"456dcec9b80186a51b7ae7e8f8787279112f2ecff1ee860817cfb3aff35b48c6\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:18:20.979630 containerd[1465]: time="2025-02-13T19:18:20.979058787Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-7fcdb87857-m5g4l,Uid:844c41a5-dcd7-47ee-9f1c-6fdaf5e1c2f7,Namespace:default,Attempt:0,} failed, error" error="failed to setup network for sandbox \"456dcec9b80186a51b7ae7e8f8787279112f2ecff1ee860817cfb3aff35b48c6\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:18:20.979830 kubelet[1771]: E0213 19:18:20.979252 1771 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"456dcec9b80186a51b7ae7e8f8787279112f2ecff1ee860817cfb3aff35b48c6\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:18:20.979830 
kubelet[1771]: E0213 19:18:20.979330 1771 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"456dcec9b80186a51b7ae7e8f8787279112f2ecff1ee860817cfb3aff35b48c6\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="default/nginx-deployment-7fcdb87857-m5g4l" Feb 13 19:18:20.979830 kubelet[1771]: E0213 19:18:20.979364 1771 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"456dcec9b80186a51b7ae7e8f8787279112f2ecff1ee860817cfb3aff35b48c6\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="default/nginx-deployment-7fcdb87857-m5g4l" Feb 13 19:18:20.979913 kubelet[1771]: E0213 19:18:20.979406 1771 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"nginx-deployment-7fcdb87857-m5g4l_default(844c41a5-dcd7-47ee-9f1c-6fdaf5e1c2f7)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"nginx-deployment-7fcdb87857-m5g4l_default(844c41a5-dcd7-47ee-9f1c-6fdaf5e1c2f7)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"456dcec9b80186a51b7ae7e8f8787279112f2ecff1ee860817cfb3aff35b48c6\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="default/nginx-deployment-7fcdb87857-m5g4l" podUID="844c41a5-dcd7-47ee-9f1c-6fdaf5e1c2f7" Feb 13 19:18:20.981482 containerd[1465]: time="2025-02-13T19:18:20.981449402Z" level=error msg="Failed to destroy network for sandbox \"f6db5f3569b5ac69727d222ff2460b8839f139b847b8cf8d2f95ca9e56204f0c\"" error="plugin 
type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:18:20.981743 containerd[1465]: time="2025-02-13T19:18:20.981721095Z" level=error msg="encountered an error cleaning up failed sandbox \"f6db5f3569b5ac69727d222ff2460b8839f139b847b8cf8d2f95ca9e56204f0c\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:18:20.981791 containerd[1465]: time="2025-02-13T19:18:20.981768421Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-czrzd,Uid:2cd1dfd2-2e34-43a6-ae60-791be44ed577,Namespace:calico-system,Attempt:3,} failed, error" error="failed to setup network for sandbox \"f6db5f3569b5ac69727d222ff2460b8839f139b847b8cf8d2f95ca9e56204f0c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:18:20.981974 kubelet[1771]: E0213 19:18:20.981936 1771 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f6db5f3569b5ac69727d222ff2460b8839f139b847b8cf8d2f95ca9e56204f0c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:18:20.982026 kubelet[1771]: E0213 19:18:20.981991 1771 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f6db5f3569b5ac69727d222ff2460b8839f139b847b8cf8d2f95ca9e56204f0c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node 
container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-czrzd" Feb 13 19:18:20.982026 kubelet[1771]: E0213 19:18:20.982012 1771 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f6db5f3569b5ac69727d222ff2460b8839f139b847b8cf8d2f95ca9e56204f0c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-czrzd" Feb 13 19:18:20.982113 kubelet[1771]: E0213 19:18:20.982047 1771 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-czrzd_calico-system(2cd1dfd2-2e34-43a6-ae60-791be44ed577)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-czrzd_calico-system(2cd1dfd2-2e34-43a6-ae60-791be44ed577)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"f6db5f3569b5ac69727d222ff2460b8839f139b847b8cf8d2f95ca9e56204f0c\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-czrzd" podUID="2cd1dfd2-2e34-43a6-ae60-791be44ed577" Feb 13 19:18:21.016701 containerd[1465]: time="2025-02-13T19:18:21.016646556Z" level=info msg="CreateContainer within sandbox \"e772e4c36ab8e998c1e8ae5c5a1baa6470131b2e80fde55a26bb286ff8cf0576\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"b4ff28f2e178ff0411f36e96c7a4eedd0523fecb98666f5f717c56264c5f842f\"" Feb 13 19:18:21.017433 containerd[1465]: time="2025-02-13T19:18:21.017399831Z" level=info msg="StartContainer for \"b4ff28f2e178ff0411f36e96c7a4eedd0523fecb98666f5f717c56264c5f842f\"" Feb 13 19:18:21.046454 systemd[1]: Started 
cri-containerd-b4ff28f2e178ff0411f36e96c7a4eedd0523fecb98666f5f717c56264c5f842f.scope - libcontainer container b4ff28f2e178ff0411f36e96c7a4eedd0523fecb98666f5f717c56264c5f842f. Feb 13 19:18:21.072605 containerd[1465]: time="2025-02-13T19:18:21.072556216Z" level=info msg="StartContainer for \"b4ff28f2e178ff0411f36e96c7a4eedd0523fecb98666f5f717c56264c5f842f\" returns successfully" Feb 13 19:18:21.217616 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information. Feb 13 19:18:21.217728 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld . All Rights Reserved. Feb 13 19:18:21.479309 kubelet[1771]: E0213 19:18:21.479166 1771 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 19:18:21.668888 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-f6db5f3569b5ac69727d222ff2460b8839f139b847b8cf8d2f95ca9e56204f0c-shm.mount: Deactivated successfully. Feb 13 19:18:21.669029 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-456dcec9b80186a51b7ae7e8f8787279112f2ecff1ee860817cfb3aff35b48c6-shm.mount: Deactivated successfully. 
Feb 13 19:18:21.672538 kubelet[1771]: I0213 19:18:21.672511 1771 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f6db5f3569b5ac69727d222ff2460b8839f139b847b8cf8d2f95ca9e56204f0c" Feb 13 19:18:21.674257 kubelet[1771]: I0213 19:18:21.674229 1771 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="456dcec9b80186a51b7ae7e8f8787279112f2ecff1ee860817cfb3aff35b48c6" Feb 13 19:18:21.674334 containerd[1465]: time="2025-02-13T19:18:21.674195785Z" level=info msg="StopPodSandbox for \"f6db5f3569b5ac69727d222ff2460b8839f139b847b8cf8d2f95ca9e56204f0c\"" Feb 13 19:18:21.674581 containerd[1465]: time="2025-02-13T19:18:21.674547664Z" level=info msg="Ensure that sandbox f6db5f3569b5ac69727d222ff2460b8839f139b847b8cf8d2f95ca9e56204f0c in task-service has been cleanup successfully" Feb 13 19:18:21.674733 containerd[1465]: time="2025-02-13T19:18:21.674655675Z" level=info msg="StopPodSandbox for \"456dcec9b80186a51b7ae7e8f8787279112f2ecff1ee860817cfb3aff35b48c6\"" Feb 13 19:18:21.674872 containerd[1465]: time="2025-02-13T19:18:21.674831995Z" level=info msg="TearDown network for sandbox \"f6db5f3569b5ac69727d222ff2460b8839f139b847b8cf8d2f95ca9e56204f0c\" successfully" Feb 13 19:18:21.674872 containerd[1465]: time="2025-02-13T19:18:21.674851106Z" level=info msg="StopPodSandbox for \"f6db5f3569b5ac69727d222ff2460b8839f139b847b8cf8d2f95ca9e56204f0c\" returns successfully" Feb 13 19:18:21.674932 containerd[1465]: time="2025-02-13T19:18:21.674868253Z" level=info msg="Ensure that sandbox 456dcec9b80186a51b7ae7e8f8787279112f2ecff1ee860817cfb3aff35b48c6 in task-service has been cleanup successfully" Feb 13 19:18:21.675360 containerd[1465]: time="2025-02-13T19:18:21.675145613Z" level=info msg="TearDown network for sandbox \"456dcec9b80186a51b7ae7e8f8787279112f2ecff1ee860817cfb3aff35b48c6\" successfully" Feb 13 19:18:21.675360 containerd[1465]: time="2025-02-13T19:18:21.675344088Z" level=info msg="StopPodSandbox for 
\"456dcec9b80186a51b7ae7e8f8787279112f2ecff1ee860817cfb3aff35b48c6\" returns successfully" Feb 13 19:18:21.675444 containerd[1465]: time="2025-02-13T19:18:21.675248617Z" level=info msg="StopPodSandbox for \"d33552daeca4b57e0c4e10298f9e2bf2e97558fd34a868c85a4ac2d8ef7ec45c\"" Feb 13 19:18:21.675467 containerd[1465]: time="2025-02-13T19:18:21.675441923Z" level=info msg="TearDown network for sandbox \"d33552daeca4b57e0c4e10298f9e2bf2e97558fd34a868c85a4ac2d8ef7ec45c\" successfully" Feb 13 19:18:21.675467 containerd[1465]: time="2025-02-13T19:18:21.675451939Z" level=info msg="StopPodSandbox for \"d33552daeca4b57e0c4e10298f9e2bf2e97558fd34a868c85a4ac2d8ef7ec45c\" returns successfully" Feb 13 19:18:21.677163 containerd[1465]: time="2025-02-13T19:18:21.676633214Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-7fcdb87857-m5g4l,Uid:844c41a5-dcd7-47ee-9f1c-6fdaf5e1c2f7,Namespace:default,Attempt:1,}" Feb 13 19:18:21.677163 containerd[1465]: time="2025-02-13T19:18:21.676658975Z" level=info msg="StopPodSandbox for \"8b6e675cc13909f1d0d29b130b38a3a6976bb1a8e0042cb4831441094ee222ba\"" Feb 13 19:18:21.677163 containerd[1465]: time="2025-02-13T19:18:21.676731130Z" level=info msg="TearDown network for sandbox \"8b6e675cc13909f1d0d29b130b38a3a6976bb1a8e0042cb4831441094ee222ba\" successfully" Feb 13 19:18:21.677163 containerd[1465]: time="2025-02-13T19:18:21.676740184Z" level=info msg="StopPodSandbox for \"8b6e675cc13909f1d0d29b130b38a3a6976bb1a8e0042cb4831441094ee222ba\" returns successfully" Feb 13 19:18:21.676823 systemd[1]: run-netns-cni\x2d23145328\x2d2dd0\x2d2b49\x2da8b7\x2d0de61dbd2073.mount: Deactivated successfully. 
Feb 13 19:18:21.677642 containerd[1465]: time="2025-02-13T19:18:21.677261131Z" level=info msg="StopPodSandbox for \"45ab9cccb488abacad3e42855cae223711aca5bb23b67bff6f07962d22e78b51\"" Feb 13 19:18:21.677642 containerd[1465]: time="2025-02-13T19:18:21.677356682Z" level=info msg="TearDown network for sandbox \"45ab9cccb488abacad3e42855cae223711aca5bb23b67bff6f07962d22e78b51\" successfully" Feb 13 19:18:21.677642 containerd[1465]: time="2025-02-13T19:18:21.677369182Z" level=info msg="StopPodSandbox for \"45ab9cccb488abacad3e42855cae223711aca5bb23b67bff6f07962d22e78b51\" returns successfully" Feb 13 19:18:21.676910 systemd[1]: run-netns-cni\x2d5df86cfe\x2dd6d2\x2d7c74\x2ddb74\x2d4c38a32d0aac.mount: Deactivated successfully. Feb 13 19:18:21.678632 containerd[1465]: time="2025-02-13T19:18:21.678600897Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-czrzd,Uid:2cd1dfd2-2e34-43a6-ae60-791be44ed577,Namespace:calico-system,Attempt:4,}" Feb 13 19:18:21.850214 systemd-networkd[1396]: calidea84529f11: Link UP Feb 13 19:18:21.850442 systemd-networkd[1396]: calidea84529f11: Gained carrier Feb 13 19:18:21.854969 kubelet[1771]: I0213 19:18:21.854704 1771 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-5crj6" podStartSLOduration=4.45690796 podStartE2EDuration="13.854684059s" podCreationTimestamp="2025-02-13 19:18:08 +0000 UTC" firstStartedPulling="2025-02-13 19:18:11.542569759 +0000 UTC m=+3.920446755" lastFinishedPulling="2025-02-13 19:18:20.940345899 +0000 UTC m=+13.318222854" observedRunningTime="2025-02-13 19:18:21.683715095 +0000 UTC m=+14.061592130" watchObservedRunningTime="2025-02-13 19:18:21.854684059 +0000 UTC m=+14.232561054" Feb 13 19:18:21.856084 containerd[1465]: 2025-02-13 19:18:21.712 [INFO][2591] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Feb 13 19:18:21.856084 containerd[1465]: 2025-02-13 19:18:21.734 [INFO][2591] cni-plugin/plugin.go 325: Calico CNI found existing 
endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {10.0.0.114-k8s-nginx--deployment--7fcdb87857--m5g4l-eth0 nginx-deployment-7fcdb87857- default 844c41a5-dcd7-47ee-9f1c-6fdaf5e1c2f7 999 0 2025-02-13 19:18:20 +0000 UTC map[app:nginx pod-template-hash:7fcdb87857 projectcalico.org/namespace:default projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:default] map[] [] [] []} {k8s 10.0.0.114 nginx-deployment-7fcdb87857-m5g4l eth0 default [] [] [kns.default ksa.default.default] calidea84529f11 [] []}} ContainerID="334b1543d152250d796c2fc815f67d5805a4fce52d2ac9fe086cea430205c0a5" Namespace="default" Pod="nginx-deployment-7fcdb87857-m5g4l" WorkloadEndpoint="10.0.0.114-k8s-nginx--deployment--7fcdb87857--m5g4l-" Feb 13 19:18:21.856084 containerd[1465]: 2025-02-13 19:18:21.734 [INFO][2591] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="334b1543d152250d796c2fc815f67d5805a4fce52d2ac9fe086cea430205c0a5" Namespace="default" Pod="nginx-deployment-7fcdb87857-m5g4l" WorkloadEndpoint="10.0.0.114-k8s-nginx--deployment--7fcdb87857--m5g4l-eth0" Feb 13 19:18:21.856084 containerd[1465]: 2025-02-13 19:18:21.800 [INFO][2628] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="334b1543d152250d796c2fc815f67d5805a4fce52d2ac9fe086cea430205c0a5" HandleID="k8s-pod-network.334b1543d152250d796c2fc815f67d5805a4fce52d2ac9fe086cea430205c0a5" Workload="10.0.0.114-k8s-nginx--deployment--7fcdb87857--m5g4l-eth0" Feb 13 19:18:21.856084 containerd[1465]: 2025-02-13 19:18:21.815 [INFO][2628] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="334b1543d152250d796c2fc815f67d5805a4fce52d2ac9fe086cea430205c0a5" HandleID="k8s-pod-network.334b1543d152250d796c2fc815f67d5805a4fce52d2ac9fe086cea430205c0a5" Workload="10.0.0.114-k8s-nginx--deployment--7fcdb87857--m5g4l-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x4000502a90), Attrs:map[string]string{"namespace":"default", "node":"10.0.0.114", 
"pod":"nginx-deployment-7fcdb87857-m5g4l", "timestamp":"2025-02-13 19:18:21.800680383 +0000 UTC"}, Hostname:"10.0.0.114", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Feb 13 19:18:21.856084 containerd[1465]: 2025-02-13 19:18:21.815 [INFO][2628] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Feb 13 19:18:21.856084 containerd[1465]: 2025-02-13 19:18:21.815 [INFO][2628] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Feb 13 19:18:21.856084 containerd[1465]: 2025-02-13 19:18:21.815 [INFO][2628] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host '10.0.0.114' Feb 13 19:18:21.856084 containerd[1465]: 2025-02-13 19:18:21.817 [INFO][2628] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.334b1543d152250d796c2fc815f67d5805a4fce52d2ac9fe086cea430205c0a5" host="10.0.0.114" Feb 13 19:18:21.856084 containerd[1465]: 2025-02-13 19:18:21.821 [INFO][2628] ipam/ipam.go 372: Looking up existing affinities for host host="10.0.0.114" Feb 13 19:18:21.856084 containerd[1465]: 2025-02-13 19:18:21.826 [INFO][2628] ipam/ipam.go 489: Trying affinity for 192.168.101.128/26 host="10.0.0.114" Feb 13 19:18:21.856084 containerd[1465]: 2025-02-13 19:18:21.828 [INFO][2628] ipam/ipam.go 155: Attempting to load block cidr=192.168.101.128/26 host="10.0.0.114" Feb 13 19:18:21.856084 containerd[1465]: 2025-02-13 19:18:21.829 [INFO][2628] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.101.128/26 host="10.0.0.114" Feb 13 19:18:21.856084 containerd[1465]: 2025-02-13 19:18:21.829 [INFO][2628] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.101.128/26 handle="k8s-pod-network.334b1543d152250d796c2fc815f67d5805a4fce52d2ac9fe086cea430205c0a5" host="10.0.0.114" Feb 13 19:18:21.856084 containerd[1465]: 2025-02-13 19:18:21.833 [INFO][2628] 
ipam/ipam.go 1685: Creating new handle: k8s-pod-network.334b1543d152250d796c2fc815f67d5805a4fce52d2ac9fe086cea430205c0a5 Feb 13 19:18:21.856084 containerd[1465]: 2025-02-13 19:18:21.837 [INFO][2628] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.101.128/26 handle="k8s-pod-network.334b1543d152250d796c2fc815f67d5805a4fce52d2ac9fe086cea430205c0a5" host="10.0.0.114" Feb 13 19:18:21.856084 containerd[1465]: 2025-02-13 19:18:21.842 [INFO][2628] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.101.129/26] block=192.168.101.128/26 handle="k8s-pod-network.334b1543d152250d796c2fc815f67d5805a4fce52d2ac9fe086cea430205c0a5" host="10.0.0.114" Feb 13 19:18:21.856084 containerd[1465]: 2025-02-13 19:18:21.842 [INFO][2628] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.101.129/26] handle="k8s-pod-network.334b1543d152250d796c2fc815f67d5805a4fce52d2ac9fe086cea430205c0a5" host="10.0.0.114" Feb 13 19:18:21.856084 containerd[1465]: 2025-02-13 19:18:21.842 [INFO][2628] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Feb 13 19:18:21.856084 containerd[1465]: 2025-02-13 19:18:21.842 [INFO][2628] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.101.129/26] IPv6=[] ContainerID="334b1543d152250d796c2fc815f67d5805a4fce52d2ac9fe086cea430205c0a5" HandleID="k8s-pod-network.334b1543d152250d796c2fc815f67d5805a4fce52d2ac9fe086cea430205c0a5" Workload="10.0.0.114-k8s-nginx--deployment--7fcdb87857--m5g4l-eth0" Feb 13 19:18:21.856574 containerd[1465]: 2025-02-13 19:18:21.844 [INFO][2591] cni-plugin/k8s.go 386: Populated endpoint ContainerID="334b1543d152250d796c2fc815f67d5805a4fce52d2ac9fe086cea430205c0a5" Namespace="default" Pod="nginx-deployment-7fcdb87857-m5g4l" WorkloadEndpoint="10.0.0.114-k8s-nginx--deployment--7fcdb87857--m5g4l-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"10.0.0.114-k8s-nginx--deployment--7fcdb87857--m5g4l-eth0", GenerateName:"nginx-deployment-7fcdb87857-", Namespace:"default", SelfLink:"", UID:"844c41a5-dcd7-47ee-9f1c-6fdaf5e1c2f7", ResourceVersion:"999", Generation:0, CreationTimestamp:time.Date(2025, time.February, 13, 19, 18, 20, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app":"nginx", "pod-template-hash":"7fcdb87857", "projectcalico.org/namespace":"default", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"default"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"10.0.0.114", ContainerID:"", Pod:"nginx-deployment-7fcdb87857-m5g4l", Endpoint:"eth0", ServiceAccountName:"default", IPNetworks:[]string{"192.168.101.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.default", "ksa.default.default"}, InterfaceName:"calidea84529f11", MAC:"", 
Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Feb 13 19:18:21.856574 containerd[1465]: 2025-02-13 19:18:21.844 [INFO][2591] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.101.129/32] ContainerID="334b1543d152250d796c2fc815f67d5805a4fce52d2ac9fe086cea430205c0a5" Namespace="default" Pod="nginx-deployment-7fcdb87857-m5g4l" WorkloadEndpoint="10.0.0.114-k8s-nginx--deployment--7fcdb87857--m5g4l-eth0" Feb 13 19:18:21.856574 containerd[1465]: 2025-02-13 19:18:21.844 [INFO][2591] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calidea84529f11 ContainerID="334b1543d152250d796c2fc815f67d5805a4fce52d2ac9fe086cea430205c0a5" Namespace="default" Pod="nginx-deployment-7fcdb87857-m5g4l" WorkloadEndpoint="10.0.0.114-k8s-nginx--deployment--7fcdb87857--m5g4l-eth0" Feb 13 19:18:21.856574 containerd[1465]: 2025-02-13 19:18:21.849 [INFO][2591] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="334b1543d152250d796c2fc815f67d5805a4fce52d2ac9fe086cea430205c0a5" Namespace="default" Pod="nginx-deployment-7fcdb87857-m5g4l" WorkloadEndpoint="10.0.0.114-k8s-nginx--deployment--7fcdb87857--m5g4l-eth0" Feb 13 19:18:21.856574 containerd[1465]: 2025-02-13 19:18:21.849 [INFO][2591] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="334b1543d152250d796c2fc815f67d5805a4fce52d2ac9fe086cea430205c0a5" Namespace="default" Pod="nginx-deployment-7fcdb87857-m5g4l" WorkloadEndpoint="10.0.0.114-k8s-nginx--deployment--7fcdb87857--m5g4l-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"10.0.0.114-k8s-nginx--deployment--7fcdb87857--m5g4l-eth0", GenerateName:"nginx-deployment-7fcdb87857-", Namespace:"default", SelfLink:"", UID:"844c41a5-dcd7-47ee-9f1c-6fdaf5e1c2f7", ResourceVersion:"999", Generation:0, CreationTimestamp:time.Date(2025, time.February, 13, 19, 18, 20, 0, time.Local), 
DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app":"nginx", "pod-template-hash":"7fcdb87857", "projectcalico.org/namespace":"default", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"default"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"10.0.0.114", ContainerID:"334b1543d152250d796c2fc815f67d5805a4fce52d2ac9fe086cea430205c0a5", Pod:"nginx-deployment-7fcdb87857-m5g4l", Endpoint:"eth0", ServiceAccountName:"default", IPNetworks:[]string{"192.168.101.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.default", "ksa.default.default"}, InterfaceName:"calidea84529f11", MAC:"22:80:3b:44:e0:fe", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Feb 13 19:18:21.856574 containerd[1465]: 2025-02-13 19:18:21.854 [INFO][2591] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="334b1543d152250d796c2fc815f67d5805a4fce52d2ac9fe086cea430205c0a5" Namespace="default" Pod="nginx-deployment-7fcdb87857-m5g4l" WorkloadEndpoint="10.0.0.114-k8s-nginx--deployment--7fcdb87857--m5g4l-eth0" Feb 13 19:18:21.871759 containerd[1465]: time="2025-02-13T19:18:21.871633721Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 13 19:18:21.871759 containerd[1465]: time="2025-02-13T19:18:21.871686886Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 13 19:18:21.871759 containerd[1465]: time="2025-02-13T19:18:21.871702711Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 19:18:21.872071 containerd[1465]: time="2025-02-13T19:18:21.872011641Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 19:18:21.888471 systemd[1]: Started cri-containerd-334b1543d152250d796c2fc815f67d5805a4fce52d2ac9fe086cea430205c0a5.scope - libcontainer container 334b1543d152250d796c2fc815f67d5805a4fce52d2ac9fe086cea430205c0a5. Feb 13 19:18:21.897776 systemd-resolved[1325]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Feb 13 19:18:21.912983 containerd[1465]: time="2025-02-13T19:18:21.912947735Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-7fcdb87857-m5g4l,Uid:844c41a5-dcd7-47ee-9f1c-6fdaf5e1c2f7,Namespace:default,Attempt:1,} returns sandbox id \"334b1543d152250d796c2fc815f67d5805a4fce52d2ac9fe086cea430205c0a5\"" Feb 13 19:18:21.914695 containerd[1465]: time="2025-02-13T19:18:21.914655165Z" level=info msg="PullImage \"ghcr.io/flatcar/nginx:latest\"" Feb 13 19:18:21.949749 systemd-networkd[1396]: cali076c8a9470c: Link UP Feb 13 19:18:21.949910 systemd-networkd[1396]: cali076c8a9470c: Gained carrier Feb 13 19:18:21.961021 containerd[1465]: 2025-02-13 19:18:21.711 [INFO][2598] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Feb 13 19:18:21.961021 containerd[1465]: 2025-02-13 19:18:21.734 [INFO][2598] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {10.0.0.114-k8s-csi--node--driver--czrzd-eth0 csi-node-driver- calico-system 2cd1dfd2-2e34-43a6-ae60-791be44ed577 843 0 2025-02-13 19:18:08 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:84cddb44f k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] 
map[] [] [] []} {k8s 10.0.0.114 csi-node-driver-czrzd eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] cali076c8a9470c [] []}} ContainerID="31cca3f242a2a3ae6f69e07b446aefb438de3ea09a6286e2e05097047721ff55" Namespace="calico-system" Pod="csi-node-driver-czrzd" WorkloadEndpoint="10.0.0.114-k8s-csi--node--driver--czrzd-" Feb 13 19:18:21.961021 containerd[1465]: 2025-02-13 19:18:21.734 [INFO][2598] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="31cca3f242a2a3ae6f69e07b446aefb438de3ea09a6286e2e05097047721ff55" Namespace="calico-system" Pod="csi-node-driver-czrzd" WorkloadEndpoint="10.0.0.114-k8s-csi--node--driver--czrzd-eth0" Feb 13 19:18:21.961021 containerd[1465]: 2025-02-13 19:18:21.800 [INFO][2627] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="31cca3f242a2a3ae6f69e07b446aefb438de3ea09a6286e2e05097047721ff55" HandleID="k8s-pod-network.31cca3f242a2a3ae6f69e07b446aefb438de3ea09a6286e2e05097047721ff55" Workload="10.0.0.114-k8s-csi--node--driver--czrzd-eth0" Feb 13 19:18:21.961021 containerd[1465]: 2025-02-13 19:18:21.815 [INFO][2627] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="31cca3f242a2a3ae6f69e07b446aefb438de3ea09a6286e2e05097047721ff55" HandleID="k8s-pod-network.31cca3f242a2a3ae6f69e07b446aefb438de3ea09a6286e2e05097047721ff55" Workload="10.0.0.114-k8s-csi--node--driver--czrzd-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x4000287480), Attrs:map[string]string{"namespace":"calico-system", "node":"10.0.0.114", "pod":"csi-node-driver-czrzd", "timestamp":"2025-02-13 19:18:21.800681545 +0000 UTC"}, Hostname:"10.0.0.114", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Feb 13 19:18:21.961021 containerd[1465]: 2025-02-13 19:18:21.815 [INFO][2627] ipam/ipam_plugin.go 353: About to acquire 
host-wide IPAM lock. Feb 13 19:18:21.961021 containerd[1465]: 2025-02-13 19:18:21.842 [INFO][2627] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Feb 13 19:18:21.961021 containerd[1465]: 2025-02-13 19:18:21.842 [INFO][2627] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host '10.0.0.114' Feb 13 19:18:21.961021 containerd[1465]: 2025-02-13 19:18:21.918 [INFO][2627] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.31cca3f242a2a3ae6f69e07b446aefb438de3ea09a6286e2e05097047721ff55" host="10.0.0.114" Feb 13 19:18:21.961021 containerd[1465]: 2025-02-13 19:18:21.922 [INFO][2627] ipam/ipam.go 372: Looking up existing affinities for host host="10.0.0.114" Feb 13 19:18:21.961021 containerd[1465]: 2025-02-13 19:18:21.926 [INFO][2627] ipam/ipam.go 489: Trying affinity for 192.168.101.128/26 host="10.0.0.114" Feb 13 19:18:21.961021 containerd[1465]: 2025-02-13 19:18:21.928 [INFO][2627] ipam/ipam.go 155: Attempting to load block cidr=192.168.101.128/26 host="10.0.0.114" Feb 13 19:18:21.961021 containerd[1465]: 2025-02-13 19:18:21.931 [INFO][2627] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.101.128/26 host="10.0.0.114" Feb 13 19:18:21.961021 containerd[1465]: 2025-02-13 19:18:21.931 [INFO][2627] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.101.128/26 handle="k8s-pod-network.31cca3f242a2a3ae6f69e07b446aefb438de3ea09a6286e2e05097047721ff55" host="10.0.0.114" Feb 13 19:18:21.961021 containerd[1465]: 2025-02-13 19:18:21.932 [INFO][2627] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.31cca3f242a2a3ae6f69e07b446aefb438de3ea09a6286e2e05097047721ff55 Feb 13 19:18:21.961021 containerd[1465]: 2025-02-13 19:18:21.936 [INFO][2627] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.101.128/26 handle="k8s-pod-network.31cca3f242a2a3ae6f69e07b446aefb438de3ea09a6286e2e05097047721ff55" host="10.0.0.114" Feb 13 19:18:21.961021 containerd[1465]: 2025-02-13 
19:18:21.946 [INFO][2627] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.101.130/26] block=192.168.101.128/26 handle="k8s-pod-network.31cca3f242a2a3ae6f69e07b446aefb438de3ea09a6286e2e05097047721ff55" host="10.0.0.114" Feb 13 19:18:21.961021 containerd[1465]: 2025-02-13 19:18:21.947 [INFO][2627] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.101.130/26] handle="k8s-pod-network.31cca3f242a2a3ae6f69e07b446aefb438de3ea09a6286e2e05097047721ff55" host="10.0.0.114" Feb 13 19:18:21.961021 containerd[1465]: 2025-02-13 19:18:21.947 [INFO][2627] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Feb 13 19:18:21.961021 containerd[1465]: 2025-02-13 19:18:21.947 [INFO][2627] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.101.130/26] IPv6=[] ContainerID="31cca3f242a2a3ae6f69e07b446aefb438de3ea09a6286e2e05097047721ff55" HandleID="k8s-pod-network.31cca3f242a2a3ae6f69e07b446aefb438de3ea09a6286e2e05097047721ff55" Workload="10.0.0.114-k8s-csi--node--driver--czrzd-eth0" Feb 13 19:18:21.961669 containerd[1465]: 2025-02-13 19:18:21.948 [INFO][2598] cni-plugin/k8s.go 386: Populated endpoint ContainerID="31cca3f242a2a3ae6f69e07b446aefb438de3ea09a6286e2e05097047721ff55" Namespace="calico-system" Pod="csi-node-driver-czrzd" WorkloadEndpoint="10.0.0.114-k8s-csi--node--driver--czrzd-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"10.0.0.114-k8s-csi--node--driver--czrzd-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"2cd1dfd2-2e34-43a6-ae60-791be44ed577", ResourceVersion:"843", Generation:0, CreationTimestamp:time.Date(2025, time.February, 13, 19, 18, 8, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"84cddb44f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", 
"pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"10.0.0.114", ContainerID:"", Pod:"csi-node-driver-czrzd", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.101.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali076c8a9470c", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Feb 13 19:18:21.961669 containerd[1465]: 2025-02-13 19:18:21.948 [INFO][2598] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.101.130/32] ContainerID="31cca3f242a2a3ae6f69e07b446aefb438de3ea09a6286e2e05097047721ff55" Namespace="calico-system" Pod="csi-node-driver-czrzd" WorkloadEndpoint="10.0.0.114-k8s-csi--node--driver--czrzd-eth0" Feb 13 19:18:21.961669 containerd[1465]: 2025-02-13 19:18:21.948 [INFO][2598] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali076c8a9470c ContainerID="31cca3f242a2a3ae6f69e07b446aefb438de3ea09a6286e2e05097047721ff55" Namespace="calico-system" Pod="csi-node-driver-czrzd" WorkloadEndpoint="10.0.0.114-k8s-csi--node--driver--czrzd-eth0" Feb 13 19:18:21.961669 containerd[1465]: 2025-02-13 19:18:21.950 [INFO][2598] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="31cca3f242a2a3ae6f69e07b446aefb438de3ea09a6286e2e05097047721ff55" Namespace="calico-system" Pod="csi-node-driver-czrzd" WorkloadEndpoint="10.0.0.114-k8s-csi--node--driver--czrzd-eth0" Feb 13 19:18:21.961669 containerd[1465]: 2025-02-13 19:18:21.950 [INFO][2598] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to 
endpoint ContainerID="31cca3f242a2a3ae6f69e07b446aefb438de3ea09a6286e2e05097047721ff55" Namespace="calico-system" Pod="csi-node-driver-czrzd" WorkloadEndpoint="10.0.0.114-k8s-csi--node--driver--czrzd-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"10.0.0.114-k8s-csi--node--driver--czrzd-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"2cd1dfd2-2e34-43a6-ae60-791be44ed577", ResourceVersion:"843", Generation:0, CreationTimestamp:time.Date(2025, time.February, 13, 19, 18, 8, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"84cddb44f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"10.0.0.114", ContainerID:"31cca3f242a2a3ae6f69e07b446aefb438de3ea09a6286e2e05097047721ff55", Pod:"csi-node-driver-czrzd", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.101.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali076c8a9470c", MAC:"1e:6e:84:84:8b:d9", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Feb 13 19:18:21.961669 containerd[1465]: 2025-02-13 19:18:21.959 [INFO][2598] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="31cca3f242a2a3ae6f69e07b446aefb438de3ea09a6286e2e05097047721ff55" Namespace="calico-system" 
Pod="csi-node-driver-czrzd" WorkloadEndpoint="10.0.0.114-k8s-csi--node--driver--czrzd-eth0" Feb 13 19:18:21.976316 containerd[1465]: time="2025-02-13T19:18:21.976210547Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 13 19:18:21.976483 containerd[1465]: time="2025-02-13T19:18:21.976326130Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 13 19:18:21.976483 containerd[1465]: time="2025-02-13T19:18:21.976343117Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 19:18:21.976959 containerd[1465]: time="2025-02-13T19:18:21.976715709Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 19:18:21.992445 systemd[1]: Started cri-containerd-31cca3f242a2a3ae6f69e07b446aefb438de3ea09a6286e2e05097047721ff55.scope - libcontainer container 31cca3f242a2a3ae6f69e07b446aefb438de3ea09a6286e2e05097047721ff55. 
Feb 13 19:18:22.001194 systemd-resolved[1325]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Feb 13 19:18:22.010757 containerd[1465]: time="2025-02-13T19:18:22.010719932Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-czrzd,Uid:2cd1dfd2-2e34-43a6-ae60-791be44ed577,Namespace:calico-system,Attempt:4,} returns sandbox id \"31cca3f242a2a3ae6f69e07b446aefb438de3ea09a6286e2e05097047721ff55\"" Feb 13 19:18:22.480231 kubelet[1771]: E0213 19:18:22.480162 1771 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 19:18:22.593306 kernel: bpftool[2876]: memfd_create() called without MFD_EXEC or MFD_NOEXEC_SEAL set Feb 13 19:18:22.746343 systemd-networkd[1396]: vxlan.calico: Link UP Feb 13 19:18:22.746348 systemd-networkd[1396]: vxlan.calico: Gained carrier Feb 13 19:18:23.138500 systemd-networkd[1396]: calidea84529f11: Gained IPv6LL Feb 13 19:18:23.138741 systemd-networkd[1396]: cali076c8a9470c: Gained IPv6LL Feb 13 19:18:23.480760 kubelet[1771]: E0213 19:18:23.480520 1771 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 19:18:23.841484 systemd-networkd[1396]: vxlan.calico: Gained IPv6LL Feb 13 19:18:23.954466 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2167915866.mount: Deactivated successfully. 
Feb 13 19:18:24.484189 kubelet[1771]: E0213 19:18:24.484065 1771 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 19:18:24.852659 containerd[1465]: time="2025-02-13T19:18:24.852608300Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/nginx:latest\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:18:24.853799 containerd[1465]: time="2025-02-13T19:18:24.853343482Z" level=info msg="stop pulling image ghcr.io/flatcar/nginx:latest: active requests=0, bytes read=69693086" Feb 13 19:18:24.854603 containerd[1465]: time="2025-02-13T19:18:24.854555291Z" level=info msg="ImageCreate event name:\"sha256:dfbfd726d38a926d7664f4738c165e3d91dd9fc1d33959787a30835bf39a461b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:18:24.857465 containerd[1465]: time="2025-02-13T19:18:24.857401039Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/nginx@sha256:d9bc3da999da9f147f1277c7b18292486847e8f39f95fcf81d914d0c22815faf\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:18:24.858513 containerd[1465]: time="2025-02-13T19:18:24.858391973Z" level=info msg="Pulled image \"ghcr.io/flatcar/nginx:latest\" with image id \"sha256:dfbfd726d38a926d7664f4738c165e3d91dd9fc1d33959787a30835bf39a461b\", repo tag \"ghcr.io/flatcar/nginx:latest\", repo digest \"ghcr.io/flatcar/nginx@sha256:d9bc3da999da9f147f1277c7b18292486847e8f39f95fcf81d914d0c22815faf\", size \"69692964\" in 2.943699432s" Feb 13 19:18:24.858513 containerd[1465]: time="2025-02-13T19:18:24.858424447Z" level=info msg="PullImage \"ghcr.io/flatcar/nginx:latest\" returns image reference \"sha256:dfbfd726d38a926d7664f4738c165e3d91dd9fc1d33959787a30835bf39a461b\"" Feb 13 19:18:24.860286 containerd[1465]: time="2025-02-13T19:18:24.860034440Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.29.1\"" Feb 13 19:18:24.861111 containerd[1465]: time="2025-02-13T19:18:24.861074226Z" 
level=info msg="CreateContainer within sandbox \"334b1543d152250d796c2fc815f67d5805a4fce52d2ac9fe086cea430205c0a5\" for container &ContainerMetadata{Name:nginx,Attempt:0,}" Feb 13 19:18:24.877455 containerd[1465]: time="2025-02-13T19:18:24.877405521Z" level=info msg="CreateContainer within sandbox \"334b1543d152250d796c2fc815f67d5805a4fce52d2ac9fe086cea430205c0a5\" for &ContainerMetadata{Name:nginx,Attempt:0,} returns container id \"07c80eb9b0d45680de4981a17d024d9a8f83ba6768eaca75a1e5181b26719ca4\"" Feb 13 19:18:24.878123 containerd[1465]: time="2025-02-13T19:18:24.877898725Z" level=info msg="StartContainer for \"07c80eb9b0d45680de4981a17d024d9a8f83ba6768eaca75a1e5181b26719ca4\"" Feb 13 19:18:24.968455 systemd[1]: Started cri-containerd-07c80eb9b0d45680de4981a17d024d9a8f83ba6768eaca75a1e5181b26719ca4.scope - libcontainer container 07c80eb9b0d45680de4981a17d024d9a8f83ba6768eaca75a1e5181b26719ca4. Feb 13 19:18:25.008307 containerd[1465]: time="2025-02-13T19:18:25.007905675Z" level=info msg="StartContainer for \"07c80eb9b0d45680de4981a17d024d9a8f83ba6768eaca75a1e5181b26719ca4\" returns successfully" Feb 13 19:18:25.484610 kubelet[1771]: E0213 19:18:25.484561 1771 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 19:18:25.707331 kubelet[1771]: I0213 19:18:25.707257 1771 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="default/nginx-deployment-7fcdb87857-m5g4l" podStartSLOduration=2.7620473800000003 podStartE2EDuration="5.707242001s" podCreationTimestamp="2025-02-13 19:18:20 +0000 UTC" firstStartedPulling="2025-02-13 19:18:21.914153649 +0000 UTC m=+14.292030644" lastFinishedPulling="2025-02-13 19:18:24.85934827 +0000 UTC m=+17.237225265" observedRunningTime="2025-02-13 19:18:25.707103112 +0000 UTC m=+18.084980107" watchObservedRunningTime="2025-02-13 19:18:25.707242001 +0000 UTC m=+18.085118956" Feb 13 19:18:25.900765 containerd[1465]: time="2025-02-13T19:18:25.900714565Z" 
level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:18:25.901727 containerd[1465]: time="2025-02-13T19:18:25.901509265Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.29.1: active requests=0, bytes read=7464730" Feb 13 19:18:25.902558 containerd[1465]: time="2025-02-13T19:18:25.902515322Z" level=info msg="ImageCreate event name:\"sha256:3c11734f3001b7070e7e2b5e64938f89891cf8c44f8997e86aa23c5d5bf70163\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:18:25.904812 containerd[1465]: time="2025-02-13T19:18:25.904778909Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi@sha256:eaa7e01fb16b603c155a67b81f16992281db7f831684c7b2081d3434587a7ff3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:18:25.905887 containerd[1465]: time="2025-02-13T19:18:25.905848865Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/csi:v3.29.1\" with image id \"sha256:3c11734f3001b7070e7e2b5e64938f89891cf8c44f8997e86aa23c5d5bf70163\", repo tag \"ghcr.io/flatcar/calico/csi:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/csi@sha256:eaa7e01fb16b603c155a67b81f16992281db7f831684c7b2081d3434587a7ff3\", size \"8834384\" in 1.045783273s" Feb 13 19:18:25.905887 containerd[1465]: time="2025-02-13T19:18:25.905886220Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.29.1\" returns image reference \"sha256:3c11734f3001b7070e7e2b5e64938f89891cf8c44f8997e86aa23c5d5bf70163\"" Feb 13 19:18:25.910179 containerd[1465]: time="2025-02-13T19:18:25.910138419Z" level=info msg="CreateContainer within sandbox \"31cca3f242a2a3ae6f69e07b446aefb438de3ea09a6286e2e05097047721ff55\" for container &ContainerMetadata{Name:calico-csi,Attempt:0,}" Feb 13 19:18:25.921657 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3873279452.mount: Deactivated successfully. 
Feb 13 19:18:25.924874 containerd[1465]: time="2025-02-13T19:18:25.924824371Z" level=info msg="CreateContainer within sandbox \"31cca3f242a2a3ae6f69e07b446aefb438de3ea09a6286e2e05097047721ff55\" for &ContainerMetadata{Name:calico-csi,Attempt:0,} returns container id \"8a02e9f25dc34b89cfb3c68c76cac55e6729857e7caea49e1939bf98ac9a0577\"" Feb 13 19:18:25.925591 containerd[1465]: time="2025-02-13T19:18:25.925373883Z" level=info msg="StartContainer for \"8a02e9f25dc34b89cfb3c68c76cac55e6729857e7caea49e1939bf98ac9a0577\"" Feb 13 19:18:25.952474 systemd[1]: Started cri-containerd-8a02e9f25dc34b89cfb3c68c76cac55e6729857e7caea49e1939bf98ac9a0577.scope - libcontainer container 8a02e9f25dc34b89cfb3c68c76cac55e6729857e7caea49e1939bf98ac9a0577. Feb 13 19:18:25.984763 containerd[1465]: time="2025-02-13T19:18:25.984709245Z" level=info msg="StartContainer for \"8a02e9f25dc34b89cfb3c68c76cac55e6729857e7caea49e1939bf98ac9a0577\" returns successfully" Feb 13 19:18:25.986036 containerd[1465]: time="2025-02-13T19:18:25.985989717Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\"" Feb 13 19:18:26.485738 kubelet[1771]: E0213 19:18:26.485691 1771 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 19:18:27.100521 containerd[1465]: time="2025-02-13T19:18:27.100457789Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:18:27.101563 containerd[1465]: time="2025-02-13T19:18:27.101517545Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1: active requests=0, bytes read=9883368" Feb 13 19:18:27.102397 containerd[1465]: time="2025-02-13T19:18:27.102358184Z" level=info msg="ImageCreate event name:\"sha256:3eb557f7694f230afd24a75a691bcda4c0a7bfe87a981386dcd4ecf2b0701349\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 
19:18:27.104580 containerd[1465]: time="2025-02-13T19:18:27.104533935Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar@sha256:a338da9488cbaa83c78457c3d7354d84149969c0480e88dd768e036632ff5b76\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:18:27.105270 containerd[1465]: time="2025-02-13T19:18:27.105228110Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\" with image id \"sha256:3eb557f7694f230afd24a75a691bcda4c0a7bfe87a981386dcd4ecf2b0701349\", repo tag \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/node-driver-registrar@sha256:a338da9488cbaa83c78457c3d7354d84149969c0480e88dd768e036632ff5b76\", size \"11252974\" in 1.119201001s" Feb 13 19:18:27.105270 containerd[1465]: time="2025-02-13T19:18:27.105265176Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\" returns image reference \"sha256:3eb557f7694f230afd24a75a691bcda4c0a7bfe87a981386dcd4ecf2b0701349\"" Feb 13 19:18:27.107581 containerd[1465]: time="2025-02-13T19:18:27.107551687Z" level=info msg="CreateContainer within sandbox \"31cca3f242a2a3ae6f69e07b446aefb438de3ea09a6286e2e05097047721ff55\" for container &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,}" Feb 13 19:18:27.121923 containerd[1465]: time="2025-02-13T19:18:27.121779831Z" level=info msg="CreateContainer within sandbox \"31cca3f242a2a3ae6f69e07b446aefb438de3ea09a6286e2e05097047721ff55\" for &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,} returns container id \"0b20a6508bb0773bc998cb368f116a7c60c25530b9afc0cedd97d4d64aa86b59\"" Feb 13 19:18:27.122852 containerd[1465]: time="2025-02-13T19:18:27.122818491Z" level=info msg="StartContainer for \"0b20a6508bb0773bc998cb368f116a7c60c25530b9afc0cedd97d4d64aa86b59\"" Feb 13 19:18:27.133837 systemd[1]: Created slice kubepods-besteffort-pod75ca3924_8b21_4fa9_b015_250328829e9b.slice - libcontainer container 
kubepods-besteffort-pod75ca3924_8b21_4fa9_b015_250328829e9b.slice. Feb 13 19:18:27.152453 systemd[1]: Started cri-containerd-0b20a6508bb0773bc998cb368f116a7c60c25530b9afc0cedd97d4d64aa86b59.scope - libcontainer container 0b20a6508bb0773bc998cb368f116a7c60c25530b9afc0cedd97d4d64aa86b59. Feb 13 19:18:27.187814 containerd[1465]: time="2025-02-13T19:18:27.187758791Z" level=info msg="StartContainer for \"0b20a6508bb0773bc998cb368f116a7c60c25530b9afc0cedd97d4d64aa86b59\" returns successfully" Feb 13 19:18:27.190907 kubelet[1771]: I0213 19:18:27.190820 1771 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"data\" (UniqueName: \"kubernetes.io/empty-dir/75ca3924-8b21-4fa9-b015-250328829e9b-data\") pod \"nfs-server-provisioner-0\" (UID: \"75ca3924-8b21-4fa9-b015-250328829e9b\") " pod="default/nfs-server-provisioner-0" Feb 13 19:18:27.190907 kubelet[1771]: I0213 19:18:27.190865 1771 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k67sx\" (UniqueName: \"kubernetes.io/projected/75ca3924-8b21-4fa9-b015-250328829e9b-kube-api-access-k67sx\") pod \"nfs-server-provisioner-0\" (UID: \"75ca3924-8b21-4fa9-b015-250328829e9b\") " pod="default/nfs-server-provisioner-0" Feb 13 19:18:27.438720 containerd[1465]: time="2025-02-13T19:18:27.438611398Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nfs-server-provisioner-0,Uid:75ca3924-8b21-4fa9-b015-250328829e9b,Namespace:default,Attempt:0,}" Feb 13 19:18:27.486821 kubelet[1771]: E0213 19:18:27.486754 1771 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 19:18:27.622789 kubelet[1771]: I0213 19:18:27.622738 1771 csi_plugin.go:100] kubernetes.io/csi: Trying to validate a new CSI Driver with name: csi.tigera.io endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock versions: 1.0.0 Feb 13 19:18:27.622789 kubelet[1771]: I0213 19:18:27.622796 1771 
csi_plugin.go:113] kubernetes.io/csi: Register new plugin with name: csi.tigera.io at endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock Feb 13 19:18:27.683882 systemd-networkd[1396]: cali60e51b789ff: Link UP Feb 13 19:18:27.684535 systemd-networkd[1396]: cali60e51b789ff: Gained carrier Feb 13 19:18:27.723877 containerd[1465]: 2025-02-13 19:18:27.513 [INFO][3139] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {10.0.0.114-k8s-nfs--server--provisioner--0-eth0 nfs-server-provisioner- default 75ca3924-8b21-4fa9-b015-250328829e9b 1082 0 2025-02-13 19:18:27 +0000 UTC map[app:nfs-server-provisioner apps.kubernetes.io/pod-index:0 chart:nfs-server-provisioner-1.8.0 controller-revision-hash:nfs-server-provisioner-d5cbb7f57 heritage:Helm projectcalico.org/namespace:default projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:nfs-server-provisioner release:nfs-server-provisioner statefulset.kubernetes.io/pod-name:nfs-server-provisioner-0] map[] [] [] []} {k8s 10.0.0.114 nfs-server-provisioner-0 eth0 nfs-server-provisioner [] [] [kns.default ksa.default.nfs-server-provisioner] cali60e51b789ff [{nfs TCP 2049 0 } {nfs-udp UDP 2049 0 } {nlockmgr TCP 32803 0 } {nlockmgr-udp UDP 32803 0 } {mountd TCP 20048 0 } {mountd-udp UDP 20048 0 } {rquotad TCP 875 0 } {rquotad-udp UDP 875 0 } {rpcbind TCP 111 0 } {rpcbind-udp UDP 111 0 } {statd TCP 662 0 } {statd-udp UDP 662 0 }] []}} ContainerID="15fa2c2c9cc60b36bd70829d91f1731aaf60e834f749091761e45ab28a31223e" Namespace="default" Pod="nfs-server-provisioner-0" WorkloadEndpoint="10.0.0.114-k8s-nfs--server--provisioner--0-" Feb 13 19:18:27.723877 containerd[1465]: 2025-02-13 19:18:27.513 [INFO][3139] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="15fa2c2c9cc60b36bd70829d91f1731aaf60e834f749091761e45ab28a31223e" Namespace="default" Pod="nfs-server-provisioner-0" WorkloadEndpoint="10.0.0.114-k8s-nfs--server--provisioner--0-eth0" Feb 13 
19:18:27.723877 containerd[1465]: 2025-02-13 19:18:27.537 [INFO][3153] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="15fa2c2c9cc60b36bd70829d91f1731aaf60e834f749091761e45ab28a31223e" HandleID="k8s-pod-network.15fa2c2c9cc60b36bd70829d91f1731aaf60e834f749091761e45ab28a31223e" Workload="10.0.0.114-k8s-nfs--server--provisioner--0-eth0" Feb 13 19:18:27.723877 containerd[1465]: 2025-02-13 19:18:27.551 [INFO][3153] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="15fa2c2c9cc60b36bd70829d91f1731aaf60e834f749091761e45ab28a31223e" HandleID="k8s-pod-network.15fa2c2c9cc60b36bd70829d91f1731aaf60e834f749091761e45ab28a31223e" Workload="10.0.0.114-k8s-nfs--server--provisioner--0-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40003047a0), Attrs:map[string]string{"namespace":"default", "node":"10.0.0.114", "pod":"nfs-server-provisioner-0", "timestamp":"2025-02-13 19:18:27.537936533 +0000 UTC"}, Hostname:"10.0.0.114", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Feb 13 19:18:27.723877 containerd[1465]: 2025-02-13 19:18:27.551 [INFO][3153] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Feb 13 19:18:27.723877 containerd[1465]: 2025-02-13 19:18:27.551 [INFO][3153] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Feb 13 19:18:27.723877 containerd[1465]: 2025-02-13 19:18:27.551 [INFO][3153] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host '10.0.0.114' Feb 13 19:18:27.723877 containerd[1465]: 2025-02-13 19:18:27.555 [INFO][3153] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.15fa2c2c9cc60b36bd70829d91f1731aaf60e834f749091761e45ab28a31223e" host="10.0.0.114" Feb 13 19:18:27.723877 containerd[1465]: 2025-02-13 19:18:27.651 [INFO][3153] ipam/ipam.go 372: Looking up existing affinities for host host="10.0.0.114" Feb 13 19:18:27.723877 containerd[1465]: 2025-02-13 19:18:27.655 [INFO][3153] ipam/ipam.go 489: Trying affinity for 192.168.101.128/26 host="10.0.0.114" Feb 13 19:18:27.723877 containerd[1465]: 2025-02-13 19:18:27.657 [INFO][3153] ipam/ipam.go 155: Attempting to load block cidr=192.168.101.128/26 host="10.0.0.114" Feb 13 19:18:27.723877 containerd[1465]: 2025-02-13 19:18:27.659 [INFO][3153] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.101.128/26 host="10.0.0.114" Feb 13 19:18:27.723877 containerd[1465]: 2025-02-13 19:18:27.659 [INFO][3153] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.101.128/26 handle="k8s-pod-network.15fa2c2c9cc60b36bd70829d91f1731aaf60e834f749091761e45ab28a31223e" host="10.0.0.114" Feb 13 19:18:27.723877 containerd[1465]: 2025-02-13 19:18:27.661 [INFO][3153] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.15fa2c2c9cc60b36bd70829d91f1731aaf60e834f749091761e45ab28a31223e Feb 13 19:18:27.723877 containerd[1465]: 2025-02-13 19:18:27.671 [INFO][3153] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.101.128/26 handle="k8s-pod-network.15fa2c2c9cc60b36bd70829d91f1731aaf60e834f749091761e45ab28a31223e" host="10.0.0.114" Feb 13 19:18:27.723877 containerd[1465]: 2025-02-13 19:18:27.679 [INFO][3153] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.101.131/26] block=192.168.101.128/26 
handle="k8s-pod-network.15fa2c2c9cc60b36bd70829d91f1731aaf60e834f749091761e45ab28a31223e" host="10.0.0.114" Feb 13 19:18:27.723877 containerd[1465]: 2025-02-13 19:18:27.679 [INFO][3153] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.101.131/26] handle="k8s-pod-network.15fa2c2c9cc60b36bd70829d91f1731aaf60e834f749091761e45ab28a31223e" host="10.0.0.114" Feb 13 19:18:27.723877 containerd[1465]: 2025-02-13 19:18:27.679 [INFO][3153] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Feb 13 19:18:27.723877 containerd[1465]: 2025-02-13 19:18:27.679 [INFO][3153] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.101.131/26] IPv6=[] ContainerID="15fa2c2c9cc60b36bd70829d91f1731aaf60e834f749091761e45ab28a31223e" HandleID="k8s-pod-network.15fa2c2c9cc60b36bd70829d91f1731aaf60e834f749091761e45ab28a31223e" Workload="10.0.0.114-k8s-nfs--server--provisioner--0-eth0" Feb 13 19:18:27.724462 containerd[1465]: 2025-02-13 19:18:27.681 [INFO][3139] cni-plugin/k8s.go 386: Populated endpoint ContainerID="15fa2c2c9cc60b36bd70829d91f1731aaf60e834f749091761e45ab28a31223e" Namespace="default" Pod="nfs-server-provisioner-0" WorkloadEndpoint="10.0.0.114-k8s-nfs--server--provisioner--0-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"10.0.0.114-k8s-nfs--server--provisioner--0-eth0", GenerateName:"nfs-server-provisioner-", Namespace:"default", SelfLink:"", UID:"75ca3924-8b21-4fa9-b015-250328829e9b", ResourceVersion:"1082", Generation:0, CreationTimestamp:time.Date(2025, time.February, 13, 19, 18, 27, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app":"nfs-server-provisioner", "apps.kubernetes.io/pod-index":"0", "chart":"nfs-server-provisioner-1.8.0", "controller-revision-hash":"nfs-server-provisioner-d5cbb7f57", "heritage":"Helm", "projectcalico.org/namespace":"default", 
"projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"nfs-server-provisioner", "release":"nfs-server-provisioner", "statefulset.kubernetes.io/pod-name":"nfs-server-provisioner-0"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"10.0.0.114", ContainerID:"", Pod:"nfs-server-provisioner-0", Endpoint:"eth0", ServiceAccountName:"nfs-server-provisioner", IPNetworks:[]string{"192.168.101.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.default", "ksa.default.nfs-server-provisioner"}, InterfaceName:"cali60e51b789ff", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"nfs", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x801, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"nfs-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x801, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"nlockmgr", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x8023, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"nlockmgr-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x8023, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"mountd", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x4e50, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"mountd-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x4e50, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"rquotad", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x36b, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"rquotad-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x36b, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"rpcbind", 
Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x6f, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"rpcbind-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x6f, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"statd", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x296, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"statd-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x296, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Feb 13 19:18:27.724462 containerd[1465]: 2025-02-13 19:18:27.681 [INFO][3139] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.101.131/32] ContainerID="15fa2c2c9cc60b36bd70829d91f1731aaf60e834f749091761e45ab28a31223e" Namespace="default" Pod="nfs-server-provisioner-0" WorkloadEndpoint="10.0.0.114-k8s-nfs--server--provisioner--0-eth0" Feb 13 19:18:27.724462 containerd[1465]: 2025-02-13 19:18:27.681 [INFO][3139] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali60e51b789ff ContainerID="15fa2c2c9cc60b36bd70829d91f1731aaf60e834f749091761e45ab28a31223e" Namespace="default" Pod="nfs-server-provisioner-0" WorkloadEndpoint="10.0.0.114-k8s-nfs--server--provisioner--0-eth0" Feb 13 19:18:27.724462 containerd[1465]: 2025-02-13 19:18:27.684 [INFO][3139] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="15fa2c2c9cc60b36bd70829d91f1731aaf60e834f749091761e45ab28a31223e" Namespace="default" Pod="nfs-server-provisioner-0" WorkloadEndpoint="10.0.0.114-k8s-nfs--server--provisioner--0-eth0" Feb 13 19:18:27.724598 containerd[1465]: 2025-02-13 19:18:27.685 [INFO][3139] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="15fa2c2c9cc60b36bd70829d91f1731aaf60e834f749091761e45ab28a31223e" Namespace="default" Pod="nfs-server-provisioner-0" WorkloadEndpoint="10.0.0.114-k8s-nfs--server--provisioner--0-eth0" 
endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"10.0.0.114-k8s-nfs--server--provisioner--0-eth0", GenerateName:"nfs-server-provisioner-", Namespace:"default", SelfLink:"", UID:"75ca3924-8b21-4fa9-b015-250328829e9b", ResourceVersion:"1082", Generation:0, CreationTimestamp:time.Date(2025, time.February, 13, 19, 18, 27, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app":"nfs-server-provisioner", "apps.kubernetes.io/pod-index":"0", "chart":"nfs-server-provisioner-1.8.0", "controller-revision-hash":"nfs-server-provisioner-d5cbb7f57", "heritage":"Helm", "projectcalico.org/namespace":"default", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"nfs-server-provisioner", "release":"nfs-server-provisioner", "statefulset.kubernetes.io/pod-name":"nfs-server-provisioner-0"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"10.0.0.114", ContainerID:"15fa2c2c9cc60b36bd70829d91f1731aaf60e834f749091761e45ab28a31223e", Pod:"nfs-server-provisioner-0", Endpoint:"eth0", ServiceAccountName:"nfs-server-provisioner", IPNetworks:[]string{"192.168.101.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.default", "ksa.default.nfs-server-provisioner"}, InterfaceName:"cali60e51b789ff", MAC:"02:48:1b:42:d8:ae", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"nfs", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x801, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"nfs-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x801, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"nlockmgr", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, 
StrVal:"TCP"}, Port:0x8023, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"nlockmgr-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x8023, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"mountd", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x4e50, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"mountd-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x4e50, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"rquotad", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x36b, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"rquotad-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x36b, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"rpcbind", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x6f, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"rpcbind-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x6f, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"statd", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x296, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"statd-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x296, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Feb 13 19:18:27.724598 containerd[1465]: 2025-02-13 19:18:27.722 [INFO][3139] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="15fa2c2c9cc60b36bd70829d91f1731aaf60e834f749091761e45ab28a31223e" Namespace="default" Pod="nfs-server-provisioner-0" WorkloadEndpoint="10.0.0.114-k8s-nfs--server--provisioner--0-eth0" Feb 13 19:18:27.734723 kubelet[1771]: I0213 19:18:27.734668 1771 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/csi-node-driver-czrzd" podStartSLOduration=14.640148788 podStartE2EDuration="19.734646498s" 
podCreationTimestamp="2025-02-13 19:18:08 +0000 UTC" firstStartedPulling="2025-02-13 19:18:22.011815013 +0000 UTC m=+14.389691968" lastFinishedPulling="2025-02-13 19:18:27.106312683 +0000 UTC m=+19.484189678" observedRunningTime="2025-02-13 19:18:27.734636131 +0000 UTC m=+20.112513166" watchObservedRunningTime="2025-02-13 19:18:27.734646498 +0000 UTC m=+20.112523493" Feb 13 19:18:27.742973 containerd[1465]: time="2025-02-13T19:18:27.741924007Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 13 19:18:27.742973 containerd[1465]: time="2025-02-13T19:18:27.742805715Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 13 19:18:27.742973 containerd[1465]: time="2025-02-13T19:18:27.742819405Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 19:18:27.742973 containerd[1465]: time="2025-02-13T19:18:27.742914993Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 19:18:27.764490 systemd[1]: Started cri-containerd-15fa2c2c9cc60b36bd70829d91f1731aaf60e834f749091761e45ab28a31223e.scope - libcontainer container 15fa2c2c9cc60b36bd70829d91f1731aaf60e834f749091761e45ab28a31223e. 
Feb 13 19:18:27.775721 systemd-resolved[1325]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Feb 13 19:18:27.853888 containerd[1465]: time="2025-02-13T19:18:27.853850245Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nfs-server-provisioner-0,Uid:75ca3924-8b21-4fa9-b015-250328829e9b,Namespace:default,Attempt:0,} returns sandbox id \"15fa2c2c9cc60b36bd70829d91f1731aaf60e834f749091761e45ab28a31223e\"" Feb 13 19:18:27.855982 containerd[1465]: time="2025-02-13T19:18:27.855768253Z" level=info msg="PullImage \"registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8\"" Feb 13 19:18:28.471217 kubelet[1771]: E0213 19:18:28.471180 1771 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 19:18:28.487705 kubelet[1771]: E0213 19:18:28.487671 1771 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 19:18:29.488740 kubelet[1771]: E0213 19:18:29.488693 1771 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 19:18:29.601471 systemd-networkd[1396]: cali60e51b789ff: Gained IPv6LL Feb 13 19:18:29.676434 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2860723085.mount: Deactivated successfully. 
Feb 13 19:18:30.490292 kubelet[1771]: E0213 19:18:30.488857 1771 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 19:18:31.131398 containerd[1465]: time="2025-02-13T19:18:31.131341803Z" level=info msg="ImageCreate event name:\"registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:18:31.132542 containerd[1465]: time="2025-02-13T19:18:31.132490404Z" level=info msg="stop pulling image registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8: active requests=0, bytes read=87373625" Feb 13 19:18:31.133478 containerd[1465]: time="2025-02-13T19:18:31.133442543Z" level=info msg="ImageCreate event name:\"sha256:5a42a519e0a8cf95c3c5f18f767c58c8c8b072aaea0a26e5e47a6f206c7df685\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:18:31.136536 containerd[1465]: time="2025-02-13T19:18:31.136495301Z" level=info msg="ImageCreate event name:\"registry.k8s.io/sig-storage/nfs-provisioner@sha256:c825f3d5e28bde099bd7a3daace28772d412c9157ad47fa752a9ad0baafc118d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:18:31.137810 containerd[1465]: time="2025-02-13T19:18:31.137655108Z" level=info msg="Pulled image \"registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8\" with image id \"sha256:5a42a519e0a8cf95c3c5f18f767c58c8c8b072aaea0a26e5e47a6f206c7df685\", repo tag \"registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8\", repo digest \"registry.k8s.io/sig-storage/nfs-provisioner@sha256:c825f3d5e28bde099bd7a3daace28772d412c9157ad47fa752a9ad0baafc118d\", size \"87371201\" in 3.281839743s" Feb 13 19:18:31.137810 containerd[1465]: time="2025-02-13T19:18:31.137701052Z" level=info msg="PullImage \"registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8\" returns image reference \"sha256:5a42a519e0a8cf95c3c5f18f767c58c8c8b072aaea0a26e5e47a6f206c7df685\"" Feb 13 19:18:31.140424 containerd[1465]: time="2025-02-13T19:18:31.140289847Z" 
level=info msg="CreateContainer within sandbox \"15fa2c2c9cc60b36bd70829d91f1731aaf60e834f749091761e45ab28a31223e\" for container &ContainerMetadata{Name:nfs-server-provisioner,Attempt:0,}" Feb 13 19:18:31.149201 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3723375000.mount: Deactivated successfully. Feb 13 19:18:31.151444 containerd[1465]: time="2025-02-13T19:18:31.151397422Z" level=info msg="CreateContainer within sandbox \"15fa2c2c9cc60b36bd70829d91f1731aaf60e834f749091761e45ab28a31223e\" for &ContainerMetadata{Name:nfs-server-provisioner,Attempt:0,} returns container id \"0cac2f619286a4e9b4701768a50c23365712ec4cc114e5f3ead9b4e61df3fd2f\"" Feb 13 19:18:31.151943 containerd[1465]: time="2025-02-13T19:18:31.151913012Z" level=info msg="StartContainer for \"0cac2f619286a4e9b4701768a50c23365712ec4cc114e5f3ead9b4e61df3fd2f\"" Feb 13 19:18:31.200778 systemd[1]: Started cri-containerd-0cac2f619286a4e9b4701768a50c23365712ec4cc114e5f3ead9b4e61df3fd2f.scope - libcontainer container 0cac2f619286a4e9b4701768a50c23365712ec4cc114e5f3ead9b4e61df3fd2f. 
Feb 13 19:18:31.222847 containerd[1465]: time="2025-02-13T19:18:31.222778311Z" level=info msg="StartContainer for \"0cac2f619286a4e9b4701768a50c23365712ec4cc114e5f3ead9b4e61df3fd2f\" returns successfully" Feb 13 19:18:31.489946 kubelet[1771]: E0213 19:18:31.489806 1771 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 19:18:32.490415 kubelet[1771]: E0213 19:18:32.490372 1771 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 19:18:33.491292 kubelet[1771]: E0213 19:18:33.491229 1771 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 19:18:34.491829 kubelet[1771]: E0213 19:18:34.491787 1771 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 19:18:35.492176 kubelet[1771]: E0213 19:18:35.492135 1771 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 19:18:36.493018 kubelet[1771]: E0213 19:18:36.492960 1771 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 19:18:37.493862 kubelet[1771]: E0213 19:18:37.493797 1771 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 19:18:38.494376 kubelet[1771]: E0213 19:18:38.494334 1771 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 19:18:39.495167 kubelet[1771]: E0213 19:18:39.495113 1771 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 19:18:40.495597 kubelet[1771]: E0213 19:18:40.495546 1771 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" 
path="/etc/kubernetes/manifests" Feb 13 19:18:41.495777 kubelet[1771]: E0213 19:18:41.495677 1771 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 19:18:41.560660 kubelet[1771]: I0213 19:18:41.559821 1771 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="default/nfs-server-provisioner-0" podStartSLOduration=11.276282848 podStartE2EDuration="14.559799939s" podCreationTimestamp="2025-02-13 19:18:27 +0000 UTC" firstStartedPulling="2025-02-13 19:18:27.855224946 +0000 UTC m=+20.233101941" lastFinishedPulling="2025-02-13 19:18:31.138742037 +0000 UTC m=+23.516619032" observedRunningTime="2025-02-13 19:18:31.746750576 +0000 UTC m=+24.124627571" watchObservedRunningTime="2025-02-13 19:18:41.559799939 +0000 UTC m=+33.937676934" Feb 13 19:18:41.565910 systemd[1]: Created slice kubepods-besteffort-podc947d546_bc5b_4e30_bfa1_1cd2d5bc4600.slice - libcontainer container kubepods-besteffort-podc947d546_bc5b_4e30_bfa1_1cd2d5bc4600.slice. Feb 13 19:18:41.756759 kubelet[1771]: I0213 19:18:41.756725 1771 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bgb9h\" (UniqueName: \"kubernetes.io/projected/c947d546-bc5b-4e30-bfa1-1cd2d5bc4600-kube-api-access-bgb9h\") pod \"test-pod-1\" (UID: \"c947d546-bc5b-4e30-bfa1-1cd2d5bc4600\") " pod="default/test-pod-1" Feb 13 19:18:41.757130 kubelet[1771]: I0213 19:18:41.756927 1771 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-b43bd7da-a408-4c97-bdfb-8f9ffc58bf44\" (UniqueName: \"kubernetes.io/nfs/c947d546-bc5b-4e30-bfa1-1cd2d5bc4600-pvc-b43bd7da-a408-4c97-bdfb-8f9ffc58bf44\") pod \"test-pod-1\" (UID: \"c947d546-bc5b-4e30-bfa1-1cd2d5bc4600\") " pod="default/test-pod-1" Feb 13 19:18:41.880301 kernel: FS-Cache: Loaded Feb 13 19:18:41.904380 kernel: RPC: Registered named UNIX socket transport module. 
Feb 13 19:18:41.904470 kernel: RPC: Registered udp transport module. Feb 13 19:18:41.904485 kernel: RPC: Registered tcp transport module. Feb 13 19:18:41.905813 kernel: RPC: Registered tcp-with-tls transport module. Feb 13 19:18:41.905841 kernel: RPC: Registered tcp NFSv4.1 backchannel transport module. Feb 13 19:18:42.065563 kernel: NFS: Registering the id_resolver key type Feb 13 19:18:42.065748 kernel: Key type id_resolver registered Feb 13 19:18:42.065767 kernel: Key type id_legacy registered Feb 13 19:18:42.096466 nfsidmap[3356]: nss_getpwnam: name 'root@nfs-server-provisioner.default.svc.cluster.local' does not map into domain 'localdomain' Feb 13 19:18:42.102644 nfsidmap[3357]: nss_name_to_gid: name 'root@nfs-server-provisioner.default.svc.cluster.local' does not map into domain 'localdomain' Feb 13 19:18:42.170098 containerd[1465]: time="2025-02-13T19:18:42.170044755Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:test-pod-1,Uid:c947d546-bc5b-4e30-bfa1-1cd2d5bc4600,Namespace:default,Attempt:0,}" Feb 13 19:18:42.295903 systemd-networkd[1396]: cali5ec59c6bf6e: Link UP Feb 13 19:18:42.296138 systemd-networkd[1396]: cali5ec59c6bf6e: Gained carrier Feb 13 19:18:42.308610 containerd[1465]: 2025-02-13 19:18:42.222 [INFO][3359] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {10.0.0.114-k8s-test--pod--1-eth0 default c947d546-bc5b-4e30-bfa1-1cd2d5bc4600 1151 0 2025-02-13 19:18:27 +0000 UTC map[projectcalico.org/namespace:default projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:default] map[] [] [] []} {k8s 10.0.0.114 test-pod-1 eth0 default [] [] [kns.default ksa.default.default] cali5ec59c6bf6e [] []}} ContainerID="a981feaeca58982ee432973b80fdc328d3140d50c2dfff7615741ea27f17fcd2" Namespace="default" Pod="test-pod-1" WorkloadEndpoint="10.0.0.114-k8s-test--pod--1-" Feb 13 19:18:42.308610 containerd[1465]: 2025-02-13 19:18:42.223 [INFO][3359] cni-plugin/k8s.go 77: Extracted 
identifiers for CmdAddK8s ContainerID="a981feaeca58982ee432973b80fdc328d3140d50c2dfff7615741ea27f17fcd2" Namespace="default" Pod="test-pod-1" WorkloadEndpoint="10.0.0.114-k8s-test--pod--1-eth0" Feb 13 19:18:42.308610 containerd[1465]: 2025-02-13 19:18:42.249 [INFO][3372] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="a981feaeca58982ee432973b80fdc328d3140d50c2dfff7615741ea27f17fcd2" HandleID="k8s-pod-network.a981feaeca58982ee432973b80fdc328d3140d50c2dfff7615741ea27f17fcd2" Workload="10.0.0.114-k8s-test--pod--1-eth0" Feb 13 19:18:42.308610 containerd[1465]: 2025-02-13 19:18:42.260 [INFO][3372] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="a981feaeca58982ee432973b80fdc328d3140d50c2dfff7615741ea27f17fcd2" HandleID="k8s-pod-network.a981feaeca58982ee432973b80fdc328d3140d50c2dfff7615741ea27f17fcd2" Workload="10.0.0.114-k8s-test--pod--1-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x400011c8a0), Attrs:map[string]string{"namespace":"default", "node":"10.0.0.114", "pod":"test-pod-1", "timestamp":"2025-02-13 19:18:42.249642803 +0000 UTC"}, Hostname:"10.0.0.114", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Feb 13 19:18:42.308610 containerd[1465]: 2025-02-13 19:18:42.260 [INFO][3372] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Feb 13 19:18:42.308610 containerd[1465]: 2025-02-13 19:18:42.260 [INFO][3372] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Feb 13 19:18:42.308610 containerd[1465]: 2025-02-13 19:18:42.260 [INFO][3372] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host '10.0.0.114' Feb 13 19:18:42.308610 containerd[1465]: 2025-02-13 19:18:42.263 [INFO][3372] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.a981feaeca58982ee432973b80fdc328d3140d50c2dfff7615741ea27f17fcd2" host="10.0.0.114" Feb 13 19:18:42.308610 containerd[1465]: 2025-02-13 19:18:42.267 [INFO][3372] ipam/ipam.go 372: Looking up existing affinities for host host="10.0.0.114" Feb 13 19:18:42.308610 containerd[1465]: 2025-02-13 19:18:42.273 [INFO][3372] ipam/ipam.go 489: Trying affinity for 192.168.101.128/26 host="10.0.0.114" Feb 13 19:18:42.308610 containerd[1465]: 2025-02-13 19:18:42.276 [INFO][3372] ipam/ipam.go 155: Attempting to load block cidr=192.168.101.128/26 host="10.0.0.114" Feb 13 19:18:42.308610 containerd[1465]: 2025-02-13 19:18:42.278 [INFO][3372] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.101.128/26 host="10.0.0.114" Feb 13 19:18:42.308610 containerd[1465]: 2025-02-13 19:18:42.278 [INFO][3372] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.101.128/26 handle="k8s-pod-network.a981feaeca58982ee432973b80fdc328d3140d50c2dfff7615741ea27f17fcd2" host="10.0.0.114" Feb 13 19:18:42.308610 containerd[1465]: 2025-02-13 19:18:42.279 [INFO][3372] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.a981feaeca58982ee432973b80fdc328d3140d50c2dfff7615741ea27f17fcd2 Feb 13 19:18:42.308610 containerd[1465]: 2025-02-13 19:18:42.283 [INFO][3372] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.101.128/26 handle="k8s-pod-network.a981feaeca58982ee432973b80fdc328d3140d50c2dfff7615741ea27f17fcd2" host="10.0.0.114" Feb 13 19:18:42.308610 containerd[1465]: 2025-02-13 19:18:42.290 [INFO][3372] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.101.132/26] block=192.168.101.128/26 
handle="k8s-pod-network.a981feaeca58982ee432973b80fdc328d3140d50c2dfff7615741ea27f17fcd2" host="10.0.0.114" Feb 13 19:18:42.308610 containerd[1465]: 2025-02-13 19:18:42.290 [INFO][3372] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.101.132/26] handle="k8s-pod-network.a981feaeca58982ee432973b80fdc328d3140d50c2dfff7615741ea27f17fcd2" host="10.0.0.114" Feb 13 19:18:42.308610 containerd[1465]: 2025-02-13 19:18:42.290 [INFO][3372] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Feb 13 19:18:42.308610 containerd[1465]: 2025-02-13 19:18:42.290 [INFO][3372] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.101.132/26] IPv6=[] ContainerID="a981feaeca58982ee432973b80fdc328d3140d50c2dfff7615741ea27f17fcd2" HandleID="k8s-pod-network.a981feaeca58982ee432973b80fdc328d3140d50c2dfff7615741ea27f17fcd2" Workload="10.0.0.114-k8s-test--pod--1-eth0" Feb 13 19:18:42.308610 containerd[1465]: 2025-02-13 19:18:42.292 [INFO][3359] cni-plugin/k8s.go 386: Populated endpoint ContainerID="a981feaeca58982ee432973b80fdc328d3140d50c2dfff7615741ea27f17fcd2" Namespace="default" Pod="test-pod-1" WorkloadEndpoint="10.0.0.114-k8s-test--pod--1-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"10.0.0.114-k8s-test--pod--1-eth0", GenerateName:"", Namespace:"default", SelfLink:"", UID:"c947d546-bc5b-4e30-bfa1-1cd2d5bc4600", ResourceVersion:"1151", Generation:0, CreationTimestamp:time.Date(2025, time.February, 13, 19, 18, 27, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"projectcalico.org/namespace":"default", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"default"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", 
Node:"10.0.0.114", ContainerID:"", Pod:"test-pod-1", Endpoint:"eth0", ServiceAccountName:"default", IPNetworks:[]string{"192.168.101.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.default", "ksa.default.default"}, InterfaceName:"cali5ec59c6bf6e", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Feb 13 19:18:42.309481 containerd[1465]: 2025-02-13 19:18:42.292 [INFO][3359] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.101.132/32] ContainerID="a981feaeca58982ee432973b80fdc328d3140d50c2dfff7615741ea27f17fcd2" Namespace="default" Pod="test-pod-1" WorkloadEndpoint="10.0.0.114-k8s-test--pod--1-eth0" Feb 13 19:18:42.309481 containerd[1465]: 2025-02-13 19:18:42.292 [INFO][3359] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali5ec59c6bf6e ContainerID="a981feaeca58982ee432973b80fdc328d3140d50c2dfff7615741ea27f17fcd2" Namespace="default" Pod="test-pod-1" WorkloadEndpoint="10.0.0.114-k8s-test--pod--1-eth0" Feb 13 19:18:42.309481 containerd[1465]: 2025-02-13 19:18:42.296 [INFO][3359] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="a981feaeca58982ee432973b80fdc328d3140d50c2dfff7615741ea27f17fcd2" Namespace="default" Pod="test-pod-1" WorkloadEndpoint="10.0.0.114-k8s-test--pod--1-eth0" Feb 13 19:18:42.309481 containerd[1465]: 2025-02-13 19:18:42.297 [INFO][3359] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="a981feaeca58982ee432973b80fdc328d3140d50c2dfff7615741ea27f17fcd2" Namespace="default" Pod="test-pod-1" WorkloadEndpoint="10.0.0.114-k8s-test--pod--1-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"10.0.0.114-k8s-test--pod--1-eth0", GenerateName:"", Namespace:"default", SelfLink:"", UID:"c947d546-bc5b-4e30-bfa1-1cd2d5bc4600", ResourceVersion:"1151", Generation:0, 
CreationTimestamp:time.Date(2025, time.February, 13, 19, 18, 27, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"projectcalico.org/namespace":"default", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"default"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"10.0.0.114", ContainerID:"a981feaeca58982ee432973b80fdc328d3140d50c2dfff7615741ea27f17fcd2", Pod:"test-pod-1", Endpoint:"eth0", ServiceAccountName:"default", IPNetworks:[]string{"192.168.101.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.default", "ksa.default.default"}, InterfaceName:"cali5ec59c6bf6e", MAC:"7e:30:ec:5f:50:ac", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Feb 13 19:18:42.309481 containerd[1465]: 2025-02-13 19:18:42.307 [INFO][3359] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="a981feaeca58982ee432973b80fdc328d3140d50c2dfff7615741ea27f17fcd2" Namespace="default" Pod="test-pod-1" WorkloadEndpoint="10.0.0.114-k8s-test--pod--1-eth0" Feb 13 19:18:42.326505 containerd[1465]: time="2025-02-13T19:18:42.326340097Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 13 19:18:42.326505 containerd[1465]: time="2025-02-13T19:18:42.326407877Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 13 19:18:42.331114 containerd[1465]: time="2025-02-13T19:18:42.326433044Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 19:18:42.331267 containerd[1465]: time="2025-02-13T19:18:42.331214817Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 19:18:42.351466 systemd[1]: Started cri-containerd-a981feaeca58982ee432973b80fdc328d3140d50c2dfff7615741ea27f17fcd2.scope - libcontainer container a981feaeca58982ee432973b80fdc328d3140d50c2dfff7615741ea27f17fcd2. Feb 13 19:18:42.360865 systemd-resolved[1325]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Feb 13 19:18:42.376041 containerd[1465]: time="2025-02-13T19:18:42.375981826Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:test-pod-1,Uid:c947d546-bc5b-4e30-bfa1-1cd2d5bc4600,Namespace:default,Attempt:0,} returns sandbox id \"a981feaeca58982ee432973b80fdc328d3140d50c2dfff7615741ea27f17fcd2\"" Feb 13 19:18:42.377007 containerd[1465]: time="2025-02-13T19:18:42.376985755Z" level=info msg="PullImage \"ghcr.io/flatcar/nginx:latest\"" Feb 13 19:18:42.495910 kubelet[1771]: E0213 19:18:42.495858 1771 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 19:18:42.692775 containerd[1465]: time="2025-02-13T19:18:42.692648561Z" level=info msg="ImageUpdate event name:\"ghcr.io/flatcar/nginx:latest\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:18:42.692775 containerd[1465]: time="2025-02-13T19:18:42.692702137Z" level=info msg="stop pulling image ghcr.io/flatcar/nginx:latest: active requests=0, bytes read=61" Feb 13 19:18:42.695203 containerd[1465]: time="2025-02-13T19:18:42.695168645Z" level=info msg="Pulled image \"ghcr.io/flatcar/nginx:latest\" with image id \"sha256:dfbfd726d38a926d7664f4738c165e3d91dd9fc1d33959787a30835bf39a461b\", repo tag \"ghcr.io/flatcar/nginx:latest\", repo digest 
\"ghcr.io/flatcar/nginx@sha256:d9bc3da999da9f147f1277c7b18292486847e8f39f95fcf81d914d0c22815faf\", size \"69692964\" in 318.153521ms" Feb 13 19:18:42.695253 containerd[1465]: time="2025-02-13T19:18:42.695202814Z" level=info msg="PullImage \"ghcr.io/flatcar/nginx:latest\" returns image reference \"sha256:dfbfd726d38a926d7664f4738c165e3d91dd9fc1d33959787a30835bf39a461b\"" Feb 13 19:18:42.697496 containerd[1465]: time="2025-02-13T19:18:42.697458262Z" level=info msg="CreateContainer within sandbox \"a981feaeca58982ee432973b80fdc328d3140d50c2dfff7615741ea27f17fcd2\" for container &ContainerMetadata{Name:test,Attempt:0,}" Feb 13 19:18:42.708996 containerd[1465]: time="2025-02-13T19:18:42.708947960Z" level=info msg="CreateContainer within sandbox \"a981feaeca58982ee432973b80fdc328d3140d50c2dfff7615741ea27f17fcd2\" for &ContainerMetadata{Name:test,Attempt:0,} returns container id \"aed9958c865ff04b6909596f5bb398278abd3e8d43d2a6123ac131c31cd9a9e5\"" Feb 13 19:18:42.709647 containerd[1465]: time="2025-02-13T19:18:42.709616672Z" level=info msg="StartContainer for \"aed9958c865ff04b6909596f5bb398278abd3e8d43d2a6123ac131c31cd9a9e5\"" Feb 13 19:18:42.737452 systemd[1]: Started cri-containerd-aed9958c865ff04b6909596f5bb398278abd3e8d43d2a6123ac131c31cd9a9e5.scope - libcontainer container aed9958c865ff04b6909596f5bb398278abd3e8d43d2a6123ac131c31cd9a9e5. 
Feb 13 19:18:42.757823 containerd[1465]: time="2025-02-13T19:18:42.757785058Z" level=info msg="StartContainer for \"aed9958c865ff04b6909596f5bb398278abd3e8d43d2a6123ac131c31cd9a9e5\" returns successfully" Feb 13 19:18:43.496440 kubelet[1771]: E0213 19:18:43.496388 1771 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 19:18:43.617476 systemd-networkd[1396]: cali5ec59c6bf6e: Gained IPv6LL Feb 13 19:18:43.763486 kubelet[1771]: I0213 19:18:43.763422 1771 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="default/test-pod-1" podStartSLOduration=16.444230874 podStartE2EDuration="16.763403529s" podCreationTimestamp="2025-02-13 19:18:27 +0000 UTC" firstStartedPulling="2025-02-13 19:18:42.376688869 +0000 UTC m=+34.754565864" lastFinishedPulling="2025-02-13 19:18:42.695861524 +0000 UTC m=+35.073738519" observedRunningTime="2025-02-13 19:18:43.763263291 +0000 UTC m=+36.141140286" watchObservedRunningTime="2025-02-13 19:18:43.763403529 +0000 UTC m=+36.141280484" Feb 13 19:18:44.068265 update_engine[1450]: I20250213 19:18:44.068086 1450 update_attempter.cc:509] Updating boot flags... 
Feb 13 19:18:44.095341 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 38 scanned by (udev-worker) (3353) Feb 13 19:18:44.121293 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 38 scanned by (udev-worker) (3347) Feb 13 19:18:44.158295 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 38 scanned by (udev-worker) (3347) Feb 13 19:18:44.497313 kubelet[1771]: E0213 19:18:44.497132 1771 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 19:18:45.498031 kubelet[1771]: E0213 19:18:45.497975 1771 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 19:18:46.498175 kubelet[1771]: E0213 19:18:46.498114 1771 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 19:18:47.498590 kubelet[1771]: E0213 19:18:47.498524 1771 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 19:18:48.471309 kubelet[1771]: E0213 19:18:48.471255 1771 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 19:18:48.498800 kubelet[1771]: E0213 19:18:48.498760 1771 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 19:18:49.499165 kubelet[1771]: E0213 19:18:49.499105 1771 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 19:18:50.500139 kubelet[1771]: E0213 19:18:50.500083 1771 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"