Feb 13 19:33:49.007838 kernel: Linux version 6.6.74-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 14.2.1_p20241116 p3) 14.2.1 20241116, GNU ld (Gentoo 2.42 p6) 2.42.0) #1 SMP PREEMPT_DYNAMIC Thu Feb 13 17:41:03 -00 2025
Feb 13 19:33:49.007867 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=015d1d9e5e601f6a4e226c935072d3d0819e7eb2da20e68715973498f21aa3fe
Feb 13 19:33:49.007883 kernel: BIOS-provided physical RAM map:
Feb 13 19:33:49.007892 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009ffff] usable
Feb 13 19:33:49.007901 kernel: BIOS-e820: [mem 0x0000000000100000-0x00000000007fffff] usable
Feb 13 19:33:49.007910 kernel: BIOS-e820: [mem 0x0000000000800000-0x0000000000807fff] ACPI NVS
Feb 13 19:33:49.007921 kernel: BIOS-e820: [mem 0x0000000000808000-0x000000000080afff] usable
Feb 13 19:33:49.007930 kernel: BIOS-e820: [mem 0x000000000080b000-0x000000000080bfff] ACPI NVS
Feb 13 19:33:49.007939 kernel: BIOS-e820: [mem 0x000000000080c000-0x0000000000810fff] usable
Feb 13 19:33:49.007949 kernel: BIOS-e820: [mem 0x0000000000811000-0x00000000008fffff] ACPI NVS
Feb 13 19:33:49.007958 kernel: BIOS-e820: [mem 0x0000000000900000-0x000000009bd3efff] usable
Feb 13 19:33:49.007971 kernel: BIOS-e820: [mem 0x000000009bd3f000-0x000000009bdfffff] reserved
Feb 13 19:33:49.007985 kernel: BIOS-e820: [mem 0x000000009be00000-0x000000009c8ecfff] usable
Feb 13 19:33:49.007995 kernel: BIOS-e820: [mem 0x000000009c8ed000-0x000000009cb6cfff] reserved
Feb 13 19:33:49.008009 kernel: BIOS-e820: [mem 0x000000009cb6d000-0x000000009cb7efff] ACPI data
Feb 13 19:33:49.008019 kernel: BIOS-e820: [mem 0x000000009cb7f000-0x000000009cbfefff] ACPI NVS
Feb 13 19:33:49.008036 kernel: BIOS-e820: [mem 0x000000009cbff000-0x000000009ce91fff] usable
Feb 13 19:33:49.008045 kernel: BIOS-e820: [mem 0x000000009ce92000-0x000000009ce95fff] reserved
Feb 13 19:33:49.008055 kernel: BIOS-e820: [mem 0x000000009ce96000-0x000000009ce97fff] ACPI NVS
Feb 13 19:33:49.008065 kernel: BIOS-e820: [mem 0x000000009ce98000-0x000000009cedbfff] usable
Feb 13 19:33:49.008075 kernel: BIOS-e820: [mem 0x000000009cedc000-0x000000009cf5ffff] reserved
Feb 13 19:33:49.008085 kernel: BIOS-e820: [mem 0x000000009cf60000-0x000000009cffffff] ACPI NVS
Feb 13 19:33:49.008094 kernel: BIOS-e820: [mem 0x00000000e0000000-0x00000000efffffff] reserved
Feb 13 19:33:49.008125 kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved
Feb 13 19:33:49.008136 kernel: BIOS-e820: [mem 0x00000000ffc00000-0x00000000ffffffff] reserved
Feb 13 19:33:49.008145 kernel: BIOS-e820: [mem 0x000000fd00000000-0x000000ffffffffff] reserved
Feb 13 19:33:49.008155 kernel: NX (Execute Disable) protection: active
Feb 13 19:33:49.008172 kernel: APIC: Static calls initialized
Feb 13 19:33:49.008182 kernel: e820: update [mem 0x9b351018-0x9b35ac57] usable ==> usable
Feb 13 19:33:49.008192 kernel: e820: update [mem 0x9b351018-0x9b35ac57] usable ==> usable
Feb 13 19:33:49.008202 kernel: e820: update [mem 0x9b314018-0x9b350e57] usable ==> usable
Feb 13 19:33:49.008211 kernel: e820: update [mem 0x9b314018-0x9b350e57] usable ==> usable
Feb 13 19:33:49.008221 kernel: extended physical RAM map:
Feb 13 19:33:49.008230 kernel: reserve setup_data: [mem 0x0000000000000000-0x000000000009ffff] usable
Feb 13 19:33:49.008240 kernel: reserve setup_data: [mem 0x0000000000100000-0x00000000007fffff] usable
Feb 13 19:33:49.008250 kernel: reserve setup_data: [mem 0x0000000000800000-0x0000000000807fff] ACPI NVS
Feb 13 19:33:49.008260 kernel: reserve setup_data: [mem 0x0000000000808000-0x000000000080afff] usable
Feb 13 19:33:49.008269 kernel: reserve setup_data: [mem 0x000000000080b000-0x000000000080bfff] ACPI NVS
Feb 13 19:33:49.008279 kernel: reserve setup_data: [mem 0x000000000080c000-0x0000000000810fff] usable
Feb 13 19:33:49.008294 kernel: reserve setup_data: [mem 0x0000000000811000-0x00000000008fffff] ACPI NVS
Feb 13 19:33:49.008312 kernel: reserve setup_data: [mem 0x0000000000900000-0x000000009b314017] usable
Feb 13 19:33:49.008322 kernel: reserve setup_data: [mem 0x000000009b314018-0x000000009b350e57] usable
Feb 13 19:33:49.008333 kernel: reserve setup_data: [mem 0x000000009b350e58-0x000000009b351017] usable
Feb 13 19:33:49.008343 kernel: reserve setup_data: [mem 0x000000009b351018-0x000000009b35ac57] usable
Feb 13 19:33:49.008353 kernel: reserve setup_data: [mem 0x000000009b35ac58-0x000000009bd3efff] usable
Feb 13 19:33:49.008375 kernel: reserve setup_data: [mem 0x000000009bd3f000-0x000000009bdfffff] reserved
Feb 13 19:33:49.008394 kernel: reserve setup_data: [mem 0x000000009be00000-0x000000009c8ecfff] usable
Feb 13 19:33:49.008415 kernel: reserve setup_data: [mem 0x000000009c8ed000-0x000000009cb6cfff] reserved
Feb 13 19:33:49.008437 kernel: reserve setup_data: [mem 0x000000009cb6d000-0x000000009cb7efff] ACPI data
Feb 13 19:33:49.008458 kernel: reserve setup_data: [mem 0x000000009cb7f000-0x000000009cbfefff] ACPI NVS
Feb 13 19:33:49.008483 kernel: reserve setup_data: [mem 0x000000009cbff000-0x000000009ce91fff] usable
Feb 13 19:33:49.008506 kernel: reserve setup_data: [mem 0x000000009ce92000-0x000000009ce95fff] reserved
Feb 13 19:33:49.008532 kernel: reserve setup_data: [mem 0x000000009ce96000-0x000000009ce97fff] ACPI NVS
Feb 13 19:33:49.008554 kernel: reserve setup_data: [mem 0x000000009ce98000-0x000000009cedbfff] usable
Feb 13 19:33:49.008573 kernel: reserve setup_data: [mem 0x000000009cedc000-0x000000009cf5ffff] reserved
Feb 13 19:33:49.008583 kernel: reserve setup_data: [mem 0x000000009cf60000-0x000000009cffffff] ACPI NVS
Feb 13 19:33:49.008593 kernel: reserve setup_data: [mem 0x00000000e0000000-0x00000000efffffff] reserved
Feb 13 19:33:49.008603 kernel: reserve setup_data: [mem 0x00000000feffc000-0x00000000feffffff] reserved
Feb 13 19:33:49.008617 kernel: reserve setup_data: [mem 0x00000000ffc00000-0x00000000ffffffff] reserved
Feb 13 19:33:49.008636 kernel: reserve setup_data: [mem 0x000000fd00000000-0x000000ffffffffff] reserved
Feb 13 19:33:49.008646 kernel: efi: EFI v2.7 by EDK II
Feb 13 19:33:49.008656 kernel: efi: SMBIOS=0x9c988000 ACPI=0x9cb7e000 ACPI 2.0=0x9cb7e014 MEMATTR=0x9ba0d198 RNG=0x9cb73018
Feb 13 19:33:49.008667 kernel: random: crng init done
Feb 13 19:33:49.008690 kernel: efi: Remove mem142: MMIO range=[0xffc00000-0xffffffff] (4MB) from e820 map
Feb 13 19:33:49.008707 kernel: e820: remove [mem 0xffc00000-0xffffffff] reserved
Feb 13 19:33:49.008721 kernel: secureboot: Secure boot disabled
Feb 13 19:33:49.009354 kernel: SMBIOS 2.8 present.
Feb 13 19:33:49.009366 kernel: DMI: QEMU Standard PC (Q35 + ICH9, 2009), BIOS unknown 02/02/2022
Feb 13 19:33:49.009377 kernel: Hypervisor detected: KVM
Feb 13 19:33:49.009387 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00
Feb 13 19:33:49.009397 kernel: kvm-clock: using sched offset of 3933889513 cycles
Feb 13 19:33:49.009409 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
Feb 13 19:33:49.009419 kernel: tsc: Detected 2794.748 MHz processor
Feb 13 19:33:49.009430 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
Feb 13 19:33:49.009442 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
Feb 13 19:33:49.009453 kernel: last_pfn = 0x9cedc max_arch_pfn = 0x400000000
Feb 13 19:33:49.009469 kernel: MTRR map: 4 entries (2 fixed + 2 variable; max 18), built from 8 variable MTRRs
Feb 13 19:33:49.009484 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
Feb 13 19:33:49.009494 kernel: Using GB pages for direct mapping
Feb 13 19:33:49.009504 kernel: ACPI: Early table checksum verification disabled
Feb 13 19:33:49.009515 kernel: ACPI: RSDP 0x000000009CB7E014 000024 (v02 BOCHS )
Feb 13 19:33:49.009525 kernel: ACPI: XSDT 0x000000009CB7D0E8 000054 (v01 BOCHS BXPC 00000001 01000013)
Feb 13 19:33:49.009536 kernel: ACPI: FACP 0x000000009CB79000 0000F4 (v03 BOCHS BXPC 00000001 BXPC 00000001)
Feb 13 19:33:49.009546 kernel: ACPI: DSDT 0x000000009CB7A000 0021A8 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Feb 13 19:33:49.009556 kernel: ACPI: FACS 0x000000009CBDD000 000040
Feb 13 19:33:49.009570 kernel: ACPI: APIC 0x000000009CB78000 000090 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Feb 13 19:33:49.009581 kernel: ACPI: HPET 0x000000009CB77000 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Feb 13 19:33:49.009591 kernel: ACPI: MCFG 0x000000009CB76000 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001)
Feb 13 19:33:49.009601 kernel: ACPI: WAET 0x000000009CB75000 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Feb 13 19:33:49.009612 kernel: ACPI: BGRT 0x000000009CB74000 000038 (v01 INTEL EDK2 00000002 01000013)
Feb 13 19:33:49.009634 kernel: ACPI: Reserving FACP table memory at [mem 0x9cb79000-0x9cb790f3]
Feb 13 19:33:49.009645 kernel: ACPI: Reserving DSDT table memory at [mem 0x9cb7a000-0x9cb7c1a7]
Feb 13 19:33:49.009655 kernel: ACPI: Reserving FACS table memory at [mem 0x9cbdd000-0x9cbdd03f]
Feb 13 19:33:49.009670 kernel: ACPI: Reserving APIC table memory at [mem 0x9cb78000-0x9cb7808f]
Feb 13 19:33:49.009680 kernel: ACPI: Reserving HPET table memory at [mem 0x9cb77000-0x9cb77037]
Feb 13 19:33:49.009690 kernel: ACPI: Reserving MCFG table memory at [mem 0x9cb76000-0x9cb7603b]
Feb 13 19:33:49.009699 kernel: ACPI: Reserving WAET table memory at [mem 0x9cb75000-0x9cb75027]
Feb 13 19:33:49.009709 kernel: ACPI: Reserving BGRT table memory at [mem 0x9cb74000-0x9cb74037]
Feb 13 19:33:49.009719 kernel: No NUMA configuration found
Feb 13 19:33:49.009729 kernel: Faking a node at [mem 0x0000000000000000-0x000000009cedbfff]
Feb 13 19:33:49.009739 kernel: NODE_DATA(0) allocated [mem 0x9ce3a000-0x9ce3ffff]
Feb 13 19:33:49.009749 kernel: Zone ranges:
Feb 13 19:33:49.009758 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff]
Feb 13 19:33:49.009773 kernel: DMA32 [mem 0x0000000001000000-0x000000009cedbfff]
Feb 13 19:33:49.009782 kernel: Normal empty
Feb 13 19:33:49.009796 kernel: Movable zone start for each node
Feb 13 19:33:49.009806 kernel: Early memory node ranges
Feb 13 19:33:49.009815 kernel: node 0: [mem 0x0000000000001000-0x000000000009ffff]
Feb 13 19:33:49.009826 kernel: node 0: [mem 0x0000000000100000-0x00000000007fffff]
Feb 13 19:33:49.009836 kernel: node 0: [mem 0x0000000000808000-0x000000000080afff]
Feb 13 19:33:49.009846 kernel: node 0: [mem 0x000000000080c000-0x0000000000810fff]
Feb 13 19:33:49.009855 kernel: node 0: [mem 0x0000000000900000-0x000000009bd3efff]
Feb 13 19:33:49.009870 kernel: node 0: [mem 0x000000009be00000-0x000000009c8ecfff]
Feb 13 19:33:49.009880 kernel: node 0: [mem 0x000000009cbff000-0x000000009ce91fff]
Feb 13 19:33:49.009890 kernel: node 0: [mem 0x000000009ce98000-0x000000009cedbfff]
Feb 13 19:33:49.009900 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000009cedbfff]
Feb 13 19:33:49.009910 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Feb 13 19:33:49.009920 kernel: On node 0, zone DMA: 96 pages in unavailable ranges
Feb 13 19:33:49.009943 kernel: On node 0, zone DMA: 8 pages in unavailable ranges
Feb 13 19:33:49.009957 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Feb 13 19:33:49.009968 kernel: On node 0, zone DMA: 239 pages in unavailable ranges
Feb 13 19:33:49.009979 kernel: On node 0, zone DMA32: 193 pages in unavailable ranges
Feb 13 19:33:49.009989 kernel: On node 0, zone DMA32: 786 pages in unavailable ranges
Feb 13 19:33:49.010004 kernel: On node 0, zone DMA32: 6 pages in unavailable ranges
Feb 13 19:33:49.010019 kernel: On node 0, zone DMA32: 12580 pages in unavailable ranges
Feb 13 19:33:49.010030 kernel: ACPI: PM-Timer IO Port: 0x608
Feb 13 19:33:49.010040 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1])
Feb 13 19:33:49.010051 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23
Feb 13 19:33:49.010061 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl)
Feb 13 19:33:49.010076 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level)
Feb 13 19:33:49.010086 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Feb 13 19:33:49.010096 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level)
Feb 13 19:33:49.010136 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level)
Feb 13 19:33:49.010147 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Feb 13 19:33:49.010157 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000
Feb 13 19:33:49.010168 kernel: TSC deadline timer available
Feb 13 19:33:49.010178 kernel: smpboot: Allowing 4 CPUs, 0 hotplug CPUs
Feb 13 19:33:49.010189 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write()
Feb 13 19:33:49.010204 kernel: kvm-guest: KVM setup pv remote TLB flush
Feb 13 19:33:49.010214 kernel: kvm-guest: setup PV sched yield
Feb 13 19:33:49.010225 kernel: [mem 0x9d000000-0xdfffffff] available for PCI devices
Feb 13 19:33:49.010236 kernel: Booting paravirtualized kernel on KVM
Feb 13 19:33:49.010246 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Feb 13 19:33:49.010257 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:4 nr_cpu_ids:4 nr_node_ids:1
Feb 13 19:33:49.010267 kernel: percpu: Embedded 58 pages/cpu s197032 r8192 d32344 u524288
Feb 13 19:33:49.010278 kernel: pcpu-alloc: s197032 r8192 d32344 u524288 alloc=1*2097152
Feb 13 19:33:49.010288 kernel: pcpu-alloc: [0] 0 1 2 3
Feb 13 19:33:49.010302 kernel: kvm-guest: PV spinlocks enabled
Feb 13 19:33:49.010312 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear)
Feb 13 19:33:49.010325 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=015d1d9e5e601f6a4e226c935072d3d0819e7eb2da20e68715973498f21aa3fe
Feb 13 19:33:49.010336 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Feb 13 19:33:49.010346 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Feb 13 19:33:49.010361 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Feb 13 19:33:49.010372 kernel: Fallback order for Node 0: 0
Feb 13 19:33:49.010383 kernel: Built 1 zonelists, mobility grouping on. Total pages: 629460
Feb 13 19:33:49.010397 kernel: Policy zone: DMA32
Feb 13 19:33:49.010408 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Feb 13 19:33:49.010419 kernel: Memory: 2387720K/2565800K available (14336K kernel code, 2301K rwdata, 22800K rodata, 43320K init, 1752K bss, 177824K reserved, 0K cma-reserved)
Feb 13 19:33:49.010437 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1
Feb 13 19:33:49.010447 kernel: ftrace: allocating 37893 entries in 149 pages
Feb 13 19:33:49.010466 kernel: ftrace: allocated 149 pages with 4 groups
Feb 13 19:33:49.010488 kernel: Dynamic Preempt: voluntary
Feb 13 19:33:49.010503 kernel: rcu: Preemptible hierarchical RCU implementation.
Feb 13 19:33:49.010521 kernel: rcu: RCU event tracing is enabled.
Feb 13 19:33:49.010542 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=4.
Feb 13 19:33:49.010558 kernel: Trampoline variant of Tasks RCU enabled.
Feb 13 19:33:49.010569 kernel: Rude variant of Tasks RCU enabled.
Feb 13 19:33:49.010588 kernel: Tracing variant of Tasks RCU enabled.
Feb 13 19:33:49.010637 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Feb 13 19:33:49.010666 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4
Feb 13 19:33:49.010678 kernel: NR_IRQS: 33024, nr_irqs: 456, preallocated irqs: 16
Feb 13 19:33:49.010689 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Feb 13 19:33:49.010699 kernel: Console: colour dummy device 80x25
Feb 13 19:33:49.010715 kernel: printk: console [ttyS0] enabled
Feb 13 19:33:49.010726 kernel: ACPI: Core revision 20230628
Feb 13 19:33:49.010737 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 19112604467 ns
Feb 13 19:33:49.010753 kernel: APIC: Switch to symmetric I/O mode setup
Feb 13 19:33:49.010764 kernel: x2apic enabled
Feb 13 19:33:49.010775 kernel: APIC: Switched APIC routing to: physical x2apic
Feb 13 19:33:49.010790 kernel: kvm-guest: APIC: send_IPI_mask() replaced with kvm_send_ipi_mask()
Feb 13 19:33:49.010801 kernel: kvm-guest: APIC: send_IPI_mask_allbutself() replaced with kvm_send_ipi_mask_allbutself()
Feb 13 19:33:49.010812 kernel: kvm-guest: setup PV IPIs
Feb 13 19:33:49.010826 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1
Feb 13 19:33:49.010837 kernel: tsc: Marking TSC unstable due to TSCs unsynchronized
Feb 13 19:33:49.010848 kernel: Calibrating delay loop (skipped) preset value.. 5589.49 BogoMIPS (lpj=2794748)
Feb 13 19:33:49.010859 kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated
Feb 13 19:33:49.010869 kernel: Last level iTLB entries: 4KB 512, 2MB 255, 4MB 127
Feb 13 19:33:49.010880 kernel: Last level dTLB entries: 4KB 512, 2MB 255, 4MB 127, 1GB 0
Feb 13 19:33:49.010891 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Feb 13 19:33:49.010902 kernel: Spectre V2 : Mitigation: Retpolines
Feb 13 19:33:49.010913 kernel: Spectre V2 : Spectre v2 / SpectreRSB mitigation: Filling RSB on context switch
Feb 13 19:33:49.010930 kernel: Spectre V2 : Spectre v2 / SpectreRSB : Filling RSB on VMEXIT
Feb 13 19:33:49.010942 kernel: Spectre V2 : Enabling Speculation Barrier for firmware calls
Feb 13 19:33:49.010955 kernel: RETBleed: Mitigation: untrained return thunk
Feb 13 19:33:49.010966 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier
Feb 13 19:33:49.010977 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl
Feb 13 19:33:49.010988 kernel: Speculative Return Stack Overflow: IBPB-extending microcode not applied!
Feb 13 19:33:49.011000 kernel: Speculative Return Stack Overflow: WARNING: See https://kernel.org/doc/html/latest/admin-guide/hw-vuln/srso.html for mitigation options.
Feb 13 19:33:49.011014 kernel: Speculative Return Stack Overflow: Vulnerable: Safe RET, no microcode
Feb 13 19:33:49.011030 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Feb 13 19:33:49.011041 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Feb 13 19:33:49.011051 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Feb 13 19:33:49.011062 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256
Feb 13 19:33:49.011073 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'compacted' format.
Feb 13 19:33:49.011084 kernel: Freeing SMP alternatives memory: 32K
Feb 13 19:33:49.011094 kernel: pid_max: default: 32768 minimum: 301
Feb 13 19:33:49.011145 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity
Feb 13 19:33:49.011157 kernel: landlock: Up and running.
Feb 13 19:33:49.011174 kernel: SELinux: Initializing.
Feb 13 19:33:49.011185 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Feb 13 19:33:49.011197 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Feb 13 19:33:49.011208 kernel: smpboot: CPU0: AMD EPYC 7402P 24-Core Processor (family: 0x17, model: 0x31, stepping: 0x0)
Feb 13 19:33:49.011218 kernel: RCU Tasks: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Feb 13 19:33:49.011229 kernel: RCU Tasks Rude: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Feb 13 19:33:49.011240 kernel: RCU Tasks Trace: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Feb 13 19:33:49.011251 kernel: Performance Events: Fam17h+ core perfctr, AMD PMU driver.
Feb 13 19:33:49.011263 kernel: ... version: 0
Feb 13 19:33:49.011279 kernel: ... bit width: 48
Feb 13 19:33:49.011290 kernel: ... generic registers: 6
Feb 13 19:33:49.011301 kernel: ... value mask: 0000ffffffffffff
Feb 13 19:33:49.011312 kernel: ... max period: 00007fffffffffff
Feb 13 19:33:49.011323 kernel: ... fixed-purpose events: 0
Feb 13 19:33:49.011334 kernel: ... event mask: 000000000000003f
Feb 13 19:33:49.011345 kernel: signal: max sigframe size: 1776
Feb 13 19:33:49.011356 kernel: rcu: Hierarchical SRCU implementation.
Feb 13 19:33:49.011368 kernel: rcu: Max phase no-delay instances is 400.
Feb 13 19:33:49.011384 kernel: smp: Bringing up secondary CPUs ...
Feb 13 19:33:49.011395 kernel: smpboot: x86: Booting SMP configuration:
Feb 13 19:33:49.011406 kernel: .... node #0, CPUs: #1 #2 #3
Feb 13 19:33:49.011416 kernel: smp: Brought up 1 node, 4 CPUs
Feb 13 19:33:49.011427 kernel: smpboot: Max logical packages: 1
Feb 13 19:33:49.011437 kernel: smpboot: Total of 4 processors activated (22357.98 BogoMIPS)
Feb 13 19:33:49.011448 kernel: devtmpfs: initialized
Feb 13 19:33:49.011458 kernel: x86/mm: Memory block size: 128MB
Feb 13 19:33:49.011469 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x00800000-0x00807fff] (32768 bytes)
Feb 13 19:33:49.011480 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x0080b000-0x0080bfff] (4096 bytes)
Feb 13 19:33:49.011495 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x00811000-0x008fffff] (978944 bytes)
Feb 13 19:33:49.011506 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x9cb7f000-0x9cbfefff] (524288 bytes)
Feb 13 19:33:49.011517 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x9ce96000-0x9ce97fff] (8192 bytes)
Feb 13 19:33:49.011546 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x9cf60000-0x9cffffff] (655360 bytes)
Feb 13 19:33:49.011581 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Feb 13 19:33:49.011603 kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear)
Feb 13 19:33:49.011615 kernel: pinctrl core: initialized pinctrl subsystem
Feb 13 19:33:49.011636 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Feb 13 19:33:49.011653 kernel: audit: initializing netlink subsys (disabled)
Feb 13 19:33:49.011670 kernel: audit: type=2000 audit(1739475228.675:1): state=initialized audit_enabled=0 res=1
Feb 13 19:33:49.011681 kernel: thermal_sys: Registered thermal governor 'step_wise'
Feb 13 19:33:49.011692 kernel: thermal_sys: Registered thermal governor 'user_space'
Feb 13 19:33:49.011703 kernel: cpuidle: using governor menu
Feb 13 19:33:49.011713 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Feb 13 19:33:49.011724 kernel: dca service started, version 1.12.1
Feb 13 19:33:49.011735 kernel: PCI: MMCONFIG for domain 0000 [bus 00-ff] at [mem 0xe0000000-0xefffffff] (base 0xe0000000)
Feb 13 19:33:49.011746 kernel: PCI: Using configuration type 1 for base access
Feb 13 19:33:49.011763 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
Feb 13 19:33:49.011778 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Feb 13 19:33:49.011789 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page
Feb 13 19:33:49.011800 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Feb 13 19:33:49.011811 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page
Feb 13 19:33:49.011822 kernel: ACPI: Added _OSI(Module Device)
Feb 13 19:33:49.011832 kernel: ACPI: Added _OSI(Processor Device)
Feb 13 19:33:49.011848 kernel: ACPI: Added _OSI(3.0 _SCP Extensions)
Feb 13 19:33:49.011862 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Feb 13 19:33:49.011878 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Feb 13 19:33:49.011888 kernel: ACPI: _OSC evaluation for CPUs failed, trying _PDC
Feb 13 19:33:49.011899 kernel: ACPI: Interpreter enabled
Feb 13 19:33:49.011912 kernel: ACPI: PM: (supports S0 S3 S5)
Feb 13 19:33:49.011921 kernel: ACPI: Using IOAPIC for interrupt routing
Feb 13 19:33:49.011932 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Feb 13 19:33:49.011943 kernel: PCI: Using E820 reservations for host bridge windows
Feb 13 19:33:49.011954 kernel: ACPI: Enabled 2 GPEs in block 00 to 3F
Feb 13 19:33:49.011965 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Feb 13 19:33:49.012331 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
Feb 13 19:33:49.012502 kernel: acpi PNP0A08:00: _OSC: platform does not support [PCIeHotplug LTR]
Feb 13 19:33:49.012669 kernel: acpi PNP0A08:00: _OSC: OS now controls [PME AER PCIeCapability]
Feb 13 19:33:49.012684 kernel: PCI host bridge to bus 0000:00
Feb 13 19:33:49.012870 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window]
Feb 13 19:33:49.013039 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window]
Feb 13 19:33:49.013223 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
Feb 13 19:33:49.013379 kernel: pci_bus 0000:00: root bus resource [mem 0x9d000000-0xdfffffff window]
Feb 13 19:33:49.013530 kernel: pci_bus 0000:00: root bus resource [mem 0xf0000000-0xfebfffff window]
Feb 13 19:33:49.013683 kernel: pci_bus 0000:00: root bus resource [mem 0x380000000000-0x3807ffffffff window]
Feb 13 19:33:49.013823 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Feb 13 19:33:49.014052 kernel: pci 0000:00:00.0: [8086:29c0] type 00 class 0x060000
Feb 13 19:33:49.014290 kernel: pci 0000:00:01.0: [1234:1111] type 00 class 0x030000
Feb 13 19:33:49.014461 kernel: pci 0000:00:01.0: reg 0x10: [mem 0xc0000000-0xc0ffffff pref]
Feb 13 19:33:49.014760 kernel: pci 0000:00:01.0: reg 0x18: [mem 0xc1044000-0xc1044fff]
Feb 13 19:33:49.014943 kernel: pci 0000:00:01.0: reg 0x30: [mem 0xffff0000-0xffffffff pref]
Feb 13 19:33:49.015126 kernel: pci 0000:00:01.0: BAR 0: assigned to efifb
Feb 13 19:33:49.015365 kernel: pci 0000:00:01.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff]
Feb 13 19:33:49.015644 kernel: pci 0000:00:02.0: [1af4:1005] type 00 class 0x00ff00
Feb 13 19:33:49.015829 kernel: pci 0000:00:02.0: reg 0x10: [io 0x6100-0x611f]
Feb 13 19:33:49.016021 kernel: pci 0000:00:02.0: reg 0x14: [mem 0xc1043000-0xc1043fff]
Feb 13 19:33:49.016217 kernel: pci 0000:00:02.0: reg 0x20: [mem 0x380000000000-0x380000003fff 64bit pref]
Feb 13 19:33:49.016425 kernel: pci 0000:00:03.0: [1af4:1001] type 00 class 0x010000
Feb 13 19:33:49.016594 kernel: pci 0000:00:03.0: reg 0x10: [io 0x6000-0x607f]
Feb 13 19:33:49.016780 kernel: pci 0000:00:03.0: reg 0x14: [mem 0xc1042000-0xc1042fff]
Feb 13 19:33:49.016956 kernel: pci 0000:00:03.0: reg 0x20: [mem 0x380000004000-0x380000007fff 64bit pref]
Feb 13 19:33:49.017207 kernel: pci 0000:00:04.0: [1af4:1000] type 00 class 0x020000
Feb 13 19:33:49.017372 kernel: pci 0000:00:04.0: reg 0x10: [io 0x60e0-0x60ff]
Feb 13 19:33:49.017541 kernel: pci 0000:00:04.0: reg 0x14: [mem 0xc1041000-0xc1041fff]
Feb 13 19:33:49.017726 kernel: pci 0000:00:04.0: reg 0x20: [mem 0x380000008000-0x38000000bfff 64bit pref]
Feb 13 19:33:49.018060 kernel: pci 0000:00:04.0: reg 0x30: [mem 0xfffc0000-0xffffffff pref]
Feb 13 19:33:49.018294 kernel: pci 0000:00:1f.0: [8086:2918] type 00 class 0x060100
Feb 13 19:33:49.018482 kernel: pci 0000:00:1f.0: quirk: [io 0x0600-0x067f] claimed by ICH6 ACPI/GPIO/TCO
Feb 13 19:33:49.018712 kernel: pci 0000:00:1f.2: [8086:2922] type 00 class 0x010601
Feb 13 19:33:49.018890 kernel: pci 0000:00:1f.2: reg 0x20: [io 0x60c0-0x60df]
Feb 13 19:33:49.019215 kernel: pci 0000:00:1f.2: reg 0x24: [mem 0xc1040000-0xc1040fff]
Feb 13 19:33:49.019415 kernel: pci 0000:00:1f.3: [8086:2930] type 00 class 0x0c0500
Feb 13 19:33:49.019586 kernel: pci 0000:00:1f.3: reg 0x20: [io 0x6080-0x60bf]
Feb 13 19:33:49.019602 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10
Feb 13 19:33:49.019614 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10
Feb 13 19:33:49.019643 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11
Feb 13 19:33:49.019655 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11
Feb 13 19:33:49.019666 kernel: ACPI: PCI: Interrupt link LNKE configured for IRQ 10
Feb 13 19:33:49.019677 kernel: ACPI: PCI: Interrupt link LNKF configured for IRQ 10
Feb 13 19:33:49.019688 kernel: ACPI: PCI: Interrupt link LNKG configured for IRQ 11
Feb 13 19:33:49.019699 kernel: ACPI: PCI: Interrupt link LNKH configured for IRQ 11
Feb 13 19:33:49.019710 kernel: ACPI: PCI: Interrupt link GSIA configured for IRQ 16
Feb 13 19:33:49.019721 kernel: ACPI: PCI: Interrupt link GSIB configured for IRQ 17
Feb 13 19:33:49.019733 kernel: ACPI: PCI: Interrupt link GSIC configured for IRQ 18
Feb 13 19:33:49.019748 kernel: ACPI: PCI: Interrupt link GSID configured for IRQ 19
Feb 13 19:33:49.019758 kernel: ACPI: PCI: Interrupt link GSIE configured for IRQ 20
Feb 13 19:33:49.019769 kernel: ACPI: PCI: Interrupt link GSIF configured for IRQ 21
Feb 13 19:33:49.019780 kernel: ACPI: PCI: Interrupt link GSIG configured for IRQ 22
Feb 13 19:33:49.019791 kernel: ACPI: PCI: Interrupt link GSIH configured for IRQ 23
Feb 13 19:33:49.019802 kernel: iommu: Default domain type: Translated
Feb 13 19:33:49.019812 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
Feb 13 19:33:49.019825 kernel: efivars: Registered efivars operations
Feb 13 19:33:49.019839 kernel: PCI: Using ACPI for IRQ routing
Feb 13 19:33:49.019857 kernel: PCI: pci_cache_line_size set to 64 bytes
Feb 13 19:33:49.019872 kernel: e820: reserve RAM buffer [mem 0x0080b000-0x008fffff]
Feb 13 19:33:49.019885 kernel: e820: reserve RAM buffer [mem 0x00811000-0x008fffff]
Feb 13 19:33:49.019898 kernel: e820: reserve RAM buffer [mem 0x9b314018-0x9bffffff]
Feb 13 19:33:49.019910 kernel: e820: reserve RAM buffer [mem 0x9b351018-0x9bffffff]
Feb 13 19:33:49.019928 kernel: e820: reserve RAM buffer [mem 0x9bd3f000-0x9bffffff]
Feb 13 19:33:49.019948 kernel: e820: reserve RAM buffer [mem 0x9c8ed000-0x9fffffff]
Feb 13 19:33:49.019966 kernel: e820: reserve RAM buffer [mem 0x9ce92000-0x9fffffff]
Feb 13 19:33:49.019977 kernel: e820: reserve RAM buffer [mem 0x9cedc000-0x9fffffff]
Feb 13 19:33:49.020192 kernel: pci 0000:00:01.0: vgaarb: setting as boot VGA device
Feb 13 19:33:49.020354 kernel: pci 0000:00:01.0: vgaarb: bridge control possible
Feb 13 19:33:49.020519 kernel: pci 0000:00:01.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none
Feb 13 19:33:49.020536 kernel: vgaarb: loaded
Feb 13 19:33:49.020548 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0
Feb 13 19:33:49.020560 kernel: hpet0: 3 comparators, 64-bit 100.000000 MHz counter
Feb 13 19:33:49.020572 kernel: clocksource: Switched to clocksource kvm-clock
Feb 13 19:33:49.020583 kernel: VFS: Disk quotas dquot_6.6.0
Feb 13 19:33:49.020595 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Feb 13 19:33:49.020613 kernel: pnp: PnP ACPI init
Feb 13 19:33:49.020835 kernel: system 00:05: [mem 0xe0000000-0xefffffff window] has been reserved
Feb 13 19:33:49.020853 kernel: pnp: PnP ACPI: found 6 devices
Feb 13 19:33:49.020865 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Feb 13 19:33:49.020877 kernel: NET: Registered PF_INET protocol family
Feb 13 19:33:49.020914 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
Feb 13 19:33:49.020932 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
Feb 13 19:33:49.020947 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Feb 13 19:33:49.020965 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
Feb 13 19:33:49.020980 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear)
Feb 13 19:33:49.020995 kernel: TCP: Hash tables configured (established 32768 bind 32768)
Feb 13 19:33:49.021010 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
Feb 13 19:33:49.021025 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
Feb 13 19:33:49.021039 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Feb 13 19:33:49.021054 kernel: NET: Registered PF_XDP protocol family
Feb 13 19:33:49.021254 kernel: pci 0000:00:04.0: can't claim BAR 6 [mem 0xfffc0000-0xffffffff pref]: no compatible bridge window
Feb 13 19:33:49.021428 kernel: pci 0000:00:04.0: BAR 6: assigned [mem 0x9d000000-0x9d03ffff pref]
Feb 13 19:33:49.021583 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window]
Feb 13 19:33:49.021777 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window]
Feb 13 19:33:49.021945 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window]
Feb 13 19:33:49.022144 kernel: pci_bus 0000:00: resource 7 [mem 0x9d000000-0xdfffffff window]
Feb 13 19:33:49.022307 kernel: pci_bus 0000:00: resource 8 [mem 0xf0000000-0xfebfffff window]
Feb 13 19:33:49.022460 kernel: pci_bus 0000:00: resource 9 [mem 0x380000000000-0x3807ffffffff window]
Feb 13 19:33:49.022476 kernel: PCI: CLS 0 bytes, default 64
Feb 13 19:33:49.022494 kernel: Initialise system trusted keyrings
Feb 13 19:33:49.022505 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
Feb 13 19:33:49.022517 kernel: Key type asymmetric registered
Feb 13 19:33:49.022527 kernel: Asymmetric key parser 'x509' registered
Feb 13 19:33:49.022538 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 251)
Feb 13 19:33:49.022549 kernel: io scheduler mq-deadline registered
Feb 13 19:33:49.022560 kernel: io scheduler kyber registered
Feb 13 19:33:49.022571 kernel: io scheduler bfq registered
Feb 13 19:33:49.022581 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00
Feb 13 19:33:49.022595 kernel: ACPI: \_SB_.GSIG: Enabled at IRQ 22
Feb 13 19:33:49.022605 kernel: ACPI: \_SB_.GSIH: Enabled at IRQ 23
Feb 13 19:33:49.022618 kernel: ACPI: \_SB_.GSIE: Enabled at IRQ 20
Feb 13 19:33:49.022639 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Feb 13 19:33:49.022650 kernel: 00:03: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
Feb 13 19:33:49.022662 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12
Feb 13 19:33:49.022676 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1
Feb 13 19:33:49.022687 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12
Feb 13 19:33:49.022699 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0
Feb 13 19:33:49.022896 kernel: rtc_cmos 00:04: RTC can wake from S4
Feb 13 19:33:49.023063 kernel: rtc_cmos 00:04: registered as rtc0
Feb 13 19:33:49.023231 kernel: rtc_cmos 00:04: setting system clock to 2025-02-13T19:33:48 UTC (1739475228)
Feb 13 19:33:49.023380 kernel: rtc_cmos 00:04: alarms up to one day, y3k, 242 bytes nvram
Feb 13 19:33:49.023396 kernel: amd_pstate: the _CPC object is not present in SBIOS or ACPI disabled
Feb 13 19:33:49.023414 kernel: efifb: probing for efifb
Feb 13 19:33:49.023426 kernel: efifb: framebuffer at 0xc0000000, using 4000k, total 4000k
Feb 13 19:33:49.023438 kernel: efifb: mode is 1280x800x32, linelength=5120, pages=1
Feb 13 19:33:49.023450 kernel: efifb: scrolling: redraw
Feb 13 19:33:49.023462 kernel: efifb: Truecolor: size=8:8:8:8, shift=24:16:8:0
Feb 13 19:33:49.023474 kernel: Console: switching to colour frame buffer device 160x50
Feb 13 19:33:49.023486 kernel: fb0: EFI VGA frame buffer device
Feb 13 19:33:49.023501 kernel: pstore: Using crash dump compression: deflate
Feb 13 19:33:49.023513 kernel: pstore: Registered efi_pstore as persistent store backend
Feb 13 19:33:49.023529 kernel: NET: Registered PF_INET6 protocol family
Feb 13 19:33:49.023540 kernel: Segment Routing with IPv6
Feb 13 19:33:49.023552 kernel: In-situ OAM (IOAM) with IPv6
Feb 13 19:33:49.023564 kernel: NET: Registered PF_PACKET protocol family
Feb 13 19:33:49.023576 kernel: Key type dns_resolver registered
Feb 13 19:33:49.023588 kernel: IPI shorthand broadcast: enabled
Feb 13 19:33:49.023600 kernel: sched_clock: Marking stable (1224004755, 162831981)->(1494498498, -107661762)
Feb 13 19:33:49.023611 kernel: registered taskstats version 1
Feb 13 19:33:49.023631 kernel: Loading compiled-in X.509 certificates
Feb 13 19:33:49.023649 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.74-flatcar: b3acedbed401b3cd9632ee9302ddcce254d8924d'
Feb 13 19:33:49.023661 kernel: Key type .fscrypt registered
Feb 13 19:33:49.023676 kernel: Key type fscrypt-provisioning registered
Feb 13 19:33:49.023688 kernel: ima: No TPM chip found, activating TPM-bypass!
Feb 13 19:33:49.023700 kernel: ima: Allocated hash algorithm: sha1 Feb 13 19:33:49.023711 kernel: ima: No architecture policies found Feb 13 19:33:49.023723 kernel: clk: Disabling unused clocks Feb 13 19:33:49.023735 kernel: Freeing unused kernel image (initmem) memory: 43320K Feb 13 19:33:49.023747 kernel: Write protecting the kernel read-only data: 38912k Feb 13 19:33:49.023762 kernel: Freeing unused kernel image (rodata/data gap) memory: 1776K Feb 13 19:33:49.023774 kernel: Run /init as init process Feb 13 19:33:49.023785 kernel: with arguments: Feb 13 19:33:49.023797 kernel: /init Feb 13 19:33:49.023809 kernel: with environment: Feb 13 19:33:49.023820 kernel: HOME=/ Feb 13 19:33:49.023832 kernel: TERM=linux Feb 13 19:33:49.023844 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a Feb 13 19:33:49.023858 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Feb 13 19:33:49.023876 systemd[1]: Detected virtualization kvm. Feb 13 19:33:49.023889 systemd[1]: Detected architecture x86-64. Feb 13 19:33:49.023903 systemd[1]: Running in initrd. Feb 13 19:33:49.023919 systemd[1]: No hostname configured, using default hostname. Feb 13 19:33:49.023934 systemd[1]: Hostname set to . Feb 13 19:33:49.023950 systemd[1]: Initializing machine ID from VM UUID. Feb 13 19:33:49.023965 systemd[1]: Queued start job for default target initrd.target. Feb 13 19:33:49.023985 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Feb 13 19:33:49.024001 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. 
Feb 13 19:33:49.024018 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM... Feb 13 19:33:49.024034 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Feb 13 19:33:49.024050 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT... Feb 13 19:33:49.024066 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A... Feb 13 19:33:49.024085 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132... Feb 13 19:33:49.024220 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr... Feb 13 19:33:49.024234 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Feb 13 19:33:49.024246 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Feb 13 19:33:49.024259 systemd[1]: Reached target paths.target - Path Units. Feb 13 19:33:49.024272 systemd[1]: Reached target slices.target - Slice Units. Feb 13 19:33:49.024284 systemd[1]: Reached target swap.target - Swaps. Feb 13 19:33:49.024297 systemd[1]: Reached target timers.target - Timer Units. Feb 13 19:33:49.024309 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket. Feb 13 19:33:49.024327 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket. Feb 13 19:33:49.024339 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). Feb 13 19:33:49.024355 systemd[1]: Listening on systemd-journald.socket - Journal Socket. Feb 13 19:33:49.024368 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Feb 13 19:33:49.024380 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Feb 13 19:33:49.024394 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. 
Feb 13 19:33:49.024405 systemd[1]: Reached target sockets.target - Socket Units. Feb 13 19:33:49.024418 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup... Feb 13 19:33:49.024429 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Feb 13 19:33:49.024446 systemd[1]: Finished network-cleanup.service - Network Cleanup. Feb 13 19:33:49.024458 systemd[1]: Starting systemd-fsck-usr.service... Feb 13 19:33:49.024470 systemd[1]: Starting systemd-journald.service - Journal Service... Feb 13 19:33:49.024482 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Feb 13 19:33:49.024493 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Feb 13 19:33:49.024505 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup. Feb 13 19:33:49.024516 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Feb 13 19:33:49.024528 systemd[1]: Finished systemd-fsck-usr.service. Feb 13 19:33:49.024578 systemd-journald[194]: Collecting audit messages is disabled. Feb 13 19:33:49.024612 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Feb 13 19:33:49.024633 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Feb 13 19:33:49.024646 systemd-journald[194]: Journal started Feb 13 19:33:49.024671 systemd-journald[194]: Runtime Journal (/run/log/journal/bad5095d81c94dd2800405686b96ee46) is 6.0M, max 48.2M, 42.2M free. Feb 13 19:33:49.009438 systemd-modules-load[195]: Inserted module 'overlay' Feb 13 19:33:49.035260 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Feb 13 19:33:49.035291 systemd[1]: Started systemd-journald.service - Journal Service. Feb 13 19:33:49.032850 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. 
Feb 13 19:33:49.037272 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Feb 13 19:33:49.039464 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Feb 13 19:33:49.046271 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. Feb 13 19:33:49.048889 systemd-modules-load[195]: Inserted module 'br_netfilter' Feb 13 19:33:49.049535 kernel: Bridge firewalling registered Feb 13 19:33:49.051649 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Feb 13 19:33:49.052690 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Feb 13 19:33:49.057315 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... Feb 13 19:33:49.061278 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Feb 13 19:33:49.062078 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Feb 13 19:33:49.069478 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Feb 13 19:33:49.077813 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Feb 13 19:33:49.080242 dracut-cmdline[222]: dracut-dracut-053 Feb 13 19:33:49.083723 dracut-cmdline[222]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=015d1d9e5e601f6a4e226c935072d3d0819e7eb2da20e68715973498f21aa3fe Feb 13 19:33:49.084311 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Feb 13 19:33:49.126398 systemd-resolved[237]: Positive Trust Anchors: Feb 13 19:33:49.126418 systemd-resolved[237]: . 
IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Feb 13 19:33:49.126459 systemd-resolved[237]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Feb 13 19:33:49.129860 systemd-resolved[237]: Defaulting to hostname 'linux'. Feb 13 19:33:49.131736 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Feb 13 19:33:49.137665 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Feb 13 19:33:49.220170 kernel: SCSI subsystem initialized Feb 13 19:33:49.231169 kernel: Loading iSCSI transport class v2.0-870. Feb 13 19:33:49.245163 kernel: iscsi: registered transport (tcp) Feb 13 19:33:49.272591 kernel: iscsi: registered transport (qla4xxx) Feb 13 19:33:49.272709 kernel: QLogic iSCSI HBA Driver Feb 13 19:33:49.338740 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. Feb 13 19:33:49.353379 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... Feb 13 19:33:49.383154 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. 
Feb 13 19:33:49.383236 kernel: device-mapper: uevent: version 1.0.3 Feb 13 19:33:49.385168 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com Feb 13 19:33:49.431221 kernel: raid6: avx2x4 gen() 27736 MB/s Feb 13 19:33:49.448153 kernel: raid6: avx2x2 gen() 23683 MB/s Feb 13 19:33:49.465420 kernel: raid6: avx2x1 gen() 24874 MB/s Feb 13 19:33:49.465512 kernel: raid6: using algorithm avx2x4 gen() 27736 MB/s Feb 13 19:33:49.483538 kernel: raid6: .... xor() 4889 MB/s, rmw enabled Feb 13 19:33:49.483581 kernel: raid6: using avx2x2 recovery algorithm Feb 13 19:33:49.510146 kernel: xor: automatically using best checksumming function avx Feb 13 19:33:49.694152 kernel: Btrfs loaded, zoned=no, fsverity=no Feb 13 19:33:49.711349 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. Feb 13 19:33:49.727413 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Feb 13 19:33:49.740967 systemd-udevd[412]: Using default interface naming scheme 'v255'. Feb 13 19:33:49.746478 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Feb 13 19:33:49.750471 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... Feb 13 19:33:49.781267 dracut-pre-trigger[420]: rd.md=0: removing MD RAID activation Feb 13 19:33:49.820751 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. Feb 13 19:33:49.828398 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Feb 13 19:33:49.900752 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Feb 13 19:33:49.908297 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... Feb 13 19:33:49.927047 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. Feb 13 19:33:49.944416 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. 
Feb 13 19:33:49.971976 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Feb 13 19:33:49.975364 kernel: cryptd: max_cpu_qlen set to 1000 Feb 13 19:33:49.975276 systemd[1]: Reached target remote-fs.target - Remote File Systems. Feb 13 19:33:49.989664 kernel: AVX2 version of gcm_enc/dec engaged. Feb 13 19:33:49.989733 kernel: AES CTR mode by8 optimization enabled Feb 13 19:33:49.992459 kernel: libata version 3.00 loaded. Feb 13 19:33:49.995573 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... Feb 13 19:33:50.026150 kernel: virtio_blk virtio1: 4/0/0 default/read/poll queues Feb 13 19:33:50.042392 kernel: virtio_blk virtio1: [vda] 19775488 512-byte logical blocks (10.1 GB/9.43 GiB) Feb 13 19:33:50.042568 kernel: ahci 0000:00:1f.2: version 3.0 Feb 13 19:33:50.097171 kernel: ACPI: \_SB_.GSIA: Enabled at IRQ 16 Feb 13 19:33:50.097194 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. Feb 13 19:33:50.097207 kernel: GPT:9289727 != 19775487 Feb 13 19:33:50.097218 kernel: ahci 0000:00:1f.2: AHCI 0001.0000 32 slots 6 ports 1.5 Gbps 0x3f impl SATA mode Feb 13 19:33:50.097384 kernel: GPT:Alternate GPT header not at the end of the disk. Feb 13 19:33:50.097396 kernel: ahci 0000:00:1f.2: flags: 64bit ncq only Feb 13 19:33:50.097544 kernel: GPT:9289727 != 19775487 Feb 13 19:33:50.097556 kernel: GPT: Use GNU Parted to correct GPT errors. 
Feb 13 19:33:50.097571 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Feb 13 19:33:50.097582 kernel: scsi host0: ahci Feb 13 19:33:50.097759 kernel: scsi host1: ahci Feb 13 19:33:50.097914 kernel: scsi host2: ahci Feb 13 19:33:50.098069 kernel: scsi host3: ahci Feb 13 19:33:50.098248 kernel: scsi host4: ahci Feb 13 19:33:50.098435 kernel: scsi host5: ahci Feb 13 19:33:50.098607 kernel: ata1: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040100 irq 34 Feb 13 19:33:50.098621 kernel: ata2: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040180 irq 34 Feb 13 19:33:50.098633 kernel: ata3: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040200 irq 34 Feb 13 19:33:50.098644 kernel: ata4: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040280 irq 34 Feb 13 19:33:50.098655 kernel: ata5: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040300 irq 34 Feb 13 19:33:50.098666 kernel: ata6: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040380 irq 34 Feb 13 19:33:50.024804 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Feb 13 19:33:50.106976 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/vda6 scanned by (udev-worker) (473) Feb 13 19:33:50.107018 kernel: BTRFS: device fsid c7adc9b8-df7f-4a5f-93bf-204def2767a9 devid 1 transid 39 /dev/vda3 scanned by (udev-worker) (471) Feb 13 19:33:50.024965 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Feb 13 19:33:50.027944 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Feb 13 19:33:50.029481 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Feb 13 19:33:50.029661 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Feb 13 19:33:50.031206 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... Feb 13 19:33:50.043530 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... 
Feb 13 19:33:50.054518 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. Feb 13 19:33:50.063592 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Feb 13 19:33:50.063848 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Feb 13 19:33:50.104241 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Feb 13 19:33:50.123911 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT. Feb 13 19:33:50.127829 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Feb 13 19:33:50.141576 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM. Feb 13 19:33:50.149183 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. Feb 13 19:33:50.155516 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A. Feb 13 19:33:50.158123 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132. Feb 13 19:33:50.178369 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... Feb 13 19:33:50.190895 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Feb 13 19:33:50.211590 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Feb 13 19:33:50.380067 disk-uuid[569]: Primary Header is updated. Feb 13 19:33:50.380067 disk-uuid[569]: Secondary Entries is updated. Feb 13 19:33:50.380067 disk-uuid[569]: Secondary Header is updated. 
Feb 13 19:33:50.385134 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Feb 13 19:33:50.390133 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Feb 13 19:33:50.409578 kernel: ata5: SATA link down (SStatus 0 SControl 300) Feb 13 19:33:50.409696 kernel: ata1: SATA link down (SStatus 0 SControl 300) Feb 13 19:33:50.409708 kernel: ata6: SATA link down (SStatus 0 SControl 300) Feb 13 19:33:50.411144 kernel: ata2: SATA link down (SStatus 0 SControl 300) Feb 13 19:33:50.411228 kernel: ata4: SATA link down (SStatus 0 SControl 300) Feb 13 19:33:50.412175 kernel: ata3: SATA link up 1.5 Gbps (SStatus 113 SControl 300) Feb 13 19:33:50.414081 kernel: ata3.00: ATAPI: QEMU DVD-ROM, 2.5+, max UDMA/100 Feb 13 19:33:50.414131 kernel: ata3.00: applying bridge limits Feb 13 19:33:50.418151 kernel: ata3.00: configured for UDMA/100 Feb 13 19:33:50.421191 kernel: scsi 2:0:0:0: CD-ROM QEMU QEMU DVD-ROM 2.5+ PQ: 0 ANSI: 5 Feb 13 19:33:50.467198 kernel: sr 2:0:0:0: [sr0] scsi3-mmc drive: 4x/4x cd/rw xa/form2 tray Feb 13 19:33:50.479210 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20 Feb 13 19:33:50.479235 kernel: sr 2:0:0:0: Attached scsi CD-ROM sr0 Feb 13 19:33:51.417937 disk-uuid[580]: The operation has completed successfully. Feb 13 19:33:51.419422 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Feb 13 19:33:51.454018 systemd[1]: disk-uuid.service: Deactivated successfully. Feb 13 19:33:51.454174 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. Feb 13 19:33:51.479253 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr... Feb 13 19:33:51.497759 sh[595]: Success Feb 13 19:33:51.511134 kernel: device-mapper: verity: sha256 using implementation "sha256-ni" Feb 13 19:33:51.548762 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr. Feb 13 19:33:51.562897 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr... Feb 13 19:33:51.567739 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr. 
Feb 13 19:33:51.580058 kernel: BTRFS info (device dm-0): first mount of filesystem c7adc9b8-df7f-4a5f-93bf-204def2767a9 Feb 13 19:33:51.580114 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm Feb 13 19:33:51.580136 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead Feb 13 19:33:51.581290 kernel: BTRFS info (device dm-0): disabling log replay at mount time Feb 13 19:33:51.582185 kernel: BTRFS info (device dm-0): using free space tree Feb 13 19:33:51.588910 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr. Feb 13 19:33:51.590654 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. Feb 13 19:33:51.602348 systemd[1]: Starting ignition-setup.service - Ignition (setup)... Feb 13 19:33:51.604410 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... Feb 13 19:33:51.629462 kernel: BTRFS info (device vda6): first mount of filesystem 60a376b4-1193-4e0b-af89-a0e6d698bf0f Feb 13 19:33:51.629532 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Feb 13 19:33:51.629549 kernel: BTRFS info (device vda6): using free space tree Feb 13 19:33:51.633145 kernel: BTRFS info (device vda6): auto enabling async discard Feb 13 19:33:51.644817 systemd[1]: mnt-oem.mount: Deactivated successfully. Feb 13 19:33:51.647081 kernel: BTRFS info (device vda6): last unmount of filesystem 60a376b4-1193-4e0b-af89-a0e6d698bf0f Feb 13 19:33:51.724530 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Feb 13 19:33:51.778607 systemd[1]: Starting systemd-networkd.service - Network Configuration... 
Feb 13 19:33:51.849782 systemd-networkd[773]: lo: Link UP Feb 13 19:33:51.849794 systemd-networkd[773]: lo: Gained carrier Feb 13 19:33:51.851484 systemd-networkd[773]: Enumeration completed Feb 13 19:33:51.851693 systemd[1]: Started systemd-networkd.service - Network Configuration. Feb 13 19:33:51.851920 systemd-networkd[773]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Feb 13 19:33:51.851924 systemd-networkd[773]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Feb 13 19:33:51.891901 systemd-networkd[773]: eth0: Link UP Feb 13 19:33:51.891906 systemd-networkd[773]: eth0: Gained carrier Feb 13 19:33:51.891918 systemd-networkd[773]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Feb 13 19:33:51.893207 systemd[1]: Reached target network.target - Network. Feb 13 19:33:51.909180 systemd-networkd[773]: eth0: DHCPv4 address 10.0.0.147/16, gateway 10.0.0.1 acquired from 10.0.0.1 Feb 13 19:33:52.172464 systemd[1]: Finished ignition-setup.service - Ignition (setup). Feb 13 19:33:52.207307 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... 
Feb 13 19:33:52.295835 ignition[778]: Ignition 2.20.0 Feb 13 19:33:52.295848 ignition[778]: Stage: fetch-offline Feb 13 19:33:52.295892 ignition[778]: no configs at "/usr/lib/ignition/base.d" Feb 13 19:33:52.295903 ignition[778]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Feb 13 19:33:52.296011 ignition[778]: parsed url from cmdline: "" Feb 13 19:33:52.296016 ignition[778]: no config URL provided Feb 13 19:33:52.296021 ignition[778]: reading system config file "/usr/lib/ignition/user.ign" Feb 13 19:33:52.296031 ignition[778]: no config at "/usr/lib/ignition/user.ign" Feb 13 19:33:52.296063 ignition[778]: op(1): [started] loading QEMU firmware config module Feb 13 19:33:52.296069 ignition[778]: op(1): executing: "modprobe" "qemu_fw_cfg" Feb 13 19:33:52.308329 ignition[778]: op(1): [finished] loading QEMU firmware config module Feb 13 19:33:52.309860 ignition[778]: parsing config with SHA512: 7233f2c0e013f614d67b315464482821fd13126383e3bfa7301a996525340207bc2b9addee7e997bdcde4f251cf0fb1b04f7ce8d67984b17260993b5fc0edbf4 Feb 13 19:33:52.314497 unknown[778]: fetched base config from "system" Feb 13 19:33:52.314513 unknown[778]: fetched user config from "qemu" Feb 13 19:33:52.314902 ignition[778]: fetch-offline: fetch-offline passed Feb 13 19:33:52.314992 ignition[778]: Ignition finished successfully Feb 13 19:33:52.317829 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). Feb 13 19:33:52.320030 systemd[1]: ignition-fetch.service - Ignition (fetch) was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json). Feb 13 19:33:52.328332 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... 
Feb 13 19:33:52.349834 ignition[788]: Ignition 2.20.0 Feb 13 19:33:52.349847 ignition[788]: Stage: kargs Feb 13 19:33:52.350061 ignition[788]: no configs at "/usr/lib/ignition/base.d" Feb 13 19:33:52.350077 ignition[788]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Feb 13 19:33:52.370662 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). Feb 13 19:33:52.350839 ignition[788]: kargs: kargs passed Feb 13 19:33:52.350902 ignition[788]: Ignition finished successfully Feb 13 19:33:52.377258 systemd[1]: Starting ignition-disks.service - Ignition (disks)... Feb 13 19:33:52.397085 ignition[798]: Ignition 2.20.0 Feb 13 19:33:52.397096 ignition[798]: Stage: disks Feb 13 19:33:52.397278 ignition[798]: no configs at "/usr/lib/ignition/base.d" Feb 13 19:33:52.397290 ignition[798]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Feb 13 19:33:52.400297 systemd[1]: Finished ignition-disks.service - Ignition (disks). Feb 13 19:33:52.397923 ignition[798]: disks: disks passed Feb 13 19:33:52.401855 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. Feb 13 19:33:52.397970 ignition[798]: Ignition finished successfully Feb 13 19:33:52.403728 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Feb 13 19:33:52.405579 systemd[1]: Reached target local-fs.target - Local File Systems. Feb 13 19:33:52.408085 systemd[1]: Reached target sysinit.target - System Initialization. Feb 13 19:33:52.409244 systemd[1]: Reached target basic.target - Basic System. Feb 13 19:33:52.417635 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... Feb 13 19:33:52.432797 systemd-fsck[809]: ROOT: clean, 14/553520 files, 52654/553472 blocks Feb 13 19:33:52.656262 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. Feb 13 19:33:52.671195 systemd[1]: Mounting sysroot.mount - /sysroot... 
Feb 13 19:33:52.829149 kernel: EXT4-fs (vda9): mounted filesystem 7d46b70d-4c30-46e6-9935-e1f7fb523560 r/w with ordered data mode. Quota mode: none. Feb 13 19:33:52.829648 systemd[1]: Mounted sysroot.mount - /sysroot. Feb 13 19:33:52.831141 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. Feb 13 19:33:52.846217 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Feb 13 19:33:52.848333 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... Feb 13 19:33:52.897838 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met. Feb 13 19:33:52.903979 kernel: BTRFS: device label OEM devid 1 transid 15 /dev/vda6 scanned by mount (817) Feb 13 19:33:52.897889 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Feb 13 19:33:52.897913 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. Feb 13 19:33:52.911157 kernel: BTRFS info (device vda6): first mount of filesystem 60a376b4-1193-4e0b-af89-a0e6d698bf0f Feb 13 19:33:52.911179 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Feb 13 19:33:52.911190 kernel: BTRFS info (device vda6): using free space tree Feb 13 19:33:52.901452 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. Feb 13 19:33:52.912839 kernel: BTRFS info (device vda6): auto enabling async discard Feb 13 19:33:52.905655 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... Feb 13 19:33:52.914392 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. 
Feb 13 19:33:52.946895 initrd-setup-root[841]: cut: /sysroot/etc/passwd: No such file or directory Feb 13 19:33:52.959069 initrd-setup-root[848]: cut: /sysroot/etc/group: No such file or directory Feb 13 19:33:52.962929 initrd-setup-root[855]: cut: /sysroot/etc/shadow: No such file or directory Feb 13 19:33:52.966987 initrd-setup-root[862]: cut: /sysroot/etc/gshadow: No such file or directory Feb 13 19:33:53.057333 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. Feb 13 19:33:53.068207 systemd[1]: Starting ignition-mount.service - Ignition (mount)... Feb 13 19:33:53.102353 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... Feb 13 19:33:53.109437 systemd[1]: sysroot-oem.mount: Deactivated successfully. Feb 13 19:33:53.110516 kernel: BTRFS info (device vda6): last unmount of filesystem 60a376b4-1193-4e0b-af89-a0e6d698bf0f Feb 13 19:33:53.145233 systemd-networkd[773]: eth0: Gained IPv6LL Feb 13 19:33:53.179651 ignition[933]: INFO : Ignition 2.20.0 Feb 13 19:33:53.179651 ignition[933]: INFO : Stage: mount Feb 13 19:33:53.179651 ignition[933]: INFO : no configs at "/usr/lib/ignition/base.d" Feb 13 19:33:53.179651 ignition[933]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Feb 13 19:33:53.179651 ignition[933]: INFO : mount: mount passed Feb 13 19:33:53.179651 ignition[933]: INFO : Ignition finished successfully Feb 13 19:33:53.181242 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. Feb 13 19:33:53.189321 systemd[1]: Finished ignition-mount.service - Ignition (mount). Feb 13 19:33:53.192507 systemd[1]: Starting ignition-files.service - Ignition (files)... Feb 13 19:33:53.839269 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... 
Feb 13 19:33:53.864130 kernel: BTRFS: device label OEM devid 1 transid 16 /dev/vda6 scanned by mount (945)
Feb 13 19:33:53.864166 kernel: BTRFS info (device vda6): first mount of filesystem 60a376b4-1193-4e0b-af89-a0e6d698bf0f
Feb 13 19:33:53.866637 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Feb 13 19:33:53.866655 kernel: BTRFS info (device vda6): using free space tree
Feb 13 19:33:53.869126 kernel: BTRFS info (device vda6): auto enabling async discard
Feb 13 19:33:53.871139 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Feb 13 19:33:53.905135 ignition[962]: INFO : Ignition 2.20.0
Feb 13 19:33:53.905135 ignition[962]: INFO : Stage: files
Feb 13 19:33:53.905135 ignition[962]: INFO : no configs at "/usr/lib/ignition/base.d"
Feb 13 19:33:53.905135 ignition[962]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Feb 13 19:33:53.910410 ignition[962]: DEBUG : files: compiled without relabeling support, skipping
Feb 13 19:33:53.910410 ignition[962]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Feb 13 19:33:53.910410 ignition[962]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Feb 13 19:33:53.910410 ignition[962]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Feb 13 19:33:53.910410 ignition[962]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Feb 13 19:33:53.910410 ignition[962]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Feb 13 19:33:53.909718 unknown[962]: wrote ssh authorized keys file for user: core
Feb 13 19:33:53.921221 ignition[962]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/home/core/install.sh"
Feb 13 19:33:53.921221 ignition[962]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/home/core/install.sh"
Feb 13 19:33:53.921221 ignition[962]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/etc/flatcar/update.conf"
Feb 13 19:33:53.921221 ignition[962]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Feb 13 19:33:53.921221 ignition[962]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.0-x86-64.raw"
Feb 13 19:33:53.921221 ignition[962]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.0-x86-64.raw"
Feb 13 19:33:53.921221 ignition[962]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.0-x86-64.raw"
Feb 13 19:33:53.921221 ignition[962]: INFO : files: createFilesystemsFiles: createFiles: op(6): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.32.0-x86-64.raw: attempt #1
Feb 13 19:33:54.302130 ignition[962]: INFO : files: createFilesystemsFiles: createFiles: op(6): GET result: OK
Feb 13 19:33:54.941484 ignition[962]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.0-x86-64.raw"
Feb 13 19:33:54.941484 ignition[962]: INFO : files: op(7): [started] processing unit "coreos-metadata.service"
Feb 13 19:33:54.945622 ignition[962]: INFO : files: op(7): op(8): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
Feb 13 19:33:54.945622 ignition[962]: INFO : files: op(7): op(8): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
Feb 13 19:33:54.945622 ignition[962]: INFO : files: op(7): [finished] processing unit "coreos-metadata.service"
Feb 13 19:33:54.945622 ignition[962]: INFO : files: op(9): [started] setting preset to disabled for "coreos-metadata.service"
Feb 13 19:33:54.968694 ignition[962]: INFO : files: op(9): op(a): [started] removing enablement symlink(s) for "coreos-metadata.service"
Feb 13 19:33:54.975885 ignition[962]: INFO : files: op(9): op(a): [finished] removing enablement symlink(s) for "coreos-metadata.service"
Feb 13 19:33:54.977670 ignition[962]: INFO : files: op(9): [finished] setting preset to disabled for "coreos-metadata.service"
Feb 13 19:33:54.979310 ignition[962]: INFO : files: createResultFile: createFiles: op(b): [started] writing file "/sysroot/etc/.ignition-result.json"
Feb 13 19:33:54.981176 ignition[962]: INFO : files: createResultFile: createFiles: op(b): [finished] writing file "/sysroot/etc/.ignition-result.json"
Feb 13 19:33:54.982857 ignition[962]: INFO : files: files passed
Feb 13 19:33:54.983617 ignition[962]: INFO : Ignition finished successfully
Feb 13 19:33:54.986749 systemd[1]: Finished ignition-files.service - Ignition (files).
Feb 13 19:33:54.998405 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
Feb 13 19:33:55.001660 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
Feb 13 19:33:55.004680 systemd[1]: ignition-quench.service: Deactivated successfully.
Feb 13 19:33:55.005818 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
Feb 13 19:33:55.015476 initrd-setup-root-after-ignition[990]: grep: /sysroot/oem/oem-release: No such file or directory
Feb 13 19:33:55.022196 initrd-setup-root-after-ignition[992]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Feb 13 19:33:55.022196 initrd-setup-root-after-ignition[992]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
Feb 13 19:33:55.027892 initrd-setup-root-after-ignition[996]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Feb 13 19:33:55.032666 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
Feb 13 19:33:55.034743 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
Feb 13 19:33:55.044269 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
Feb 13 19:33:55.074019 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Feb 13 19:33:55.074190 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
Feb 13 19:33:55.076587 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
Feb 13 19:33:55.078640 systemd[1]: Reached target initrd.target - Initrd Default Target.
Feb 13 19:33:55.080700 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
Feb 13 19:33:55.092353 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
Feb 13 19:33:55.109260 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Feb 13 19:33:55.120415 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
Feb 13 19:33:55.139575 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
Feb 13 19:33:55.141244 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
Feb 13 19:33:55.143741 systemd[1]: Stopped target timers.target - Timer Units.
Feb 13 19:33:55.146169 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Feb 13 19:33:55.146406 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Feb 13 19:33:55.148729 systemd[1]: Stopped target initrd.target - Initrd Default Target.
Feb 13 19:33:55.150762 systemd[1]: Stopped target basic.target - Basic System.
Feb 13 19:33:55.153093 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
Feb 13 19:33:55.155402 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup.
Feb 13 19:33:55.157760 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
Feb 13 19:33:55.160295 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
Feb 13 19:33:55.162660 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems.
Feb 13 19:33:55.165287 systemd[1]: Stopped target sysinit.target - System Initialization.
Feb 13 19:33:55.167660 systemd[1]: Stopped target local-fs.target - Local File Systems.
Feb 13 19:33:55.170232 systemd[1]: Stopped target swap.target - Swaps.
Feb 13 19:33:55.172238 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Feb 13 19:33:55.172427 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook.
Feb 13 19:33:55.174799 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes.
Feb 13 19:33:55.176715 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Feb 13 19:33:55.179373 systemd[1]: clevis-luks-askpass.path: Deactivated successfully.
Feb 13 19:33:55.179658 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Feb 13 19:33:55.181928 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Feb 13 19:33:55.182201 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook.
Feb 13 19:33:55.184921 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
Feb 13 19:33:55.185298 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline).
Feb 13 19:33:55.187220 systemd[1]: Stopped target paths.target - Path Units.
Feb 13 19:33:55.188913 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Feb 13 19:33:55.192330 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Feb 13 19:33:55.194073 systemd[1]: Stopped target slices.target - Slice Units.
Feb 13 19:33:55.196203 systemd[1]: Stopped target sockets.target - Socket Units.
Feb 13 19:33:55.198585 systemd[1]: iscsid.socket: Deactivated successfully.
Feb 13 19:33:55.198710 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket.
Feb 13 19:33:55.200524 systemd[1]: iscsiuio.socket: Deactivated successfully.
Feb 13 19:33:55.200626 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Feb 13 19:33:55.202732 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
Feb 13 19:33:55.202858 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion.
Feb 13 19:33:55.205779 systemd[1]: ignition-files.service: Deactivated successfully.
Feb 13 19:33:55.205900 systemd[1]: Stopped ignition-files.service - Ignition (files).
Feb 13 19:33:55.215354 systemd[1]: Stopping ignition-mount.service - Ignition (mount)...
Feb 13 19:33:55.217713 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot...
Feb 13 19:33:55.219496 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Feb 13 19:33:55.219686 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices.
Feb 13 19:33:55.221998 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Feb 13 19:33:55.222227 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook.
Feb 13 19:33:55.230355 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Feb 13 19:33:55.230528 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons.
Feb 13 19:33:55.234877 ignition[1016]: INFO : Ignition 2.20.0
Feb 13 19:33:55.234877 ignition[1016]: INFO : Stage: umount
Feb 13 19:33:55.234877 ignition[1016]: INFO : no configs at "/usr/lib/ignition/base.d"
Feb 13 19:33:55.234877 ignition[1016]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Feb 13 19:33:55.234877 ignition[1016]: INFO : umount: umount passed
Feb 13 19:33:55.234877 ignition[1016]: INFO : Ignition finished successfully
Feb 13 19:33:55.236563 systemd[1]: ignition-mount.service: Deactivated successfully.
Feb 13 19:33:55.236704 systemd[1]: Stopped ignition-mount.service - Ignition (mount).
Feb 13 19:33:55.238314 systemd[1]: Stopped target network.target - Network.
Feb 13 19:33:55.240380 systemd[1]: ignition-disks.service: Deactivated successfully.
Feb 13 19:33:55.240443 systemd[1]: Stopped ignition-disks.service - Ignition (disks).
Feb 13 19:33:55.242663 systemd[1]: ignition-kargs.service: Deactivated successfully.
Feb 13 19:33:55.242717 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs).
Feb 13 19:33:55.244668 systemd[1]: ignition-setup.service: Deactivated successfully.
Feb 13 19:33:55.244720 systemd[1]: Stopped ignition-setup.service - Ignition (setup).
Feb 13 19:33:55.246691 systemd[1]: ignition-setup-pre.service: Deactivated successfully.
Feb 13 19:33:55.246747 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup.
Feb 13 19:33:55.248944 systemd[1]: Stopping systemd-networkd.service - Network Configuration...
Feb 13 19:33:55.250963 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution...
Feb 13 19:33:55.254389 systemd[1]: sysroot-boot.mount: Deactivated successfully.
Feb 13 19:33:55.255223 systemd-networkd[773]: eth0: DHCPv6 lease lost
Feb 13 19:33:55.258477 systemd[1]: systemd-networkd.service: Deactivated successfully.
Feb 13 19:33:55.258657 systemd[1]: Stopped systemd-networkd.service - Network Configuration.
Feb 13 19:33:55.261507 systemd[1]: systemd-networkd.socket: Deactivated successfully.
Feb 13 19:33:55.261561 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket.
Feb 13 19:33:55.273426 systemd[1]: Stopping network-cleanup.service - Network Cleanup...
Feb 13 19:33:55.274610 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
Feb 13 19:33:55.274686 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Feb 13 19:33:55.277085 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files...
Feb 13 19:33:55.280027 systemd[1]: systemd-resolved.service: Deactivated successfully.
Feb 13 19:33:55.280215 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution.
Feb 13 19:33:55.288748 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Feb 13 19:33:55.289574 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Feb 13 19:33:55.292378 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Feb 13 19:33:55.292433 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules.
Feb 13 19:33:55.294605 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
Feb 13 19:33:55.294663 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories.
Feb 13 19:33:55.297637 systemd[1]: systemd-udevd.service: Deactivated successfully.
Feb 13 19:33:55.297862 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files.
Feb 13 19:33:55.300658 systemd[1]: network-cleanup.service: Deactivated successfully.
Feb 13 19:33:55.300831 systemd[1]: Stopped network-cleanup.service - Network Cleanup.
Feb 13 19:33:55.304014 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Feb 13 19:33:55.304087 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket.
Feb 13 19:33:55.306087 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Feb 13 19:33:55.306157 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket.
Feb 13 19:33:55.308416 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Feb 13 19:33:55.308497 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook.
Feb 13 19:33:55.310891 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Feb 13 19:33:55.310952 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook.
Feb 13 19:33:55.312633 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Feb 13 19:33:55.312688 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Feb 13 19:33:55.335461 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database...
Feb 13 19:33:55.336684 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Feb 13 19:33:55.336764 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Feb 13 19:33:55.339285 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully.
Feb 13 19:33:55.339346 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Feb 13 19:33:55.341663 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Feb 13 19:33:55.341718 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes.
Feb 13 19:33:55.344197 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Feb 13 19:33:55.344254 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Feb 13 19:33:55.347031 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Feb 13 19:33:55.347165 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database.
Feb 13 19:33:55.432332 systemd[1]: sysroot-boot.service: Deactivated successfully.
Feb 13 19:33:55.432512 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot.
Feb 13 19:33:55.434714 systemd[1]: Reached target initrd-switch-root.target - Switch Root.
Feb 13 19:33:55.436352 systemd[1]: initrd-setup-root.service: Deactivated successfully.
Feb 13 19:33:55.436415 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup.
Feb 13 19:33:55.448525 systemd[1]: Starting initrd-switch-root.service - Switch Root...
Feb 13 19:33:55.458296 systemd[1]: Switching root.
Feb 13 19:33:55.489135 systemd-journald[194]: Journal stopped
Feb 13 19:33:56.971401 systemd-journald[194]: Received SIGTERM from PID 1 (systemd).
Feb 13 19:33:56.971484 kernel: SELinux: policy capability network_peer_controls=1
Feb 13 19:33:56.971505 kernel: SELinux: policy capability open_perms=1
Feb 13 19:33:56.971517 kernel: SELinux: policy capability extended_socket_class=1
Feb 13 19:33:56.971529 kernel: SELinux: policy capability always_check_network=0
Feb 13 19:33:56.971541 kernel: SELinux: policy capability cgroup_seclabel=1
Feb 13 19:33:56.971559 kernel: SELinux: policy capability nnp_nosuid_transition=1
Feb 13 19:33:56.971575 kernel: SELinux: policy capability genfs_seclabel_symlinks=0
Feb 13 19:33:56.971586 kernel: SELinux: policy capability ioctl_skip_cloexec=0
Feb 13 19:33:56.971599 kernel: audit: type=1403 audit(1739475235.918:2): auid=4294967295 ses=4294967295 lsm=selinux res=1
Feb 13 19:33:56.971612 systemd[1]: Successfully loaded SELinux policy in 46.604ms.
Feb 13 19:33:56.971639 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 16.594ms.
Feb 13 19:33:56.971652 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Feb 13 19:33:56.971665 systemd[1]: Detected virtualization kvm.
Feb 13 19:33:56.971681 systemd[1]: Detected architecture x86-64.
Feb 13 19:33:56.971696 systemd[1]: Detected first boot.
Feb 13 19:33:56.971708 systemd[1]: Initializing machine ID from VM UUID.
Feb 13 19:33:56.971721 zram_generator::config[1061]: No configuration found.
Feb 13 19:33:56.971736 systemd[1]: Populated /etc with preset unit settings.
Feb 13 19:33:56.971749 systemd[1]: initrd-switch-root.service: Deactivated successfully.
Feb 13 19:33:56.971761 systemd[1]: Stopped initrd-switch-root.service - Switch Root.
Feb 13 19:33:56.971777 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1.
Feb 13 19:33:56.971790 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config.
Feb 13 19:33:56.971803 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run.
Feb 13 19:33:56.971816 systemd[1]: Created slice system-getty.slice - Slice /system/getty.
Feb 13 19:33:56.971828 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe.
Feb 13 19:33:56.971841 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty.
Feb 13 19:33:56.971853 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit.
Feb 13 19:33:56.971866 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck.
Feb 13 19:33:56.971883 systemd[1]: Created slice user.slice - User and Session Slice.
Feb 13 19:33:56.971897 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Feb 13 19:33:56.971910 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Feb 13 19:33:56.971922 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch.
Feb 13 19:33:56.971936 systemd[1]: Set up automount boot.automount - Boot partition Automount Point.
Feb 13 19:33:56.971960 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point.
Feb 13 19:33:56.971974 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Feb 13 19:33:56.971989 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0...
Feb 13 19:33:56.972001 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Feb 13 19:33:56.972021 systemd[1]: Stopped target initrd-switch-root.target - Switch Root.
Feb 13 19:33:56.972034 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems.
Feb 13 19:33:56.972047 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System.
Feb 13 19:33:56.972060 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes.
Feb 13 19:33:56.972072 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Feb 13 19:33:56.972085 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Feb 13 19:33:56.972110 systemd[1]: Reached target slices.target - Slice Units.
Feb 13 19:33:56.972124 systemd[1]: Reached target swap.target - Swaps.
Feb 13 19:33:56.972139 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes.
Feb 13 19:33:56.972152 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket.
Feb 13 19:33:56.972164 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Feb 13 19:33:56.972177 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Feb 13 19:33:56.972189 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Feb 13 19:33:56.972202 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket.
Feb 13 19:33:56.972215 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System...
Feb 13 19:33:56.972228 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System...
Feb 13 19:33:56.972241 systemd[1]: Mounting media.mount - External Media Directory...
Feb 13 19:33:56.972256 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Feb 13 19:33:56.972269 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System...
Feb 13 19:33:56.972283 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System...
Feb 13 19:33:56.972296 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp...
Feb 13 19:33:56.972309 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
Feb 13 19:33:56.972321 systemd[1]: Reached target machines.target - Containers.
Feb 13 19:33:56.972341 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files...
Feb 13 19:33:56.972354 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Feb 13 19:33:56.972369 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Feb 13 19:33:56.972382 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs...
Feb 13 19:33:56.972395 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Feb 13 19:33:56.972407 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Feb 13 19:33:56.972420 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Feb 13 19:33:56.972432 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse...
Feb 13 19:33:56.972444 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Feb 13 19:33:56.972465 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
Feb 13 19:33:56.972478 systemd[1]: systemd-fsck-root.service: Deactivated successfully.
Feb 13 19:33:56.972494 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device.
Feb 13 19:33:56.972506 systemd[1]: systemd-fsck-usr.service: Deactivated successfully.
Feb 13 19:33:56.972519 systemd[1]: Stopped systemd-fsck-usr.service.
Feb 13 19:33:56.972531 systemd[1]: Starting systemd-journald.service - Journal Service...
Feb 13 19:33:56.972544 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Feb 13 19:33:56.972557 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Feb 13 19:33:56.972570 kernel: loop: module loaded
Feb 13 19:33:56.972583 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems...
Feb 13 19:33:56.972595 kernel: fuse: init (API version 7.39)
Feb 13 19:33:56.972609 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Feb 13 19:33:56.972622 systemd[1]: verity-setup.service: Deactivated successfully.
Feb 13 19:33:56.972634 systemd[1]: Stopped verity-setup.service.
Feb 13 19:33:56.972666 systemd-journald[1124]: Collecting audit messages is disabled.
Feb 13 19:33:56.972690 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Feb 13 19:33:56.972703 systemd-journald[1124]: Journal started
Feb 13 19:33:56.972729 systemd-journald[1124]: Runtime Journal (/run/log/journal/bad5095d81c94dd2800405686b96ee46) is 6.0M, max 48.2M, 42.2M free.
Feb 13 19:33:56.581387 systemd[1]: Queued start job for default target multi-user.target.
Feb 13 19:33:56.602830 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6.
Feb 13 19:33:56.603441 systemd[1]: systemd-journald.service: Deactivated successfully.
Feb 13 19:33:56.603932 systemd[1]: systemd-journald.service: Consumed 1.000s CPU time.
Feb 13 19:33:57.011141 systemd[1]: Started systemd-journald.service - Journal Service.
Feb 13 19:33:57.048255 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System.
Feb 13 19:33:57.049736 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System.
Feb 13 19:33:57.051202 systemd[1]: Mounted media.mount - External Media Directory.
Feb 13 19:33:57.052497 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System.
Feb 13 19:33:57.053970 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System.
Feb 13 19:33:57.055427 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp.
Feb 13 19:33:57.056983 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Feb 13 19:33:57.061134 kernel: ACPI: bus type drm_connector registered
Feb 13 19:33:57.067782 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Feb 13 19:33:57.067996 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs.
Feb 13 19:33:57.092288 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Feb 13 19:33:57.092501 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Feb 13 19:33:57.093986 systemd[1]: modprobe@drm.service: Deactivated successfully.
Feb 13 19:33:57.094182 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Feb 13 19:33:57.095570 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Feb 13 19:33:57.095747 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Feb 13 19:33:57.097339 systemd[1]: modprobe@fuse.service: Deactivated successfully.
Feb 13 19:33:57.097537 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse.
Feb 13 19:33:57.099007 systemd[1]: modprobe@loop.service: Deactivated successfully.
Feb 13 19:33:57.099391 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Feb 13 19:33:57.100816 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Feb 13 19:33:57.102385 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Feb 13 19:33:57.104120 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems.
Feb 13 19:33:57.121513 systemd[1]: Reached target network-pre.target - Preparation for Network.
Feb 13 19:33:57.172411 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System...
Feb 13 19:33:57.179254 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System...
Feb 13 19:33:57.180804 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
Feb 13 19:33:57.180859 systemd[1]: Reached target local-fs.target - Local File Systems.
Feb 13 19:33:57.184250 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink).
Feb 13 19:33:57.188099 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown...
Feb 13 19:33:57.196274 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache...
Feb 13 19:33:57.197852 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Feb 13 19:33:57.200618 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database...
Feb 13 19:33:57.203370 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage...
Feb 13 19:33:57.204866 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Feb 13 19:33:57.207223 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed...
Feb 13 19:33:57.212003 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Feb 13 19:33:57.218335 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Feb 13 19:33:57.222281 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/...
Feb 13 19:33:57.306520 systemd-journald[1124]: Time spent on flushing to /var/log/journal/bad5095d81c94dd2800405686b96ee46 is 23.765ms for 1026 entries.
Feb 13 19:33:57.306520 systemd-journald[1124]: System Journal (/var/log/journal/bad5095d81c94dd2800405686b96ee46) is 8.0M, max 195.6M, 187.6M free.
Feb 13 19:33:57.608948 systemd-journald[1124]: Received client request to flush runtime journal.
Feb 13 19:33:57.609029 kernel: loop0: detected capacity change from 0 to 138184
Feb 13 19:33:57.609072 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher
Feb 13 19:33:57.609096 kernel: loop1: detected capacity change from 0 to 218376
Feb 13 19:33:57.609142 kernel: loop2: detected capacity change from 0 to 141000
Feb 13 19:33:57.300677 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Feb 13 19:33:57.309769 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files.
Feb 13 19:33:57.311851 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System.
Feb 13 19:33:57.314133 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System.
Feb 13 19:33:57.322264 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown.
Feb 13 19:33:57.326881 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed.
Feb 13 19:33:57.348988 systemd[1]: Reached target first-boot-complete.target - First Boot Complete.
Feb 13 19:33:57.374670 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk...
Feb 13 19:33:57.380371 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Feb 13 19:33:57.383419 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Feb 13 19:33:57.400771 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization...
Feb 13 19:33:57.431885 udevadm[1189]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation.service, lvm2-activation-early.service not to pull it in.
Feb 13 19:33:57.497017 systemd-tmpfiles[1175]: ACLs are not supported, ignoring.
Feb 13 19:33:57.497037 systemd-tmpfiles[1175]: ACLs are not supported, ignoring.
Feb 13 19:33:57.506092 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Feb 13 19:33:57.556622 systemd[1]: Starting systemd-sysusers.service - Create System Users...
Feb 13 19:33:57.598032 systemd[1]: Finished systemd-sysusers.service - Create System Users.
Feb 13 19:33:57.611394 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Feb 13 19:33:57.614342 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage.
Feb 13 19:33:57.648430 systemd-tmpfiles[1195]: ACLs are not supported, ignoring.
Feb 13 19:33:57.648460 systemd-tmpfiles[1195]: ACLs are not supported, ignoring.
Feb 13 19:33:57.654803 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Feb 13 19:33:57.730164 kernel: loop3: detected capacity change from 0 to 138184
Feb 13 19:33:57.827363 kernel: loop4: detected capacity change from 0 to 218376
Feb 13 19:33:57.967710 kernel: loop5: detected capacity change from 0 to 141000
Feb 13 19:33:58.058962 (sd-merge)[1202]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes'.
Feb 13 19:33:58.059865 (sd-merge)[1202]: Merged extensions into '/usr'.
Feb 13 19:33:58.064698 systemd[1]: Reloading requested from client PID 1174 ('systemd-sysext') (unit systemd-sysext.service)...
Feb 13 19:33:58.065072 systemd[1]: Reloading...
Feb 13 19:33:58.286181 zram_generator::config[1232]: No configuration found.
Feb 13 19:33:58.339930 ldconfig[1169]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start.
Feb 13 19:33:58.437959 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Feb 13 19:33:58.501921 systemd[1]: etc-machine\x2did.mount: Deactivated successfully.
Feb 13 19:33:58.502766 systemd[1]: Reloading finished in 437 ms.
Feb 13 19:33:58.550452 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache.
Feb 13 19:33:58.552157 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/.
Feb 13 19:33:58.553853 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk.
Feb 13 19:33:58.571525 systemd[1]: Starting ensure-sysext.service...
Feb 13 19:33:58.574031 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Feb 13 19:33:58.598293 systemd[1]: Reloading requested from client PID 1267 ('systemctl') (unit ensure-sysext.service)...
Feb 13 19:33:58.598328 systemd[1]: Reloading...
Feb 13 19:33:58.644040 systemd-tmpfiles[1268]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring.
Feb 13 19:33:58.644536 systemd-tmpfiles[1268]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring.
Feb 13 19:33:58.645852 systemd-tmpfiles[1268]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring.
Feb 13 19:33:58.648335 systemd-tmpfiles[1268]: ACLs are not supported, ignoring.
Feb 13 19:33:58.648552 systemd-tmpfiles[1268]: ACLs are not supported, ignoring.
Feb 13 19:33:58.662134 zram_generator::config[1294]: No configuration found.
Feb 13 19:33:58.775992 systemd-tmpfiles[1268]: Detected autofs mount point /boot during canonicalization of boot.
Feb 13 19:33:58.776008 systemd-tmpfiles[1268]: Skipping /boot
Feb 13 19:33:58.789840 systemd-tmpfiles[1268]: Detected autofs mount point /boot during canonicalization of boot.
Feb 13 19:33:58.789857 systemd-tmpfiles[1268]: Skipping /boot
Feb 13 19:33:58.855883 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Feb 13 19:33:58.910789 systemd[1]: Reloading finished in 311 ms.
Feb 13 19:33:58.935840 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Feb 13 19:33:58.944044 systemd[1]: Starting audit-rules.service - Load Audit Rules...
Feb 13 19:33:58.947613 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs...
Feb 13 19:33:58.951371 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog...
Feb 13 19:33:58.958659 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Feb 13 19:33:58.961991 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP...
Feb 13 19:33:58.970328 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Feb 13 19:33:58.970535 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Feb 13 19:33:58.981641 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Feb 13 19:33:58.986482 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Feb 13 19:33:58.990419 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Feb 13 19:33:58.991768 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Feb 13 19:33:58.997429 systemd[1]: Starting systemd-userdbd.service - User Database Manager...
Feb 13 19:33:58.999064 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Feb 13 19:33:59.001628 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog.
Feb 13 19:33:59.003854 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Feb 13 19:33:59.004139 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Feb 13 19:33:59.015138 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database.
Feb 13 19:33:59.019037 systemd[1]: modprobe@loop.service: Deactivated successfully.
Feb 13 19:33:59.019391 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Feb 13 19:33:59.022722 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Feb 13 19:33:59.023024 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Feb 13 19:33:59.024426 augenrules[1362]: No rules
Feb 13 19:33:59.025691 systemd[1]: audit-rules.service: Deactivated successfully.
Feb 13 19:33:59.026014 systemd[1]: Finished audit-rules.service - Load Audit Rules.
Feb 13 19:33:59.034904 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Feb 13 19:33:59.035348 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Feb 13 19:33:59.043553 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Feb 13 19:33:59.047057 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Feb 13 19:33:59.055192 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Feb 13 19:33:59.057000 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Feb 13 19:33:59.074504 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Feb 13 19:33:59.079195 systemd[1]: Starting systemd-update-done.service - Update is Completed...
Feb 13 19:33:59.080718 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Feb 13 19:33:59.082581 systemd[1]: Started systemd-userdbd.service - User Database Manager.
Feb 13 19:33:59.085134 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs.
Feb 13 19:33:59.087701 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP.
Feb 13 19:33:59.090307 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Feb 13 19:33:59.090705 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Feb 13 19:33:59.093808 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Feb 13 19:33:59.094150 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Feb 13 19:33:59.096515 systemd[1]: modprobe@loop.service: Deactivated successfully.
Feb 13 19:33:59.096876 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Feb 13 19:33:59.099187 systemd[1]: Finished systemd-update-done.service - Update is Completed.
Feb 13 19:33:59.113508 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Feb 13 19:33:59.113739 systemd-udevd[1378]: Using default interface naming scheme 'v255'.
Feb 13 19:33:59.126463 systemd[1]: Starting audit-rules.service - Load Audit Rules...
Feb 13 19:33:59.127970 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Feb 13 19:33:59.131699 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Feb 13 19:33:59.141525 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Feb 13 19:33:59.144631 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Feb 13 19:33:59.151118 systemd-resolved[1336]: Positive Trust Anchors:
Feb 13 19:33:59.151134 systemd-resolved[1336]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Feb 13 19:33:59.151188 systemd-resolved[1336]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Feb 13 19:33:59.154984 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Feb 13 19:33:59.157686 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Feb 13 19:33:59.157886 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
Feb 13 19:33:59.158185 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Feb 13 19:33:59.159269 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Feb 13 19:33:59.163280 augenrules[1387]: /sbin/augenrules: No change
Feb 13 19:33:59.166057 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Feb 13 19:33:59.166326 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Feb 13 19:33:59.173628 systemd-resolved[1336]: Defaulting to hostname 'linux'.
Feb 13 19:33:59.174856 systemd[1]: modprobe@drm.service: Deactivated successfully.
Feb 13 19:33:59.175366 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Feb 13 19:33:59.178712 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Feb 13 19:33:59.178890 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Feb 13 19:33:59.184014 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Feb 13 19:33:59.188500 augenrules[1427]: No rules
Feb 13 19:33:59.189429 systemd[1]: modprobe@loop.service: Deactivated successfully.
Feb 13 19:33:59.190222 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Feb 13 19:33:59.192825 systemd[1]: audit-rules.service: Deactivated successfully.
Feb 13 19:33:59.193156 systemd[1]: Finished audit-rules.service - Load Audit Rules.
Feb 13 19:33:59.204330 systemd[1]: Finished ensure-sysext.service.
Feb 13 19:33:59.212006 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped.
Feb 13 19:33:59.215415 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Feb 13 19:33:59.254866 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Feb 13 19:33:59.256424 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Feb 13 19:33:59.256513 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Feb 13 19:33:59.267390 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization...
Feb 13 19:33:59.338144 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 39 scanned by (udev-worker) (1409)
Feb 13 19:33:59.358140 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input2
Feb 13 19:33:59.360569 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
Feb 13 19:33:59.365149 kernel: ACPI: button: Power Button [PWRF]
Feb 13 19:33:59.369369 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM...
Feb 13 19:33:59.377545 systemd-networkd[1440]: lo: Link UP
Feb 13 19:33:59.384896 kernel: i801_smbus 0000:00:1f.3: Enabling SMBus device
Feb 13 19:33:59.387343 kernel: i801_smbus 0000:00:1f.3: SMBus using PCI interrupt
Feb 13 19:33:59.404274 kernel: i2c i2c-0: 1/1 memory slots populated (from DMI)
Feb 13 19:33:59.404573 kernel: i2c i2c-0: Memory type 0x07 not supported yet, not instantiating SPD
Feb 13 19:33:59.377561 systemd-networkd[1440]: lo: Gained carrier
Feb 13 19:33:59.391447 systemd-networkd[1440]: Enumeration completed
Feb 13 19:33:59.391591 systemd[1]: Started systemd-networkd.service - Network Configuration.
Feb 13 19:33:59.395189 systemd-networkd[1440]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Feb 13 19:33:59.395195 systemd-networkd[1440]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Feb 13 19:33:59.398289 systemd-networkd[1440]: eth0: Link UP
Feb 13 19:33:59.398295 systemd-networkd[1440]: eth0: Gained carrier
Feb 13 19:33:59.398319 systemd-networkd[1440]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Feb 13 19:33:59.411476 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization.
Feb 13 19:33:59.414561 systemd-networkd[1440]: eth0: DHCPv4 address 10.0.0.147/16, gateway 10.0.0.1 acquired from 10.0.0.1
Feb 13 19:33:59.416014 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM.
Feb 13 19:33:59.420297 systemd[1]: Reached target network.target - Network.
Feb 13 19:33:59.424124 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input3
Feb 13 19:33:59.425299 systemd[1]: Reached target time-set.target - System Time Set.
Feb 13 19:33:59.426086 systemd-timesyncd[1444]: Network configuration changed, trying to establish connection.
Feb 13 19:34:00.804216 systemd-timesyncd[1444]: Contacted time server 10.0.0.1:123 (10.0.0.1).
Feb 13 19:34:00.804309 systemd-timesyncd[1444]: Initial clock synchronization to Thu 2025-02-13 19:34:00.804022 UTC.
Feb 13 19:34:00.804465 systemd-resolved[1336]: Clock change detected. Flushing caches.
Feb 13 19:34:00.814213 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured...
Feb 13 19:34:00.832518 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Feb 13 19:34:00.887086 kernel: mousedev: PS/2 mouse device common for all mice
Feb 13 19:34:00.899550 kernel: kvm_amd: TSC scaling supported
Feb 13 19:34:00.899670 kernel: kvm_amd: Nested Virtualization enabled
Feb 13 19:34:00.899688 kernel: kvm_amd: Nested Paging enabled
Feb 13 19:34:00.899994 kernel: kvm_amd: LBR virtualization supported
Feb 13 19:34:00.901244 kernel: kvm_amd: Virtual VMLOAD VMSAVE supported
Feb 13 19:34:00.901403 kernel: kvm_amd: Virtual GIF supported
Feb 13 19:34:00.927000 kernel: EDAC MC: Ver: 3.0.0
Feb 13 19:34:00.967557 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Feb 13 19:34:00.988862 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization.
Feb 13 19:34:01.001305 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes...
Feb 13 19:34:01.017064 lvm[1466]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Feb 13 19:34:01.053123 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes.
Feb 13 19:34:01.055500 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Feb 13 19:34:01.056796 systemd[1]: Reached target sysinit.target - System Initialization.
Feb 13 19:34:01.058275 systemd[1]: Started motdgen.path - Watch for update engine configuration changes.
Feb 13 19:34:01.059664 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data.
Feb 13 19:34:01.061333 systemd[1]: Started logrotate.timer - Daily rotation of log files.
Feb 13 19:34:01.062701 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information..
Feb 13 19:34:01.064091 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories.
Feb 13 19:34:01.065477 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate).
Feb 13 19:34:01.065542 systemd[1]: Reached target paths.target - Path Units.
Feb 13 19:34:01.066601 systemd[1]: Reached target timers.target - Timer Units.
Feb 13 19:34:01.069848 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket.
Feb 13 19:34:01.073751 systemd[1]: Starting docker.socket - Docker Socket for the API...
Feb 13 19:34:01.087304 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket.
Feb 13 19:34:01.090227 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes...
Feb 13 19:34:01.092182 systemd[1]: Listening on docker.socket - Docker Socket for the API.
Feb 13 19:34:01.093561 systemd[1]: Reached target sockets.target - Socket Units.
Feb 13 19:34:01.094680 systemd[1]: Reached target basic.target - Basic System.
Feb 13 19:34:01.095824 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met.
Feb 13 19:34:01.095871 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met.
Feb 13 19:34:01.097866 systemd[1]: Starting containerd.service - containerd container runtime...
Feb 13 19:34:01.100765 systemd[1]: Starting dbus.service - D-Bus System Message Bus...
Feb 13 19:34:01.104412 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit...
Feb 13 19:34:01.110479 systemd[1]: Starting extend-filesystems.service - Extend Filesystems...
Feb 13 19:34:01.112800 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment).
Feb 13 19:34:01.112897 lvm[1471]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Feb 13 19:34:01.116622 jq[1474]: false
Feb 13 19:34:01.117185 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd...
Feb 13 19:34:01.122215 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline...
Feb 13 19:34:01.130269 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys...
Feb 13 19:34:01.135000 extend-filesystems[1475]: Found loop3
Feb 13 19:34:01.136275 extend-filesystems[1475]: Found loop4
Feb 13 19:34:01.136275 extend-filesystems[1475]: Found loop5
Feb 13 19:34:01.136275 extend-filesystems[1475]: Found sr0
Feb 13 19:34:01.136275 extend-filesystems[1475]: Found vda
Feb 13 19:34:01.136275 extend-filesystems[1475]: Found vda1
Feb 13 19:34:01.136275 extend-filesystems[1475]: Found vda2
Feb 13 19:34:01.136275 extend-filesystems[1475]: Found vda3
Feb 13 19:34:01.141207 extend-filesystems[1475]: Found usr
Feb 13 19:34:01.141207 extend-filesystems[1475]: Found vda4
Feb 13 19:34:01.141207 extend-filesystems[1475]: Found vda6
Feb 13 19:34:01.141207 extend-filesystems[1475]: Found vda7
Feb 13 19:34:01.141207 extend-filesystems[1475]: Found vda9
Feb 13 19:34:01.141207 extend-filesystems[1475]: Checking size of /dev/vda9
Feb 13 19:34:01.140453 systemd[1]: Starting systemd-logind.service - User Login Management...
Feb 13 19:34:01.143888 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0).
Feb 13 19:34:01.144531 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details.
Feb 13 19:34:01.145892 systemd[1]: Starting update-engine.service - Update Engine...
Feb 13 19:34:01.160179 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition...
Feb 13 19:34:01.163575 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes.
Feb 13 19:34:01.174326 dbus-daemon[1473]: [system] SELinux support is enabled
Feb 13 19:34:01.176335 systemd[1]: Started dbus.service - D-Bus System Message Bus.
Feb 13 19:34:01.201395 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'.
Feb 13 19:34:01.207111 jq[1491]: true
Feb 13 19:34:01.201683 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped.
Feb 13 19:34:01.202144 systemd[1]: motdgen.service: Deactivated successfully.
Feb 13 19:34:01.202398 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd.
Feb 13 19:34:01.204100 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully.
Feb 13 19:34:01.204328 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline.
Feb 13 19:34:01.207802 extend-filesystems[1475]: Resized partition /dev/vda9
Feb 13 19:34:01.213293 extend-filesystems[1499]: resize2fs 1.47.1 (20-May-2024)
Feb 13 19:34:01.221065 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml).
Feb 13 19:34:01.221123 systemd[1]: Reached target system-config.target - Load system-provided cloud configs.
Feb 13 19:34:01.228480 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url).
Feb 13 19:34:01.228508 systemd[1]: Reached target user-config.target - Load user-provided cloud configs.
Feb 13 19:34:01.232024 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 39 scanned by (udev-worker) (1409)
Feb 13 19:34:01.231450 (ntainerd)[1496]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR
Feb 13 19:34:01.236755 update_engine[1486]: I20250213 19:34:01.236636 1486 main.cc:92] Flatcar Update Engine starting
Feb 13 19:34:01.239621 jq[1497]: true
Feb 13 19:34:01.242087 systemd[1]: Started update-engine.service - Update Engine.
Feb 13 19:34:01.243930 update_engine[1486]: I20250213 19:34:01.243865 1486 update_check_scheduler.cc:74] Next update check in 9m38s
Feb 13 19:34:01.246001 kernel: EXT4-fs (vda9): resizing filesystem from 553472 to 1864699 blocks
Feb 13 19:34:01.254353 systemd[1]: Started locksmithd.service - Cluster reboot manager.
Feb 13 19:34:01.260656 systemd-logind[1483]: Watching system buttons on /dev/input/event1 (Power Button)
Feb 13 19:34:01.260702 systemd-logind[1483]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard)
Feb 13 19:34:01.262890 systemd-logind[1483]: New seat seat0.
Feb 13 19:34:01.265292 systemd[1]: Started systemd-logind.service - User Login Management.
Feb 13 19:34:01.333340 locksmithd[1509]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot"
Feb 13 19:34:01.372603 sshd_keygen[1488]: ssh-keygen: generating new host keys: RSA ECDSA ED25519
Feb 13 19:34:01.400377 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys.
Feb 13 19:34:01.411313 systemd[1]: Starting issuegen.service - Generate /run/issue...
Feb 13 19:34:01.420556 systemd[1]: issuegen.service: Deactivated successfully.
Feb 13 19:34:01.420879 systemd[1]: Finished issuegen.service - Generate /run/issue.
Feb 13 19:34:01.424406 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions...
Feb 13 19:34:01.451663 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions.
Feb 13 19:34:01.462562 systemd[1]: Started getty@tty1.service - Getty on tty1.
Feb 13 19:34:01.465263 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0.
Feb 13 19:34:01.466589 systemd[1]: Reached target getty.target - Login Prompts.
Feb 13 19:34:01.540021 kernel: EXT4-fs (vda9): resized filesystem to 1864699
Feb 13 19:34:01.610922 extend-filesystems[1499]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required
Feb 13 19:34:01.610922 extend-filesystems[1499]: old_desc_blocks = 1, new_desc_blocks = 1
Feb 13 19:34:01.610922 extend-filesystems[1499]: The filesystem on /dev/vda9 is now 1864699 (4k) blocks long.
Feb 13 19:34:01.616301 extend-filesystems[1475]: Resized filesystem in /dev/vda9
Feb 13 19:34:01.617586 bash[1523]: Updated "/home/core/.ssh/authorized_keys"
Feb 13 19:34:01.619808 systemd[1]: extend-filesystems.service: Deactivated successfully.
Feb 13 19:34:01.620172 systemd[1]: Finished extend-filesystems.service - Extend Filesystems.
Feb 13 19:34:01.622340 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition.
Feb 13 19:34:01.626295 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met.
Feb 13 19:34:01.969645 containerd[1496]: time="2025-02-13T19:34:01.969384092Z" level=info msg="starting containerd" revision=9b2ad7760328148397346d10c7b2004271249db4 version=v1.7.23
Feb 13 19:34:02.007630 containerd[1496]: time="2025-02-13T19:34:02.007450360Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
Feb 13 19:34:02.011139 containerd[1496]: time="2025-02-13T19:34:02.011076831Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.74-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1
Feb 13 19:34:02.011191 containerd[1496]: time="2025-02-13T19:34:02.011135581Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
Feb 13 19:34:02.011191 containerd[1496]: time="2025-02-13T19:34:02.011167401Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
Feb 13 19:34:02.011671 containerd[1496]: time="2025-02-13T19:34:02.011601355Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
Feb 13 19:34:02.011671 containerd[1496]: time="2025-02-13T19:34:02.011645277Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
Feb 13 19:34:02.012446 containerd[1496]: time="2025-02-13T19:34:02.012388671Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
Feb 13 19:34:02.012446 containerd[1496]: time="2025-02-13T19:34:02.012431321Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
Feb 13 19:34:02.012855 containerd[1496]: time="2025-02-13T19:34:02.012806976Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
Feb 13 19:34:02.012855 containerd[1496]: time="2025-02-13T19:34:02.012834177Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
Feb 13 19:34:02.012855 containerd[1496]: time="2025-02-13T19:34:02.012851940Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1
Feb 13 19:34:02.012942 containerd[1496]: time="2025-02-13T19:34:02.012864554Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
Feb 13 19:34:02.013048 containerd[1496]: time="2025-02-13T19:34:02.013013022Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
Feb 13 19:34:02.013422 containerd[1496]: time="2025-02-13T19:34:02.013368679Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
Feb 13 19:34:02.013656 containerd[1496]: time="2025-02-13T19:34:02.013612827Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
Feb 13 19:34:02.013656 containerd[1496]: time="2025-02-13T19:34:02.013641020Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
Feb 13 19:34:02.013854 containerd[1496]: time="2025-02-13T19:34:02.013818122Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
Feb 13 19:34:02.013996 containerd[1496]: time="2025-02-13T19:34:02.013938007Z" level=info msg="metadata content store policy set" policy=shared
Feb 13 19:34:02.021207 containerd[1496]: time="2025-02-13T19:34:02.021134232Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
Feb 13 19:34:02.021318 containerd[1496]: time="2025-02-13T19:34:02.021270368Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
Feb 13 19:34:02.021318 containerd[1496]: time="2025-02-13T19:34:02.021311575Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
Feb 13 19:34:02.021382 containerd[1496]: time="2025-02-13T19:34:02.021340609Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
Feb 13 19:34:02.021423 containerd[1496]: time="2025-02-13T19:34:02.021382067Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
Feb 13 19:34:02.022719 containerd[1496]: time="2025-02-13T19:34:02.022682195Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
Feb 13 19:34:02.024060 containerd[1496]: time="2025-02-13T19:34:02.023998394Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
Feb 13 19:34:02.024574 containerd[1496]: time="2025-02-13T19:34:02.024520903Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
Feb 13 19:34:02.024574 containerd[1496]: time="2025-02-13T19:34:02.024558384Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
Feb 13 19:34:02.024574 containerd[1496]: time="2025-02-13T19:34:02.024580395Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
Feb 13 19:34:02.024574 containerd[1496]: time="2025-02-13T19:34:02.024595964Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
Feb 13 19:34:02.024817 containerd[1496]: time="2025-02-13T19:34:02.024610582Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
Feb 13 19:34:02.024817 containerd[1496]: time="2025-02-13T19:34:02.024624848Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
Feb 13 19:34:02.024817 containerd[1496]: time="2025-02-13T19:34:02.024650657Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
Feb 13 19:34:02.024817 containerd[1496]: time="2025-02-13T19:34:02.024669953Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
Feb 13 19:34:02.024817 containerd[1496]: time="2025-02-13T19:34:02.024683498Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
Feb 13 19:34:02.024817 containerd[1496]: time="2025-02-13T19:34:02.024699769Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
Feb 13 19:34:02.024817 containerd[1496]: time="2025-02-13T19:34:02.024712242Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
Feb 13 19:34:02.024817 containerd[1496]: time="2025-02-13T19:34:02.024743992Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
Feb 13 19:34:02.024817 containerd[1496]: time="2025-02-13T19:34:02.024758058Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
Feb 13 19:34:02.024817 containerd[1496]: time="2025-02-13T19:34:02.024771263Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
Feb 13 19:34:02.024817 containerd[1496]: time="2025-02-13T19:34:02.024787313Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..."
type=io.containerd.grpc.v1 Feb 13 19:34:02.024817 containerd[1496]: time="2025-02-13T19:34:02.024827238Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 Feb 13 19:34:02.025175 containerd[1496]: time="2025-02-13T19:34:02.024843058Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 Feb 13 19:34:02.025175 containerd[1496]: time="2025-02-13T19:34:02.024866722Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 Feb 13 19:34:02.025175 containerd[1496]: time="2025-02-13T19:34:02.024882702Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 Feb 13 19:34:02.025175 containerd[1496]: time="2025-02-13T19:34:02.024905164Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1 Feb 13 19:34:02.025175 containerd[1496]: time="2025-02-13T19:34:02.024926183Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1 Feb 13 19:34:02.025175 containerd[1496]: time="2025-02-13T19:34:02.024938487Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 Feb 13 19:34:02.025175 containerd[1496]: time="2025-02-13T19:34:02.024953785Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1 Feb 13 19:34:02.025175 containerd[1496]: time="2025-02-13T19:34:02.024988170Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 Feb 13 19:34:02.025175 containerd[1496]: time="2025-02-13T19:34:02.025048904Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1 Feb 13 19:34:02.025175 containerd[1496]: time="2025-02-13T19:34:02.025074612Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." 
type=io.containerd.grpc.v1 Feb 13 19:34:02.025175 containerd[1496]: time="2025-02-13T19:34:02.025088808Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 Feb 13 19:34:02.025175 containerd[1496]: time="2025-02-13T19:34:02.025099969Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Feb 13 19:34:02.025175 containerd[1496]: time="2025-02-13T19:34:02.025160944Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 Feb 13 19:34:02.025175 containerd[1496]: time="2025-02-13T19:34:02.025180130Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1 Feb 13 19:34:02.025451 containerd[1496]: time="2025-02-13T19:34:02.025191491Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Feb 13 19:34:02.025451 containerd[1496]: time="2025-02-13T19:34:02.025204195Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1 Feb 13 19:34:02.025451 containerd[1496]: time="2025-02-13T19:34:02.025214003Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 Feb 13 19:34:02.025451 containerd[1496]: time="2025-02-13T19:34:02.025228781Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1 Feb 13 19:34:02.025451 containerd[1496]: time="2025-02-13T19:34:02.025252485Z" level=info msg="NRI interface is disabled by configuration." Feb 13 19:34:02.025451 containerd[1496]: time="2025-02-13T19:34:02.025267704Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." 
type=io.containerd.grpc.v1 Feb 13 19:34:02.025626 containerd[1496]: time="2025-02-13T19:34:02.025578577Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false 
UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Feb 13 19:34:02.025626 containerd[1496]: time="2025-02-13T19:34:02.025632278Z" level=info msg="Connect containerd service" Feb 13 19:34:02.025799 containerd[1496]: time="2025-02-13T19:34:02.025681470Z" level=info msg="using legacy CRI server" Feb 13 19:34:02.025799 containerd[1496]: time="2025-02-13T19:34:02.025693242Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Feb 13 19:34:02.025871 containerd[1496]: time="2025-02-13T19:34:02.025850938Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Feb 13 19:34:02.026782 containerd[1496]: time="2025-02-13T19:34:02.026744784Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Feb 13 19:34:02.026990 containerd[1496]: time="2025-02-13T19:34:02.026918971Z" level=info msg="Start subscribing containerd event" Feb 13 19:34:02.027016 containerd[1496]: time="2025-02-13T19:34:02.027004892Z" level=info msg="Start recovering state" Feb 13 19:34:02.027107 containerd[1496]: time="2025-02-13T19:34:02.027085643Z" level=info msg="Start event monitor" Feb 13 19:34:02.027168 containerd[1496]: time="2025-02-13T19:34:02.027117673Z" level=info msg="Start 
snapshots syncer" Feb 13 19:34:02.027168 containerd[1496]: time="2025-02-13T19:34:02.027131008Z" level=info msg="Start cni network conf syncer for default" Feb 13 19:34:02.027168 containerd[1496]: time="2025-02-13T19:34:02.027140767Z" level=info msg="Start streaming server" Feb 13 19:34:02.027279 containerd[1496]: time="2025-02-13T19:34:02.027221518Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Feb 13 19:34:02.027333 containerd[1496]: time="2025-02-13T19:34:02.027310826Z" level=info msg=serving... address=/run/containerd/containerd.sock Feb 13 19:34:02.027422 containerd[1496]: time="2025-02-13T19:34:02.027394873Z" level=info msg="containerd successfully booted in 0.061690s" Feb 13 19:34:02.027486 systemd[1]: Started containerd.service - containerd container runtime. Feb 13 19:34:02.839333 systemd-networkd[1440]: eth0: Gained IPv6LL Feb 13 19:34:02.844286 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Feb 13 19:34:02.846615 systemd[1]: Reached target network-online.target - Network is Online. Feb 13 19:34:02.860242 systemd[1]: Starting coreos-metadata.service - QEMU metadata agent... Feb 13 19:34:02.863273 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Feb 13 19:34:02.865745 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Feb 13 19:34:02.892502 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Feb 13 19:34:02.895537 systemd[1]: coreos-metadata.service: Deactivated successfully. Feb 13 19:34:02.895769 systemd[1]: Finished coreos-metadata.service - QEMU metadata agent. Feb 13 19:34:02.899698 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Feb 13 19:34:04.333694 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Feb 13 19:34:04.336409 systemd[1]: Reached target multi-user.target - Multi-User System. 
Feb 13 19:34:04.339252 systemd[1]: Startup finished in 1.401s (kernel) + 7.141s (initrd) + 7.092s (userspace) = 15.634s. Feb 13 19:34:04.343306 (kubelet)[1578]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Feb 13 19:34:04.409931 agetty[1542]: failed to open credentials directory Feb 13 19:34:04.414336 agetty[1543]: failed to open credentials directory Feb 13 19:34:04.520353 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Feb 13 19:34:04.521846 systemd[1]: Started sshd@0-10.0.0.147:22-10.0.0.1:43692.service - OpenSSH per-connection server daemon (10.0.0.1:43692). Feb 13 19:34:04.584298 sshd[1584]: Accepted publickey for core from 10.0.0.1 port 43692 ssh2: RSA SHA256:ENn9hOvI2hLUXcV6iHA8gc9Z4CTEPvGDMkoVtxIuYbg Feb 13 19:34:04.587010 sshd-session[1584]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 19:34:04.600509 systemd-logind[1483]: New session 1 of user core. Feb 13 19:34:04.601883 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Feb 13 19:34:04.613370 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Feb 13 19:34:04.633891 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Feb 13 19:34:04.643644 systemd[1]: Starting user@500.service - User Manager for UID 500... Feb 13 19:34:04.652615 (systemd)[1593]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Feb 13 19:34:04.799895 systemd[1593]: Queued start job for default target default.target. Feb 13 19:34:04.813457 systemd[1593]: Created slice app.slice - User Application Slice. Feb 13 19:34:04.813487 systemd[1593]: Reached target paths.target - Paths. Feb 13 19:34:04.813502 systemd[1593]: Reached target timers.target - Timers. Feb 13 19:34:04.815230 systemd[1593]: Starting dbus.socket - D-Bus User Message Bus Socket... 
Feb 13 19:34:04.828989 systemd[1593]: Listening on dbus.socket - D-Bus User Message Bus Socket. Feb 13 19:34:04.829138 systemd[1593]: Reached target sockets.target - Sockets. Feb 13 19:34:04.829157 systemd[1593]: Reached target basic.target - Basic System. Feb 13 19:34:04.829214 systemd[1593]: Reached target default.target - Main User Target. Feb 13 19:34:04.829262 systemd[1593]: Startup finished in 163ms. Feb 13 19:34:04.829794 systemd[1]: Started user@500.service - User Manager for UID 500. Feb 13 19:34:04.831736 systemd[1]: Started session-1.scope - Session 1 of User core. Feb 13 19:34:04.937169 systemd[1]: Started sshd@1-10.0.0.147:22-10.0.0.1:43706.service - OpenSSH per-connection server daemon (10.0.0.1:43706). Feb 13 19:34:04.999307 sshd[1606]: Accepted publickey for core from 10.0.0.1 port 43706 ssh2: RSA SHA256:ENn9hOvI2hLUXcV6iHA8gc9Z4CTEPvGDMkoVtxIuYbg Feb 13 19:34:05.000607 sshd-session[1606]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 19:34:05.006064 systemd-logind[1483]: New session 2 of user core. Feb 13 19:34:05.019191 systemd[1]: Started session-2.scope - Session 2 of User core. Feb 13 19:34:05.075213 kubelet[1578]: E0213 19:34:05.075122 1578 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Feb 13 19:34:05.076811 sshd[1608]: Connection closed by 10.0.0.1 port 43706 Feb 13 19:34:05.077226 sshd-session[1606]: pam_unix(sshd:session): session closed for user core Feb 13 19:34:05.096929 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Feb 13 19:34:05.097201 systemd[1]: kubelet.service: Failed with result 'exit-code'. Feb 13 19:34:05.097535 systemd[1]: kubelet.service: Consumed 2.049s CPU time. 
Feb 13 19:34:05.098112 systemd[1]: sshd@1-10.0.0.147:22-10.0.0.1:43706.service: Deactivated successfully. Feb 13 19:34:05.099927 systemd[1]: session-2.scope: Deactivated successfully. Feb 13 19:34:05.101525 systemd-logind[1483]: Session 2 logged out. Waiting for processes to exit. Feb 13 19:34:05.115350 systemd[1]: Started sshd@2-10.0.0.147:22-10.0.0.1:43718.service - OpenSSH per-connection server daemon (10.0.0.1:43718). Feb 13 19:34:05.116158 systemd-logind[1483]: Removed session 2. Feb 13 19:34:05.152991 sshd[1614]: Accepted publickey for core from 10.0.0.1 port 43718 ssh2: RSA SHA256:ENn9hOvI2hLUXcV6iHA8gc9Z4CTEPvGDMkoVtxIuYbg Feb 13 19:34:05.154542 sshd-session[1614]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 19:34:05.158958 systemd-logind[1483]: New session 3 of user core. Feb 13 19:34:05.173127 systemd[1]: Started session-3.scope - Session 3 of User core. Feb 13 19:34:05.223148 sshd[1618]: Connection closed by 10.0.0.1 port 43718 Feb 13 19:34:05.223439 sshd-session[1614]: pam_unix(sshd:session): session closed for user core Feb 13 19:34:05.238165 systemd[1]: sshd@2-10.0.0.147:22-10.0.0.1:43718.service: Deactivated successfully. Feb 13 19:34:05.240249 systemd[1]: session-3.scope: Deactivated successfully. Feb 13 19:34:05.241950 systemd-logind[1483]: Session 3 logged out. Waiting for processes to exit. Feb 13 19:34:05.251362 systemd[1]: Started sshd@3-10.0.0.147:22-10.0.0.1:43728.service - OpenSSH per-connection server daemon (10.0.0.1:43728). Feb 13 19:34:05.252394 systemd-logind[1483]: Removed session 3. Feb 13 19:34:05.288511 sshd[1623]: Accepted publickey for core from 10.0.0.1 port 43728 ssh2: RSA SHA256:ENn9hOvI2hLUXcV6iHA8gc9Z4CTEPvGDMkoVtxIuYbg Feb 13 19:34:05.290624 sshd-session[1623]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 19:34:05.295739 systemd-logind[1483]: New session 4 of user core. 
Feb 13 19:34:05.302125 systemd[1]: Started session-4.scope - Session 4 of User core. Feb 13 19:34:05.356906 sshd[1625]: Connection closed by 10.0.0.1 port 43728 Feb 13 19:34:05.357581 sshd-session[1623]: pam_unix(sshd:session): session closed for user core Feb 13 19:34:05.370121 systemd[1]: sshd@3-10.0.0.147:22-10.0.0.1:43728.service: Deactivated successfully. Feb 13 19:34:05.372212 systemd[1]: session-4.scope: Deactivated successfully. Feb 13 19:34:05.373746 systemd-logind[1483]: Session 4 logged out. Waiting for processes to exit. Feb 13 19:34:05.375085 systemd[1]: Started sshd@4-10.0.0.147:22-10.0.0.1:43730.service - OpenSSH per-connection server daemon (10.0.0.1:43730). Feb 13 19:34:05.375849 systemd-logind[1483]: Removed session 4. Feb 13 19:34:05.420258 sshd[1630]: Accepted publickey for core from 10.0.0.1 port 43730 ssh2: RSA SHA256:ENn9hOvI2hLUXcV6iHA8gc9Z4CTEPvGDMkoVtxIuYbg Feb 13 19:34:05.421793 sshd-session[1630]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 19:34:05.426303 systemd-logind[1483]: New session 5 of user core. Feb 13 19:34:05.440121 systemd[1]: Started session-5.scope - Session 5 of User core. Feb 13 19:34:05.500827 sudo[1633]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Feb 13 19:34:05.501234 sudo[1633]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Feb 13 19:34:05.522193 sudo[1633]: pam_unix(sudo:session): session closed for user root Feb 13 19:34:05.524447 sshd[1632]: Connection closed by 10.0.0.1 port 43730 Feb 13 19:34:05.524989 sshd-session[1630]: pam_unix(sshd:session): session closed for user core Feb 13 19:34:05.532858 systemd[1]: sshd@4-10.0.0.147:22-10.0.0.1:43730.service: Deactivated successfully. Feb 13 19:34:05.534616 systemd[1]: session-5.scope: Deactivated successfully. Feb 13 19:34:05.536017 systemd-logind[1483]: Session 5 logged out. Waiting for processes to exit. 
Feb 13 19:34:05.545310 systemd[1]: Started sshd@5-10.0.0.147:22-10.0.0.1:43740.service - OpenSSH per-connection server daemon (10.0.0.1:43740). Feb 13 19:34:05.546491 systemd-logind[1483]: Removed session 5. Feb 13 19:34:05.587316 sshd[1638]: Accepted publickey for core from 10.0.0.1 port 43740 ssh2: RSA SHA256:ENn9hOvI2hLUXcV6iHA8gc9Z4CTEPvGDMkoVtxIuYbg Feb 13 19:34:05.589622 sshd-session[1638]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 19:34:05.594923 systemd-logind[1483]: New session 6 of user core. Feb 13 19:34:05.611206 systemd[1]: Started session-6.scope - Session 6 of User core. Feb 13 19:34:05.667672 sudo[1642]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Feb 13 19:34:05.668044 sudo[1642]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Feb 13 19:34:05.672769 sudo[1642]: pam_unix(sudo:session): session closed for user root Feb 13 19:34:05.679320 sudo[1641]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules Feb 13 19:34:05.679656 sudo[1641]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Feb 13 19:34:05.700392 systemd[1]: Starting audit-rules.service - Load Audit Rules... Feb 13 19:34:05.738340 augenrules[1664]: No rules Feb 13 19:34:05.740498 systemd[1]: audit-rules.service: Deactivated successfully. Feb 13 19:34:05.740748 systemd[1]: Finished audit-rules.service - Load Audit Rules. Feb 13 19:34:05.742033 sudo[1641]: pam_unix(sudo:session): session closed for user root Feb 13 19:34:05.743697 sshd[1640]: Connection closed by 10.0.0.1 port 43740 Feb 13 19:34:05.744087 sshd-session[1638]: pam_unix(sshd:session): session closed for user core Feb 13 19:34:05.759322 systemd[1]: sshd@5-10.0.0.147:22-10.0.0.1:43740.service: Deactivated successfully. Feb 13 19:34:05.761440 systemd[1]: session-6.scope: Deactivated successfully. 
Feb 13 19:34:05.763227 systemd-logind[1483]: Session 6 logged out. Waiting for processes to exit. Feb 13 19:34:05.774311 systemd[1]: Started sshd@6-10.0.0.147:22-10.0.0.1:43754.service - OpenSSH per-connection server daemon (10.0.0.1:43754). Feb 13 19:34:05.775433 systemd-logind[1483]: Removed session 6. Feb 13 19:34:05.812042 sshd[1672]: Accepted publickey for core from 10.0.0.1 port 43754 ssh2: RSA SHA256:ENn9hOvI2hLUXcV6iHA8gc9Z4CTEPvGDMkoVtxIuYbg Feb 13 19:34:05.813715 sshd-session[1672]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 19:34:05.818464 systemd-logind[1483]: New session 7 of user core. Feb 13 19:34:05.828153 systemd[1]: Started session-7.scope - Session 7 of User core. Feb 13 19:34:05.886577 sudo[1675]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Feb 13 19:34:05.887027 sudo[1675]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Feb 13 19:34:05.913365 systemd[1]: Starting coreos-metadata.service - QEMU metadata agent... Feb 13 19:34:05.941064 systemd[1]: coreos-metadata.service: Deactivated successfully. Feb 13 19:34:05.941386 systemd[1]: Finished coreos-metadata.service - QEMU metadata agent. Feb 13 19:34:06.616612 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Feb 13 19:34:06.616832 systemd[1]: kubelet.service: Consumed 2.049s CPU time. Feb 13 19:34:06.635400 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Feb 13 19:34:06.667027 systemd[1]: Reloading requested from client PID 1716 ('systemctl') (unit session-7.scope)... Feb 13 19:34:06.667045 systemd[1]: Reloading... Feb 13 19:34:06.740002 zram_generator::config[1757]: No configuration found. Feb 13 19:34:06.992554 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. 
Feb 13 19:34:07.071114 systemd[1]: Reloading finished in 403 ms. Feb 13 19:34:07.116864 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Feb 13 19:34:07.117042 systemd[1]: kubelet.service: Failed with result 'signal'. Feb 13 19:34:07.117366 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Feb 13 19:34:07.119153 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Feb 13 19:34:07.356500 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Feb 13 19:34:07.361935 (kubelet)[1803]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Feb 13 19:34:07.408802 kubelet[1803]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Feb 13 19:34:07.410025 kubelet[1803]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Feb 13 19:34:07.410025 kubelet[1803]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Feb 13 19:34:07.410025 kubelet[1803]: I0213 19:34:07.409344 1803 server.go:215] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Feb 13 19:34:07.703165 kubelet[1803]: I0213 19:34:07.703029 1803 server.go:520] "Kubelet version" kubeletVersion="v1.32.0" Feb 13 19:34:07.703165 kubelet[1803]: I0213 19:34:07.703070 1803 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Feb 13 19:34:07.703379 kubelet[1803]: I0213 19:34:07.703356 1803 server.go:954] "Client rotation is on, will bootstrap in background" Feb 13 19:34:07.727068 kubelet[1803]: I0213 19:34:07.726889 1803 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Feb 13 19:34:07.741649 kubelet[1803]: E0213 19:34:07.741576 1803 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Feb 13 19:34:07.741649 kubelet[1803]: I0213 19:34:07.741641 1803 server.go:1421] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." Feb 13 19:34:07.748514 kubelet[1803]: I0213 19:34:07.748465 1803 server.go:772] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Feb 13 19:34:07.750074 kubelet[1803]: I0213 19:34:07.750012 1803 container_manager_linux.go:268] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Feb 13 19:34:07.750249 kubelet[1803]: I0213 19:34:07.750046 1803 container_manager_linux.go:273] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"10.0.0.147","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Feb 13 19:34:07.750419 kubelet[1803]: I0213 19:34:07.750255 1803 topology_manager.go:138] "Creating topology manager with none policy" 
Feb 13 19:34:07.750419 kubelet[1803]: I0213 19:34:07.750266 1803 container_manager_linux.go:304] "Creating device plugin manager" Feb 13 19:34:07.750477 kubelet[1803]: I0213 19:34:07.750440 1803 state_mem.go:36] "Initialized new in-memory state store" Feb 13 19:34:07.758034 kubelet[1803]: I0213 19:34:07.757961 1803 kubelet.go:446] "Attempting to sync node with API server" Feb 13 19:34:07.758034 kubelet[1803]: I0213 19:34:07.758027 1803 kubelet.go:341] "Adding static pod path" path="/etc/kubernetes/manifests" Feb 13 19:34:07.758034 kubelet[1803]: I0213 19:34:07.758054 1803 kubelet.go:352] "Adding apiserver pod source" Feb 13 19:34:07.758471 kubelet[1803]: I0213 19:34:07.758067 1803 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Feb 13 19:34:07.758471 kubelet[1803]: E0213 19:34:07.758236 1803 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 19:34:07.758471 kubelet[1803]: E0213 19:34:07.758366 1803 file.go:98] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 19:34:07.761138 kubelet[1803]: I0213 19:34:07.761099 1803 kuberuntime_manager.go:269] "Container runtime initialized" containerRuntime="containerd" version="v1.7.23" apiVersion="v1" Feb 13 19:34:07.761547 kubelet[1803]: I0213 19:34:07.761521 1803 kubelet.go:890] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Feb 13 19:34:07.763600 kubelet[1803]: W0213 19:34:07.763557 1803 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. 
Feb 13 19:34:07.765679 kubelet[1803]: I0213 19:34:07.765629 1803 watchdog_linux.go:99] "Systemd watchdog is not enabled" Feb 13 19:34:07.765679 kubelet[1803]: I0213 19:34:07.765681 1803 server.go:1287] "Started kubelet" Feb 13 19:34:07.766295 kubelet[1803]: I0213 19:34:07.765822 1803 server.go:169] "Starting to listen" address="0.0.0.0" port=10250 Feb 13 19:34:07.767354 kubelet[1803]: I0213 19:34:07.767301 1803 server.go:490] "Adding debug handlers to kubelet server" Feb 13 19:34:07.769984 kubelet[1803]: I0213 19:34:07.769881 1803 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Feb 13 19:34:07.770462 kubelet[1803]: I0213 19:34:07.770356 1803 server.go:243] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Feb 13 19:34:07.771068 kubelet[1803]: E0213 19:34:07.771016 1803 kubelet.go:1561] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Feb 13 19:34:07.771804 kubelet[1803]: I0213 19:34:07.771768 1803 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Feb 13 19:34:07.772314 kubelet[1803]: I0213 19:34:07.772039 1803 volume_manager.go:297] "Starting Kubelet Volume Manager" Feb 13 19:34:07.772314 kubelet[1803]: I0213 19:34:07.772167 1803 desired_state_of_world_populator.go:149] "Desired state populator starts to run" Feb 13 19:34:07.772314 kubelet[1803]: E0213 19:34:07.772045 1803 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"10.0.0.147\" not found" Feb 13 19:34:07.772314 kubelet[1803]: I0213 19:34:07.772249 1803 reconciler.go:26] "Reconciler: start to sync state" Feb 13 19:34:07.772797 kubelet[1803]: I0213 19:34:07.772629 1803 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Feb 13 19:34:07.772797 kubelet[1803]: I0213 19:34:07.772638 
1803 factory.go:221] Registration of the systemd container factory successfully Feb 13 19:34:07.772797 kubelet[1803]: I0213 19:34:07.772737 1803 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Feb 13 19:34:07.775190 kubelet[1803]: I0213 19:34:07.774689 1803 factory.go:221] Registration of the containerd container factory successfully Feb 13 19:34:07.785803 kubelet[1803]: E0213 19:34:07.785635 1803 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"10.0.0.147\" not found" node="10.0.0.147" Feb 13 19:34:07.792716 kubelet[1803]: I0213 19:34:07.792662 1803 cpu_manager.go:221] "Starting CPU manager" policy="none" Feb 13 19:34:07.792716 kubelet[1803]: I0213 19:34:07.792686 1803 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Feb 13 19:34:07.792716 kubelet[1803]: I0213 19:34:07.792705 1803 state_mem.go:36] "Initialized new in-memory state store" Feb 13 19:34:07.796654 kubelet[1803]: I0213 19:34:07.796622 1803 policy_none.go:49] "None policy: Start" Feb 13 19:34:07.796654 kubelet[1803]: I0213 19:34:07.796647 1803 memory_manager.go:186] "Starting memorymanager" policy="None" Feb 13 19:34:07.796654 kubelet[1803]: I0213 19:34:07.796660 1803 state_mem.go:35] "Initializing new in-memory state store" Feb 13 19:34:07.851743 kubelet[1803]: I0213 19:34:07.851679 1803 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Feb 13 19:34:07.853235 kubelet[1803]: I0213 19:34:07.853189 1803 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Feb 13 19:34:07.853235 kubelet[1803]: I0213 19:34:07.853244 1803 status_manager.go:227] "Starting to sync pod status with apiserver" Feb 13 19:34:07.853370 kubelet[1803]: I0213 19:34:07.853272 1803 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." Feb 13 19:34:07.853370 kubelet[1803]: I0213 19:34:07.853280 1803 kubelet.go:2388] "Starting kubelet main sync loop" Feb 13 19:34:07.853674 kubelet[1803]: E0213 19:34:07.853383 1803 kubelet.go:2412] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Feb 13 19:34:07.872638 kubelet[1803]: E0213 19:34:07.872514 1803 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"10.0.0.147\" not found" Feb 13 19:34:07.874013 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Feb 13 19:34:07.889076 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Feb 13 19:34:07.893837 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. Feb 13 19:34:07.902419 kubelet[1803]: I0213 19:34:07.902368 1803 manager.go:519] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Feb 13 19:34:07.902769 kubelet[1803]: I0213 19:34:07.902705 1803 eviction_manager.go:189] "Eviction manager: starting control loop" Feb 13 19:34:07.902769 kubelet[1803]: I0213 19:34:07.902729 1803 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Feb 13 19:34:07.903458 kubelet[1803]: I0213 19:34:07.903127 1803 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Feb 13 19:34:07.904615 kubelet[1803]: E0213 19:34:07.904578 1803 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. 
Ignoring." err="no imagefs label for configured runtime" Feb 13 19:34:07.904754 kubelet[1803]: E0213 19:34:07.904659 1803 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"10.0.0.147\" not found" Feb 13 19:34:08.004935 kubelet[1803]: I0213 19:34:08.004790 1803 kubelet_node_status.go:76] "Attempting to register node" node="10.0.0.147" Feb 13 19:34:08.064679 kubelet[1803]: I0213 19:34:08.064629 1803 kubelet_node_status.go:79] "Successfully registered node" node="10.0.0.147" Feb 13 19:34:08.064679 kubelet[1803]: E0213 19:34:08.064664 1803 kubelet_node_status.go:549] "Error updating node status, will retry" err="error getting node \"10.0.0.147\": node \"10.0.0.147\" not found" Feb 13 19:34:08.172803 kubelet[1803]: I0213 19:34:08.172763 1803 kuberuntime_manager.go:1702] "Updating runtime config through cri with podcidr" CIDR="192.168.1.0/24" Feb 13 19:34:08.173268 containerd[1496]: time="2025-02-13T19:34:08.173204290Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Feb 13 19:34:08.173608 kubelet[1803]: I0213 19:34:08.173493 1803 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.1.0/24" Feb 13 19:34:08.418914 sudo[1675]: pam_unix(sudo:session): session closed for user root Feb 13 19:34:08.420509 sshd[1674]: Connection closed by 10.0.0.1 port 43754 Feb 13 19:34:08.420858 sshd-session[1672]: pam_unix(sshd:session): session closed for user core Feb 13 19:34:08.424829 systemd[1]: sshd@6-10.0.0.147:22-10.0.0.1:43754.service: Deactivated successfully. Feb 13 19:34:08.426878 systemd[1]: session-7.scope: Deactivated successfully. Feb 13 19:34:08.427598 systemd-logind[1483]: Session 7 logged out. Waiting for processes to exit. Feb 13 19:34:08.428602 systemd-logind[1483]: Removed session 7. 
Feb 13 19:34:08.706391 kubelet[1803]: I0213 19:34:08.706178 1803 transport.go:147] "Certificate rotation detected, shutting down client connections to start using new credentials" Feb 13 19:34:08.707012 kubelet[1803]: W0213 19:34:08.706590 1803 reflector.go:492] k8s.io/client-go/informers/factory.go:160: watch of *v1.RuntimeClass ended with: very short watch: k8s.io/client-go/informers/factory.go:160: Unexpected watch close - watch lasted less than a second and no items received Feb 13 19:34:08.707012 kubelet[1803]: W0213 19:34:08.706687 1803 reflector.go:492] k8s.io/client-go/informers/factory.go:160: watch of *v1.CSIDriver ended with: very short watch: k8s.io/client-go/informers/factory.go:160: Unexpected watch close - watch lasted less than a second and no items received Feb 13 19:34:08.707012 kubelet[1803]: W0213 19:34:08.706758 1803 reflector.go:492] k8s.io/client-go/informers/factory.go:160: watch of *v1.Service ended with: very short watch: k8s.io/client-go/informers/factory.go:160: Unexpected watch close - watch lasted less than a second and no items received Feb 13 19:34:08.759537 kubelet[1803]: I0213 19:34:08.759466 1803 apiserver.go:52] "Watching apiserver" Feb 13 19:34:08.759747 kubelet[1803]: E0213 19:34:08.759481 1803 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 19:34:08.765325 kubelet[1803]: E0213 19:34:08.765207 1803 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-6zx89" podUID="eabcb63e-3a6f-48e9-b953-604013f3f97d" Feb 13 19:34:08.773119 kubelet[1803]: I0213 19:34:08.772519 1803 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world" Feb 13 19:34:08.774532 systemd[1]: Created slice 
kubepods-besteffort-pod1eaa5caa_35cf_4690_91e6_ff359e02b91f.slice - libcontainer container kubepods-besteffort-pod1eaa5caa_35cf_4690_91e6_ff359e02b91f.slice. Feb 13 19:34:08.776641 kubelet[1803]: I0213 19:34:08.776601 1803 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/1eaa5caa-35cf-4690-91e6-ff359e02b91f-var-lib-calico\") pod \"calico-node-h9nw2\" (UID: \"1eaa5caa-35cf-4690-91e6-ff359e02b91f\") " pod="calico-system/calico-node-h9nw2" Feb 13 19:34:08.776693 kubelet[1803]: I0213 19:34:08.776641 1803 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/1eaa5caa-35cf-4690-91e6-ff359e02b91f-cni-net-dir\") pod \"calico-node-h9nw2\" (UID: \"1eaa5caa-35cf-4690-91e6-ff359e02b91f\") " pod="calico-system/calico-node-h9nw2" Feb 13 19:34:08.776693 kubelet[1803]: I0213 19:34:08.776665 1803 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/1eaa5caa-35cf-4690-91e6-ff359e02b91f-cni-log-dir\") pod \"calico-node-h9nw2\" (UID: \"1eaa5caa-35cf-4690-91e6-ff359e02b91f\") " pod="calico-system/calico-node-h9nw2" Feb 13 19:34:08.776693 kubelet[1803]: I0213 19:34:08.776681 1803 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4smlk\" (UniqueName: \"kubernetes.io/projected/1eaa5caa-35cf-4690-91e6-ff359e02b91f-kube-api-access-4smlk\") pod \"calico-node-h9nw2\" (UID: \"1eaa5caa-35cf-4690-91e6-ff359e02b91f\") " pod="calico-system/calico-node-h9nw2" Feb 13 19:34:08.776784 kubelet[1803]: I0213 19:34:08.776700 1803 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/eabcb63e-3a6f-48e9-b953-604013f3f97d-varrun\") pod \"csi-node-driver-6zx89\" 
(UID: \"eabcb63e-3a6f-48e9-b953-604013f3f97d\") " pod="calico-system/csi-node-driver-6zx89" Feb 13 19:34:08.776784 kubelet[1803]: I0213 19:34:08.776719 1803 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/1eaa5caa-35cf-4690-91e6-ff359e02b91f-lib-modules\") pod \"calico-node-h9nw2\" (UID: \"1eaa5caa-35cf-4690-91e6-ff359e02b91f\") " pod="calico-system/calico-node-h9nw2" Feb 13 19:34:08.776784 kubelet[1803]: I0213 19:34:08.776733 1803 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/1eaa5caa-35cf-4690-91e6-ff359e02b91f-policysync\") pod \"calico-node-h9nw2\" (UID: \"1eaa5caa-35cf-4690-91e6-ff359e02b91f\") " pod="calico-system/calico-node-h9nw2" Feb 13 19:34:08.776784 kubelet[1803]: I0213 19:34:08.776754 1803 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/1eaa5caa-35cf-4690-91e6-ff359e02b91f-node-certs\") pod \"calico-node-h9nw2\" (UID: \"1eaa5caa-35cf-4690-91e6-ff359e02b91f\") " pod="calico-system/calico-node-h9nw2" Feb 13 19:34:08.776884 kubelet[1803]: I0213 19:34:08.776790 1803 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/eabcb63e-3a6f-48e9-b953-604013f3f97d-socket-dir\") pod \"csi-node-driver-6zx89\" (UID: \"eabcb63e-3a6f-48e9-b953-604013f3f97d\") " pod="calico-system/csi-node-driver-6zx89" Feb 13 19:34:08.776884 kubelet[1803]: I0213 19:34:08.776805 1803 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/b3afcf46-f070-4110-b9f8-6ae306b1afd0-lib-modules\") pod \"kube-proxy-t4qtb\" (UID: \"b3afcf46-f070-4110-b9f8-6ae306b1afd0\") " pod="kube-system/kube-proxy-t4qtb" Feb 13 
19:34:08.776884 kubelet[1803]: I0213 19:34:08.776819 1803 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1eaa5caa-35cf-4690-91e6-ff359e02b91f-tigera-ca-bundle\") pod \"calico-node-h9nw2\" (UID: \"1eaa5caa-35cf-4690-91e6-ff359e02b91f\") " pod="calico-system/calico-node-h9nw2" Feb 13 19:34:08.776884 kubelet[1803]: I0213 19:34:08.776852 1803 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/1eaa5caa-35cf-4690-91e6-ff359e02b91f-var-run-calico\") pod \"calico-node-h9nw2\" (UID: \"1eaa5caa-35cf-4690-91e6-ff359e02b91f\") " pod="calico-system/calico-node-h9nw2" Feb 13 19:34:08.776884 kubelet[1803]: I0213 19:34:08.776872 1803 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/b3afcf46-f070-4110-b9f8-6ae306b1afd0-xtables-lock\") pod \"kube-proxy-t4qtb\" (UID: \"b3afcf46-f070-4110-b9f8-6ae306b1afd0\") " pod="kube-system/kube-proxy-t4qtb" Feb 13 19:34:08.777023 kubelet[1803]: I0213 19:34:08.776891 1803 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/1eaa5caa-35cf-4690-91e6-ff359e02b91f-flexvol-driver-host\") pod \"calico-node-h9nw2\" (UID: \"1eaa5caa-35cf-4690-91e6-ff359e02b91f\") " pod="calico-system/calico-node-h9nw2" Feb 13 19:34:08.777023 kubelet[1803]: I0213 19:34:08.776911 1803 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9xlsz\" (UniqueName: \"kubernetes.io/projected/eabcb63e-3a6f-48e9-b953-604013f3f97d-kube-api-access-9xlsz\") pod \"csi-node-driver-6zx89\" (UID: \"eabcb63e-3a6f-48e9-b953-604013f3f97d\") " pod="calico-system/csi-node-driver-6zx89" Feb 13 19:34:08.777023 kubelet[1803]: I0213 
19:34:08.776930 1803 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/b3afcf46-f070-4110-b9f8-6ae306b1afd0-kube-proxy\") pod \"kube-proxy-t4qtb\" (UID: \"b3afcf46-f070-4110-b9f8-6ae306b1afd0\") " pod="kube-system/kube-proxy-t4qtb" Feb 13 19:34:08.777023 kubelet[1803]: I0213 19:34:08.776946 1803 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/eabcb63e-3a6f-48e9-b953-604013f3f97d-registration-dir\") pod \"csi-node-driver-6zx89\" (UID: \"eabcb63e-3a6f-48e9-b953-604013f3f97d\") " pod="calico-system/csi-node-driver-6zx89" Feb 13 19:34:08.777023 kubelet[1803]: I0213 19:34:08.776987 1803 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-42gnp\" (UniqueName: \"kubernetes.io/projected/b3afcf46-f070-4110-b9f8-6ae306b1afd0-kube-api-access-42gnp\") pod \"kube-proxy-t4qtb\" (UID: \"b3afcf46-f070-4110-b9f8-6ae306b1afd0\") " pod="kube-system/kube-proxy-t4qtb" Feb 13 19:34:08.777159 kubelet[1803]: I0213 19:34:08.777003 1803 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/1eaa5caa-35cf-4690-91e6-ff359e02b91f-xtables-lock\") pod \"calico-node-h9nw2\" (UID: \"1eaa5caa-35cf-4690-91e6-ff359e02b91f\") " pod="calico-system/calico-node-h9nw2" Feb 13 19:34:08.777159 kubelet[1803]: I0213 19:34:08.777019 1803 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/1eaa5caa-35cf-4690-91e6-ff359e02b91f-cni-bin-dir\") pod \"calico-node-h9nw2\" (UID: \"1eaa5caa-35cf-4690-91e6-ff359e02b91f\") " pod="calico-system/calico-node-h9nw2" Feb 13 19:34:08.777159 kubelet[1803]: I0213 19:34:08.777035 1803 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/eabcb63e-3a6f-48e9-b953-604013f3f97d-kubelet-dir\") pod \"csi-node-driver-6zx89\" (UID: \"eabcb63e-3a6f-48e9-b953-604013f3f97d\") " pod="calico-system/csi-node-driver-6zx89" Feb 13 19:34:08.788596 systemd[1]: Created slice kubepods-besteffort-podb3afcf46_f070_4110_b9f8_6ae306b1afd0.slice - libcontainer container kubepods-besteffort-podb3afcf46_f070_4110_b9f8_6ae306b1afd0.slice. Feb 13 19:34:08.878853 kubelet[1803]: E0213 19:34:08.878593 1803 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 19:34:08.878853 kubelet[1803]: W0213 19:34:08.878630 1803 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 19:34:08.878853 kubelet[1803]: E0213 19:34:08.878683 1803 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 19:34:08.879161 kubelet[1803]: E0213 19:34:08.879146 1803 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 19:34:08.879198 kubelet[1803]: W0213 19:34:08.879167 1803 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 19:34:08.879230 kubelet[1803]: E0213 19:34:08.879196 1803 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input"
[init], error: executable file not found in $PATH, output: "" Feb 13 19:34:08.889826 kubelet[1803]: E0213 19:34:08.889531 1803 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 19:34:08.889826 kubelet[1803]: E0213 19:34:08.889561 1803 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 19:34:08.889826 kubelet[1803]: E0213 19:34:08.889574 1803 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 19:34:08.891553 kubelet[1803]: E0213 19:34:08.891500 1803 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 19:34:08.892135 kubelet[1803]: E0213 19:34:08.892041 1803 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 19:34:08.892135 kubelet[1803]: W0213 19:34:08.892060 1803 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 19:34:08.892135 kubelet[1803]: E0213 19:34:08.892074 1803 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 13 19:34:08.892837 kubelet[1803]: E0213 19:34:08.892290 1803 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 19:34:08.892837 kubelet[1803]: W0213 19:34:08.892303 1803 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 19:34:08.892837 kubelet[1803]: E0213 19:34:08.892312 1803 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 19:34:08.892837 kubelet[1803]: E0213 19:34:08.892340 1803 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 19:34:08.892837 kubelet[1803]: E0213 19:34:08.892579 1803 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 19:34:08.892837 kubelet[1803]: W0213 19:34:08.892593 1803 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 19:34:08.892837 kubelet[1803]: E0213 19:34:08.892604 1803 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 19:34:08.892837 kubelet[1803]: E0213 19:34:08.892803 1803 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 13 19:34:08.892837 kubelet[1803]: E0213 19:34:08.892817 1803 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 19:34:08.894449 kubelet[1803]: E0213 19:34:08.893515 1803 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 19:34:08.894449 kubelet[1803]: E0213 19:34:08.893552 1803 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 19:34:08.894449 kubelet[1803]: E0213 19:34:08.893933 1803 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 19:34:08.894449 kubelet[1803]: W0213 19:34:08.893951 1803 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 19:34:08.894449 kubelet[1803]: E0213 19:34:08.894030 1803 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 13 19:34:08.894449 kubelet[1803]: E0213 19:34:08.894340 1803 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 19:34:08.894449 kubelet[1803]: W0213 19:34:08.894351 1803 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 19:34:08.894449 kubelet[1803]: E0213 19:34:08.894365 1803 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 19:34:08.894702 kubelet[1803]: E0213 19:34:08.894617 1803 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 19:34:08.894702 kubelet[1803]: W0213 19:34:08.894629 1803 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 19:34:08.894702 kubelet[1803]: E0213 19:34:08.894642 1803 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 13 19:34:08.895030 kubelet[1803]: E0213 19:34:08.894928 1803 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 19:34:08.895030 kubelet[1803]: W0213 19:34:08.894946 1803 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 19:34:08.895030 kubelet[1803]: E0213 19:34:08.894958 1803 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 19:34:08.895347 kubelet[1803]: E0213 19:34:08.895309 1803 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 19:34:08.895347 kubelet[1803]: W0213 19:34:08.895338 1803 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 19:34:08.895550 kubelet[1803]: E0213 19:34:08.895367 1803 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 13 19:34:09.084746 kubelet[1803]: E0213 19:34:09.084526 1803 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:34:09.086946 containerd[1496]: time="2025-02-13T19:34:09.086846725Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-h9nw2,Uid:1eaa5caa-35cf-4690-91e6-ff359e02b91f,Namespace:calico-system,Attempt:0,}" Feb 13 19:34:09.091655 kubelet[1803]: E0213 19:34:09.091320 1803 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:34:09.092519 containerd[1496]: time="2025-02-13T19:34:09.092448541Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-t4qtb,Uid:b3afcf46-f070-4110-b9f8-6ae306b1afd0,Namespace:kube-system,Attempt:0,}" Feb 13 19:34:09.759834 kubelet[1803]: E0213 19:34:09.759769 1803 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 19:34:10.462927 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4051552604.mount: Deactivated successfully. 
Feb 13 19:34:10.473734 containerd[1496]: time="2025-02-13T19:34:10.473645268Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Feb 13 19:34:10.475514 containerd[1496]: time="2025-02-13T19:34:10.475471933Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=312056" Feb 13 19:34:10.476510 containerd[1496]: time="2025-02-13T19:34:10.476431112Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Feb 13 19:34:10.477568 containerd[1496]: time="2025-02-13T19:34:10.477513251Z" level=info msg="ImageCreate event name:\"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Feb 13 19:34:10.478640 containerd[1496]: time="2025-02-13T19:34:10.478588668Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Feb 13 19:34:10.480432 containerd[1496]: time="2025-02-13T19:34:10.480393954Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Feb 13 19:34:10.482295 containerd[1496]: time="2025-02-13T19:34:10.482250556Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 1.389644549s" Feb 13 19:34:10.483655 containerd[1496]: 
time="2025-02-13T19:34:10.483606518Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 1.396051214s" Feb 13 19:34:10.655152 containerd[1496]: time="2025-02-13T19:34:10.655022094Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 13 19:34:10.655883 containerd[1496]: time="2025-02-13T19:34:10.655116160Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 13 19:34:10.655883 containerd[1496]: time="2025-02-13T19:34:10.655132691Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 19:34:10.655883 containerd[1496]: time="2025-02-13T19:34:10.655237207Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 19:34:10.655883 containerd[1496]: time="2025-02-13T19:34:10.655595419Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 13 19:34:10.655883 containerd[1496]: time="2025-02-13T19:34:10.655717388Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 13 19:34:10.655883 containerd[1496]: time="2025-02-13T19:34:10.655737465Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 19:34:10.655883 containerd[1496]: time="2025-02-13T19:34:10.655837112Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 19:34:10.760314 kubelet[1803]: E0213 19:34:10.760137 1803 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 19:34:10.853818 kubelet[1803]: E0213 19:34:10.853711 1803 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-6zx89" podUID="eabcb63e-3a6f-48e9-b953-604013f3f97d" Feb 13 19:34:10.897375 systemd[1]: Started cri-containerd-6dd2145c463e84e4871ee26f2443ff1d65501c6ec43c231fd0773b73d7112849.scope - libcontainer container 6dd2145c463e84e4871ee26f2443ff1d65501c6ec43c231fd0773b73d7112849. Feb 13 19:34:10.905328 systemd[1]: Started cri-containerd-4737e3e7ddd9287180403dd9ffbb9bec299d4c3f49f5cf5fe76acec23a0edec8.scope - libcontainer container 4737e3e7ddd9287180403dd9ffbb9bec299d4c3f49f5cf5fe76acec23a0edec8. 
Feb 13 19:34:10.939449 containerd[1496]: time="2025-02-13T19:34:10.939226038Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-t4qtb,Uid:b3afcf46-f070-4110-b9f8-6ae306b1afd0,Namespace:kube-system,Attempt:0,} returns sandbox id \"4737e3e7ddd9287180403dd9ffbb9bec299d4c3f49f5cf5fe76acec23a0edec8\"" Feb 13 19:34:10.941486 kubelet[1803]: E0213 19:34:10.941390 1803 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:34:10.942527 containerd[1496]: time="2025-02-13T19:34:10.942416321Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-h9nw2,Uid:1eaa5caa-35cf-4690-91e6-ff359e02b91f,Namespace:calico-system,Attempt:0,} returns sandbox id \"6dd2145c463e84e4871ee26f2443ff1d65501c6ec43c231fd0773b73d7112849\"" Feb 13 19:34:10.943259 containerd[1496]: time="2025-02-13T19:34:10.943238553Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.2\"" Feb 13 19:34:10.943774 kubelet[1803]: E0213 19:34:10.943749 1803 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:34:11.761148 kubelet[1803]: E0213 19:34:11.761059 1803 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 19:34:12.636189 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3691652389.mount: Deactivated successfully. 
Feb 13 19:34:12.761462 kubelet[1803]: E0213 19:34:12.761400 1803 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 19:34:12.854291 kubelet[1803]: E0213 19:34:12.854186 1803 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-6zx89" podUID="eabcb63e-3a6f-48e9-b953-604013f3f97d" Feb 13 19:34:13.627375 containerd[1496]: time="2025-02-13T19:34:13.627281822Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.32.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:34:13.640737 containerd[1496]: time="2025-02-13T19:34:13.640609867Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.32.2: active requests=0, bytes read=30908839" Feb 13 19:34:13.665188 containerd[1496]: time="2025-02-13T19:34:13.665099784Z" level=info msg="ImageCreate event name:\"sha256:f1332858868e1c6a905123b21e2e322ab45a5b99a3532e68ff49a87c2266ebc5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:34:13.686692 containerd[1496]: time="2025-02-13T19:34:13.686622146Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:83c025f0faa6799fab6645102a98138e39a9a7db2be3bc792c79d72659b1805d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:34:13.687531 containerd[1496]: time="2025-02-13T19:34:13.687500543Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.32.2\" with image id \"sha256:f1332858868e1c6a905123b21e2e322ab45a5b99a3532e68ff49a87c2266ebc5\", repo tag \"registry.k8s.io/kube-proxy:v1.32.2\", repo digest \"registry.k8s.io/kube-proxy@sha256:83c025f0faa6799fab6645102a98138e39a9a7db2be3bc792c79d72659b1805d\", size \"30907858\" in 2.744154219s" Feb 13 19:34:13.687605 containerd[1496]: 
time="2025-02-13T19:34:13.687536431Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.2\" returns image reference \"sha256:f1332858868e1c6a905123b21e2e322ab45a5b99a3532e68ff49a87c2266ebc5\"" Feb 13 19:34:13.688748 containerd[1496]: time="2025-02-13T19:34:13.688607229Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\"" Feb 13 19:34:13.689900 containerd[1496]: time="2025-02-13T19:34:13.689862293Z" level=info msg="CreateContainer within sandbox \"4737e3e7ddd9287180403dd9ffbb9bec299d4c3f49f5cf5fe76acec23a0edec8\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Feb 13 19:34:13.762453 kubelet[1803]: E0213 19:34:13.762362 1803 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 19:34:13.889667 containerd[1496]: time="2025-02-13T19:34:13.889534038Z" level=info msg="CreateContainer within sandbox \"4737e3e7ddd9287180403dd9ffbb9bec299d4c3f49f5cf5fe76acec23a0edec8\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"95fdc32bd6143de0ae34f05aef68220d3a37f624912bbd73430d947b09a6fa84\"" Feb 13 19:34:13.890416 containerd[1496]: time="2025-02-13T19:34:13.890377911Z" level=info msg="StartContainer for \"95fdc32bd6143de0ae34f05aef68220d3a37f624912bbd73430d947b09a6fa84\"" Feb 13 19:34:13.940210 systemd[1]: Started cri-containerd-95fdc32bd6143de0ae34f05aef68220d3a37f624912bbd73430d947b09a6fa84.scope - libcontainer container 95fdc32bd6143de0ae34f05aef68220d3a37f624912bbd73430d947b09a6fa84. 
Feb 13 19:34:13.999742 containerd[1496]: time="2025-02-13T19:34:13.999676084Z" level=info msg="StartContainer for \"95fdc32bd6143de0ae34f05aef68220d3a37f624912bbd73430d947b09a6fa84\" returns successfully" Feb 13 19:34:14.763093 kubelet[1803]: E0213 19:34:14.763010 1803 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 19:34:14.853801 kubelet[1803]: E0213 19:34:14.853694 1803 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-6zx89" podUID="eabcb63e-3a6f-48e9-b953-604013f3f97d" Feb 13 19:34:14.869427 kubelet[1803]: E0213 19:34:14.869388 1803 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:34:14.909246 kubelet[1803]: E0213 19:34:14.909189 1803 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 19:34:14.909246 kubelet[1803]: W0213 19:34:14.909219 1803 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 19:34:14.909246 kubelet[1803]: E0213 19:34:14.909249 1803 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 13 19:34:14.909676 kubelet[1803]: E0213 19:34:14.909613 1803 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 19:34:14.909676 kubelet[1803]: W0213 19:34:14.909653 1803 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 19:34:14.909766 kubelet[1803]: E0213 19:34:14.909690 1803 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 19:34:14.910048 kubelet[1803]: E0213 19:34:14.910029 1803 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 19:34:14.910048 kubelet[1803]: W0213 19:34:14.910044 1803 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 19:34:14.910181 kubelet[1803]: E0213 19:34:14.910056 1803 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 13 19:34:14.910382 kubelet[1803]: E0213 19:34:14.910344 1803 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 19:34:14.910382 kubelet[1803]: W0213 19:34:14.910358 1803 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 19:34:14.910382 kubelet[1803]: E0213 19:34:14.910369 1803 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 19:34:14.910641 kubelet[1803]: E0213 19:34:14.910607 1803 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 19:34:14.910641 kubelet[1803]: W0213 19:34:14.910622 1803 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 19:34:14.910641 kubelet[1803]: E0213 19:34:14.910632 1803 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 13 19:34:14.910914 kubelet[1803]: E0213 19:34:14.910881 1803 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 19:34:14.910914 kubelet[1803]: W0213 19:34:14.910900 1803 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 19:34:14.910914 kubelet[1803]: E0213 19:34:14.910916 1803 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 19:34:14.911236 kubelet[1803]: E0213 19:34:14.911217 1803 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 19:34:14.911236 kubelet[1803]: W0213 19:34:14.911231 1803 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 19:34:14.911305 kubelet[1803]: E0213 19:34:14.911241 1803 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 13 19:34:14.911495 kubelet[1803]: E0213 19:34:14.911470 1803 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 19:34:14.911495 kubelet[1803]: W0213 19:34:14.911483 1803 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 19:34:14.911495 kubelet[1803]: E0213 19:34:14.911491 1803 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 19:34:14.911766 kubelet[1803]: E0213 19:34:14.911748 1803 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 19:34:14.911766 kubelet[1803]: W0213 19:34:14.911760 1803 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 19:34:14.911870 kubelet[1803]: E0213 19:34:14.911786 1803 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 13 19:34:14.912117 kubelet[1803]: E0213 19:34:14.912094 1803 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 19:34:14.912117 kubelet[1803]: W0213 19:34:14.912110 1803 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 19:34:14.912202 kubelet[1803]: E0213 19:34:14.912121 1803 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 19:34:14.912411 kubelet[1803]: E0213 19:34:14.912391 1803 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 19:34:14.912411 kubelet[1803]: W0213 19:34:14.912407 1803 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 19:34:14.912490 kubelet[1803]: E0213 19:34:14.912419 1803 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 13 19:34:14.912733 kubelet[1803]: E0213 19:34:14.912706 1803 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 19:34:14.912733 kubelet[1803]: W0213 19:34:14.912723 1803 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 19:34:14.912733 kubelet[1803]: E0213 19:34:14.912734 1803 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 19:34:14.913042 kubelet[1803]: E0213 19:34:14.913018 1803 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 19:34:14.913042 kubelet[1803]: W0213 19:34:14.913036 1803 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 19:34:14.913132 kubelet[1803]: E0213 19:34:14.913049 1803 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 13 19:34:14.913317 kubelet[1803]: E0213 19:34:14.913300 1803 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 19:34:14.913317 kubelet[1803]: W0213 19:34:14.913312 1803 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 19:34:14.913379 kubelet[1803]: E0213 19:34:14.913322 1803 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 19:34:14.913578 kubelet[1803]: E0213 19:34:14.913558 1803 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 19:34:14.913578 kubelet[1803]: W0213 19:34:14.913574 1803 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 19:34:14.913651 kubelet[1803]: E0213 19:34:14.913586 1803 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 13 19:34:14.913862 kubelet[1803]: E0213 19:34:14.913841 1803 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 19:34:14.913862 kubelet[1803]: W0213 19:34:14.913856 1803 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 19:34:14.913932 kubelet[1803]: E0213 19:34:14.913868 1803 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 19:34:14.914158 kubelet[1803]: E0213 19:34:14.914137 1803 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 19:34:14.914158 kubelet[1803]: W0213 19:34:14.914152 1803 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 19:34:14.914248 kubelet[1803]: E0213 19:34:14.914163 1803 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 13 19:34:14.914438 kubelet[1803]: E0213 19:34:14.914421 1803 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 19:34:14.914438 kubelet[1803]: W0213 19:34:14.914434 1803 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 19:34:14.914501 kubelet[1803]: E0213 19:34:14.914443 1803 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 19:34:14.914669 kubelet[1803]: E0213 19:34:14.914652 1803 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 19:34:14.914669 kubelet[1803]: W0213 19:34:14.914662 1803 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 19:34:14.914669 kubelet[1803]: E0213 19:34:14.914670 1803 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 13 19:34:14.914903 kubelet[1803]: E0213 19:34:14.914883 1803 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 19:34:14.914903 kubelet[1803]: W0213 19:34:14.914897 1803 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 19:34:14.915014 kubelet[1803]: E0213 19:34:14.914909 1803 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 19:34:14.924345 kubelet[1803]: E0213 19:34:14.924296 1803 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 19:34:14.924345 kubelet[1803]: W0213 19:34:14.924324 1803 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 19:34:14.924345 kubelet[1803]: E0213 19:34:14.924347 1803 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 13 19:34:14.924687 kubelet[1803]: E0213 19:34:14.924659 1803 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 19:34:14.924687 kubelet[1803]: W0213 19:34:14.924682 1803 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 19:34:14.924765 kubelet[1803]: E0213 19:34:14.924710 1803 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 19:34:14.925135 kubelet[1803]: E0213 19:34:14.925113 1803 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 19:34:14.925135 kubelet[1803]: W0213 19:34:14.925127 1803 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 19:34:14.925202 kubelet[1803]: E0213 19:34:14.925143 1803 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 13 19:34:14.925415 kubelet[1803]: E0213 19:34:14.925373 1803 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 19:34:14.925415 kubelet[1803]: W0213 19:34:14.925393 1803 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 19:34:14.925415 kubelet[1803]: E0213 19:34:14.925410 1803 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 19:34:14.925682 kubelet[1803]: E0213 19:34:14.925641 1803 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 19:34:14.925682 kubelet[1803]: W0213 19:34:14.925651 1803 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 19:34:14.925682 kubelet[1803]: E0213 19:34:14.925664 1803 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 13 19:34:14.925923 kubelet[1803]: E0213 19:34:14.925887 1803 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 19:34:14.925923 kubelet[1803]: W0213 19:34:14.925909 1803 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 19:34:14.925923 kubelet[1803]: E0213 19:34:14.925929 1803 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 19:34:14.926342 kubelet[1803]: E0213 19:34:14.926291 1803 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 19:34:14.926342 kubelet[1803]: W0213 19:34:14.926305 1803 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 19:34:14.926342 kubelet[1803]: E0213 19:34:14.926323 1803 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 13 19:34:14.926707 kubelet[1803]: E0213 19:34:14.926664 1803 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 19:34:14.926707 kubelet[1803]: W0213 19:34:14.926696 1803 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 19:34:14.926785 kubelet[1803]: E0213 19:34:14.926715 1803 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 19:34:14.926961 kubelet[1803]: E0213 19:34:14.926940 1803 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 19:34:14.926961 kubelet[1803]: W0213 19:34:14.926955 1803 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 19:34:14.927068 kubelet[1803]: E0213 19:34:14.927018 1803 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 13 19:34:14.927314 kubelet[1803]: E0213 19:34:14.927296 1803 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 19:34:14.927314 kubelet[1803]: W0213 19:34:14.927311 1803 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 19:34:14.927398 kubelet[1803]: E0213 19:34:14.927330 1803 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 19:34:14.927578 kubelet[1803]: E0213 19:34:14.927559 1803 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 19:34:14.927578 kubelet[1803]: W0213 19:34:14.927576 1803 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 19:34:14.927632 kubelet[1803]: E0213 19:34:14.927590 1803 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 13 19:34:14.927923 kubelet[1803]: E0213 19:34:14.927904 1803 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 19:34:14.927923 kubelet[1803]: W0213 19:34:14.927921 1803 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 19:34:14.927996 kubelet[1803]: E0213 19:34:14.927934 1803 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 19:34:15.118607 kubelet[1803]: I0213 19:34:15.118410 1803 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-t4qtb" podStartSLOduration=4.372637104 podStartE2EDuration="7.118384942s" podCreationTimestamp="2025-02-13 19:34:08 +0000 UTC" firstStartedPulling="2025-02-13 19:34:10.942649758 +0000 UTC m=+3.576139709" lastFinishedPulling="2025-02-13 19:34:13.688397596 +0000 UTC m=+6.321887547" observedRunningTime="2025-02-13 19:34:15.118112001 +0000 UTC m=+7.751601952" watchObservedRunningTime="2025-02-13 19:34:15.118384942 +0000 UTC m=+7.751874893" Feb 13 19:34:15.763541 kubelet[1803]: E0213 19:34:15.763475 1803 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 19:34:15.870559 kubelet[1803]: E0213 19:34:15.870524 1803 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:34:15.883442 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3502618174.mount: Deactivated successfully. 
Feb 13 19:34:15.924244 kubelet[1803]: E0213 19:34:15.924197 1803 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 19:34:15.924244 kubelet[1803]: W0213 19:34:15.924230 1803 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 19:34:15.924244 kubelet[1803]: E0213 19:34:15.924256 1803 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 19:34:15.924679 kubelet[1803]: E0213 19:34:15.924637 1803 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 19:34:15.924679 kubelet[1803]: W0213 19:34:15.924668 1803 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 19:34:15.924850 kubelet[1803]: E0213 19:34:15.924698 1803 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 19:34:15.925082 kubelet[1803]: E0213 19:34:15.924977 1803 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 19:34:15.925082 kubelet[1803]: W0213 19:34:15.924991 1803 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 19:34:15.925082 kubelet[1803]: E0213 19:34:15.925004 1803 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 13 19:34:15.925285 kubelet[1803]: E0213 19:34:15.925263 1803 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 19:34:15.925285 kubelet[1803]: W0213 19:34:15.925276 1803 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 19:34:15.925285 kubelet[1803]: E0213 19:34:15.925287 1803 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 19:34:15.925499 kubelet[1803]: E0213 19:34:15.925493 1803 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 19:34:15.925536 kubelet[1803]: W0213 19:34:15.925501 1803 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 19:34:15.925536 kubelet[1803]: E0213 19:34:15.925510 1803 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 13 19:34:15.925759 kubelet[1803]: E0213 19:34:15.925738 1803 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 19:34:15.925759 kubelet[1803]: W0213 19:34:15.925754 1803 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 19:34:15.925932 kubelet[1803]: E0213 19:34:15.925764 1803 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 19:34:15.926027 kubelet[1803]: E0213 19:34:15.926001 1803 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 19:34:15.926027 kubelet[1803]: W0213 19:34:15.926019 1803 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 19:34:15.926027 kubelet[1803]: E0213 19:34:15.926029 1803 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 13 19:34:15.926258 kubelet[1803]: E0213 19:34:15.926243 1803 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 19:34:15.926258 kubelet[1803]: W0213 19:34:15.926252 1803 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 19:34:15.926556 kubelet[1803]: E0213 19:34:15.926261 1803 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 19:34:15.926556 kubelet[1803]: E0213 19:34:15.926477 1803 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 19:34:15.926556 kubelet[1803]: W0213 19:34:15.926485 1803 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 19:34:15.926556 kubelet[1803]: E0213 19:34:15.926493 1803 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 13 19:34:15.926884 kubelet[1803]: E0213 19:34:15.926669 1803 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 19:34:15.926884 kubelet[1803]: W0213 19:34:15.926677 1803 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 19:34:15.926884 kubelet[1803]: E0213 19:34:15.926685 1803 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 19:34:15.926884 kubelet[1803]: E0213 19:34:15.926863 1803 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 19:34:15.926884 kubelet[1803]: W0213 19:34:15.926871 1803 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 19:34:15.926884 kubelet[1803]: E0213 19:34:15.926881 1803 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 13 19:34:15.927111 kubelet[1803]: E0213 19:34:15.927087 1803 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 19:34:15.927111 kubelet[1803]: W0213 19:34:15.927096 1803 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 19:34:15.927111 kubelet[1803]: E0213 19:34:15.927106 1803 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 19:34:15.927312 kubelet[1803]: E0213 19:34:15.927295 1803 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 19:34:15.927312 kubelet[1803]: W0213 19:34:15.927307 1803 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 19:34:15.927380 kubelet[1803]: E0213 19:34:15.927317 1803 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 13 19:34:15.927508 kubelet[1803]: E0213 19:34:15.927493 1803 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 19:34:15.927508 kubelet[1803]: W0213 19:34:15.927504 1803 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 19:34:15.927569 kubelet[1803]: E0213 19:34:15.927512 1803 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 19:34:15.927717 kubelet[1803]: E0213 19:34:15.927700 1803 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 19:34:15.927717 kubelet[1803]: W0213 19:34:15.927715 1803 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 19:34:15.927793 kubelet[1803]: E0213 19:34:15.927727 1803 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 13 19:34:15.928283 kubelet[1803]: E0213 19:34:15.928073 1803 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 19:34:15.928283 kubelet[1803]: W0213 19:34:15.928104 1803 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 19:34:15.928283 kubelet[1803]: E0213 19:34:15.928133 1803 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 19:34:15.928790 kubelet[1803]: E0213 19:34:15.928770 1803 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 19:34:15.928790 kubelet[1803]: W0213 19:34:15.928785 1803 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 19:34:15.928790 kubelet[1803]: E0213 19:34:15.928795 1803 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 13 19:34:15.929070 kubelet[1803]: E0213 19:34:15.929051 1803 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 19:34:15.929070 kubelet[1803]: W0213 19:34:15.929068 1803 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 19:34:15.929149 kubelet[1803]: E0213 19:34:15.929082 1803 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 19:34:15.929310 kubelet[1803]: E0213 19:34:15.929297 1803 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 19:34:15.929310 kubelet[1803]: W0213 19:34:15.929309 1803 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 19:34:15.929371 kubelet[1803]: E0213 19:34:15.929319 1803 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 13 19:34:15.929554 kubelet[1803]: E0213 19:34:15.929540 1803 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 19:34:15.929694 kubelet[1803]: W0213 19:34:15.929603 1803 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 19:34:15.929694 kubelet[1803]: E0213 19:34:15.929615 1803 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 19:34:15.930958 kubelet[1803]: E0213 19:34:15.930891 1803 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 19:34:15.930958 kubelet[1803]: W0213 19:34:15.930919 1803 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 19:34:15.930958 kubelet[1803]: E0213 19:34:15.930930 1803 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 13 19:34:15.931254 kubelet[1803]: E0213 19:34:15.931236 1803 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 19:34:15.931254 kubelet[1803]: W0213 19:34:15.931251 1803 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 19:34:15.931329 kubelet[1803]: E0213 19:34:15.931269 1803 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 19:34:15.931561 kubelet[1803]: E0213 19:34:15.931545 1803 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 19:34:15.931561 kubelet[1803]: W0213 19:34:15.931556 1803 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 19:34:15.931628 kubelet[1803]: E0213 19:34:15.931573 1803 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 13 19:34:15.931802 kubelet[1803]: E0213 19:34:15.931781 1803 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 19:34:15.931802 kubelet[1803]: W0213 19:34:15.931794 1803 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 19:34:15.931910 kubelet[1803]: E0213 19:34:15.931809 1803 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 19:34:15.932099 kubelet[1803]: E0213 19:34:15.932083 1803 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 19:34:15.932099 kubelet[1803]: W0213 19:34:15.932095 1803 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 19:34:15.932169 kubelet[1803]: E0213 19:34:15.932112 1803 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 13 19:34:15.932384 kubelet[1803]: E0213 19:34:15.932369 1803 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 19:34:15.932384 kubelet[1803]: W0213 19:34:15.932381 1803 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 19:34:15.932894 kubelet[1803]: E0213 19:34:15.932457 1803 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 19:34:15.932894 kubelet[1803]: E0213 19:34:15.932680 1803 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 19:34:15.932894 kubelet[1803]: W0213 19:34:15.932693 1803 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 19:34:15.932894 kubelet[1803]: E0213 19:34:15.932704 1803 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 13 19:34:15.933016 kubelet[1803]: E0213 19:34:15.932977 1803 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 19:34:15.933016 kubelet[1803]: W0213 19:34:15.932988 1803 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 19:34:15.933016 kubelet[1803]: E0213 19:34:15.933006 1803 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 19:34:15.933291 kubelet[1803]: E0213 19:34:15.933273 1803 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 19:34:15.933452 kubelet[1803]: W0213 19:34:15.933339 1803 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 19:34:15.933452 kubelet[1803]: E0213 19:34:15.933361 1803 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 13 19:34:15.933607 kubelet[1803]: E0213 19:34:15.933594 1803 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 19:34:15.933742 kubelet[1803]: W0213 19:34:15.933651 1803 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 19:34:15.933742 kubelet[1803]: E0213 19:34:15.933665 1803 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 19:34:15.934007 kubelet[1803]: E0213 19:34:15.933994 1803 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 19:34:15.934081 kubelet[1803]: W0213 19:34:15.934068 1803 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 19:34:15.934433 kubelet[1803]: E0213 19:34:15.934122 1803 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 13 19:34:15.935263 kubelet[1803]: E0213 19:34:15.935241 1803 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 19:34:15.935263 kubelet[1803]: W0213 19:34:15.935259 1803 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 19:34:15.935344 kubelet[1803]: E0213 19:34:15.935271 1803 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 19:34:15.988060 containerd[1496]: time="2025-02-13T19:34:15.987983761Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:34:15.988758 containerd[1496]: time="2025-02-13T19:34:15.988710153Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1: active requests=0, bytes read=6855343" Feb 13 19:34:15.990627 containerd[1496]: time="2025-02-13T19:34:15.990581262Z" level=info msg="ImageCreate event name:\"sha256:2b7452b763ec8833ca0386ada5fd066e552a9b3b02b8538a5e34cc3d6d3840a6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:34:15.992846 containerd[1496]: time="2025-02-13T19:34:15.992807276Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:a63f8b4ff531912d12d143664eb263fdbc6cd7b3ff4aa777dfb6e318a090462c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:34:15.993522 containerd[1496]: time="2025-02-13T19:34:15.993455642Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\" with image id \"sha256:2b7452b763ec8833ca0386ada5fd066e552a9b3b02b8538a5e34cc3d6d3840a6\", repo tag 
\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:a63f8b4ff531912d12d143664eb263fdbc6cd7b3ff4aa777dfb6e318a090462c\", size \"6855165\" in 2.304815001s" Feb 13 19:34:15.993522 containerd[1496]: time="2025-02-13T19:34:15.993505526Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\" returns image reference \"sha256:2b7452b763ec8833ca0386ada5fd066e552a9b3b02b8538a5e34cc3d6d3840a6\"" Feb 13 19:34:15.995551 containerd[1496]: time="2025-02-13T19:34:15.995519853Z" level=info msg="CreateContainer within sandbox \"6dd2145c463e84e4871ee26f2443ff1d65501c6ec43c231fd0773b73d7112849\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}" Feb 13 19:34:16.014035 containerd[1496]: time="2025-02-13T19:34:16.013882590Z" level=info msg="CreateContainer within sandbox \"6dd2145c463e84e4871ee26f2443ff1d65501c6ec43c231fd0773b73d7112849\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"a855c75f089d5d7b93fb09169d3423cccc5d854a84753a5019c520b582014599\"" Feb 13 19:34:16.014825 containerd[1496]: time="2025-02-13T19:34:16.014732053Z" level=info msg="StartContainer for \"a855c75f089d5d7b93fb09169d3423cccc5d854a84753a5019c520b582014599\"" Feb 13 19:34:16.070184 systemd[1]: Started cri-containerd-a855c75f089d5d7b93fb09169d3423cccc5d854a84753a5019c520b582014599.scope - libcontainer container a855c75f089d5d7b93fb09169d3423cccc5d854a84753a5019c520b582014599. Feb 13 19:34:16.132278 containerd[1496]: time="2025-02-13T19:34:16.132202322Z" level=info msg="StartContainer for \"a855c75f089d5d7b93fb09169d3423cccc5d854a84753a5019c520b582014599\" returns successfully" Feb 13 19:34:16.149066 systemd[1]: cri-containerd-a855c75f089d5d7b93fb09169d3423cccc5d854a84753a5019c520b582014599.scope: Deactivated successfully. 
Feb 13 19:34:16.575836 containerd[1496]: time="2025-02-13T19:34:16.575707542Z" level=info msg="shim disconnected" id=a855c75f089d5d7b93fb09169d3423cccc5d854a84753a5019c520b582014599 namespace=k8s.io Feb 13 19:34:16.575836 containerd[1496]: time="2025-02-13T19:34:16.575828279Z" level=warning msg="cleaning up after shim disconnected" id=a855c75f089d5d7b93fb09169d3423cccc5d854a84753a5019c520b582014599 namespace=k8s.io Feb 13 19:34:16.575836 containerd[1496]: time="2025-02-13T19:34:16.575843778Z" level=info msg="cleaning up dead shim" namespace=k8s.io Feb 13 19:34:16.764564 kubelet[1803]: E0213 19:34:16.764487 1803 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 19:34:16.853783 kubelet[1803]: E0213 19:34:16.853580 1803 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-6zx89" podUID="eabcb63e-3a6f-48e9-b953-604013f3f97d" Feb 13 19:34:16.861185 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-a855c75f089d5d7b93fb09169d3423cccc5d854a84753a5019c520b582014599-rootfs.mount: Deactivated successfully. 
Feb 13 19:34:16.873587 kubelet[1803]: E0213 19:34:16.873547 1803 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:34:16.874281 containerd[1496]: time="2025-02-13T19:34:16.874244496Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.29.1\"" Feb 13 19:34:17.765487 kubelet[1803]: E0213 19:34:17.765384 1803 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 19:34:18.766064 kubelet[1803]: E0213 19:34:18.765986 1803 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 19:34:18.854760 kubelet[1803]: E0213 19:34:18.854134 1803 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-6zx89" podUID="eabcb63e-3a6f-48e9-b953-604013f3f97d" Feb 13 19:34:19.766574 kubelet[1803]: E0213 19:34:19.766527 1803 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 19:34:20.767738 kubelet[1803]: E0213 19:34:20.767644 1803 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 19:34:20.853615 kubelet[1803]: E0213 19:34:20.853555 1803 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-6zx89" podUID="eabcb63e-3a6f-48e9-b953-604013f3f97d" Feb 13 19:34:21.360310 containerd[1496]: time="2025-02-13T19:34:21.360143157Z" level=info msg="ImageCreate event 
name:\"ghcr.io/flatcar/calico/cni:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:34:21.361093 containerd[1496]: time="2025-02-13T19:34:21.361023819Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.29.1: active requests=0, bytes read=96154154" Feb 13 19:34:21.362503 containerd[1496]: time="2025-02-13T19:34:21.362449903Z" level=info msg="ImageCreate event name:\"sha256:7dd6ea186aba0d7a1791a79d426fe854527ca95192b26bbd19e8baf8373f7d0e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:34:21.366438 containerd[1496]: time="2025-02-13T19:34:21.366389080Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:21e759d51c90dfb34fc1397dc180dd3a3fb564c2b0580d2f61ffe108f2a3c94b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:34:21.368006 containerd[1496]: time="2025-02-13T19:34:21.367877953Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.29.1\" with image id \"sha256:7dd6ea186aba0d7a1791a79d426fe854527ca95192b26bbd19e8baf8373f7d0e\", repo tag \"ghcr.io/flatcar/calico/cni:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:21e759d51c90dfb34fc1397dc180dd3a3fb564c2b0580d2f61ffe108f2a3c94b\", size \"97647238\" in 4.493570709s" Feb 13 19:34:21.368006 containerd[1496]: time="2025-02-13T19:34:21.367998318Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.29.1\" returns image reference \"sha256:7dd6ea186aba0d7a1791a79d426fe854527ca95192b26bbd19e8baf8373f7d0e\"" Feb 13 19:34:21.370939 containerd[1496]: time="2025-02-13T19:34:21.370900952Z" level=info msg="CreateContainer within sandbox \"6dd2145c463e84e4871ee26f2443ff1d65501c6ec43c231fd0773b73d7112849\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" Feb 13 19:34:21.396189 containerd[1496]: time="2025-02-13T19:34:21.396103335Z" level=info msg="CreateContainer within sandbox \"6dd2145c463e84e4871ee26f2443ff1d65501c6ec43c231fd0773b73d7112849\" for 
&ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"ae096fb390e475ed2e034b9a414034dc7da5ce81eba9652d500ea72d51157ced\"" Feb 13 19:34:21.396836 containerd[1496]: time="2025-02-13T19:34:21.396752703Z" level=info msg="StartContainer for \"ae096fb390e475ed2e034b9a414034dc7da5ce81eba9652d500ea72d51157ced\"" Feb 13 19:34:21.442254 systemd[1]: Started cri-containerd-ae096fb390e475ed2e034b9a414034dc7da5ce81eba9652d500ea72d51157ced.scope - libcontainer container ae096fb390e475ed2e034b9a414034dc7da5ce81eba9652d500ea72d51157ced. Feb 13 19:34:21.489480 containerd[1496]: time="2025-02-13T19:34:21.489357781Z" level=info msg="StartContainer for \"ae096fb390e475ed2e034b9a414034dc7da5ce81eba9652d500ea72d51157ced\" returns successfully" Feb 13 19:34:21.768769 kubelet[1803]: E0213 19:34:21.768580 1803 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 19:34:21.885359 kubelet[1803]: E0213 19:34:21.885315 1803 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:34:22.769641 kubelet[1803]: E0213 19:34:22.769549 1803 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 19:34:22.853955 kubelet[1803]: E0213 19:34:22.853864 1803 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-6zx89" podUID="eabcb63e-3a6f-48e9-b953-604013f3f97d" Feb 13 19:34:22.887759 kubelet[1803]: E0213 19:34:22.887696 1803 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:34:23.509399 
systemd[1]: cri-containerd-ae096fb390e475ed2e034b9a414034dc7da5ce81eba9652d500ea72d51157ced.scope: Deactivated successfully. Feb 13 19:34:23.509814 systemd[1]: cri-containerd-ae096fb390e475ed2e034b9a414034dc7da5ce81eba9652d500ea72d51157ced.scope: Consumed 1.277s CPU time. Feb 13 19:34:23.523162 kubelet[1803]: I0213 19:34:23.523089 1803 kubelet_node_status.go:502] "Fast updating node status as it just became ready" Feb 13 19:34:23.540908 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-ae096fb390e475ed2e034b9a414034dc7da5ce81eba9652d500ea72d51157ced-rootfs.mount: Deactivated successfully. Feb 13 19:34:23.770706 kubelet[1803]: E0213 19:34:23.770516 1803 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 19:34:24.104644 containerd[1496]: time="2025-02-13T19:34:24.104471422Z" level=info msg="shim disconnected" id=ae096fb390e475ed2e034b9a414034dc7da5ce81eba9652d500ea72d51157ced namespace=k8s.io Feb 13 19:34:24.104644 containerd[1496]: time="2025-02-13T19:34:24.104540311Z" level=warning msg="cleaning up after shim disconnected" id=ae096fb390e475ed2e034b9a414034dc7da5ce81eba9652d500ea72d51157ced namespace=k8s.io Feb 13 19:34:24.104644 containerd[1496]: time="2025-02-13T19:34:24.104548958Z" level=info msg="cleaning up dead shim" namespace=k8s.io Feb 13 19:34:24.771804 kubelet[1803]: E0213 19:34:24.771714 1803 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 19:34:24.860251 systemd[1]: Created slice kubepods-besteffort-podeabcb63e_3a6f_48e9_b953_604013f3f97d.slice - libcontainer container kubepods-besteffort-podeabcb63e_3a6f_48e9_b953_604013f3f97d.slice. 
Feb 13 19:34:24.862865 containerd[1496]: time="2025-02-13T19:34:24.862821840Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-6zx89,Uid:eabcb63e-3a6f-48e9-b953-604013f3f97d,Namespace:calico-system,Attempt:0,}" Feb 13 19:34:24.893735 kubelet[1803]: E0213 19:34:24.893685 1803 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:34:24.894680 containerd[1496]: time="2025-02-13T19:34:24.894619602Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.29.1\"" Feb 13 19:34:25.507327 containerd[1496]: time="2025-02-13T19:34:25.507261314Z" level=error msg="Failed to destroy network for sandbox \"a13248a01d5d3c9fc80f814e6f9f5286b4cc830a32ecc6dcc90894db7cbb4737\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:34:25.507942 containerd[1496]: time="2025-02-13T19:34:25.507895343Z" level=error msg="encountered an error cleaning up failed sandbox \"a13248a01d5d3c9fc80f814e6f9f5286b4cc830a32ecc6dcc90894db7cbb4737\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:34:25.508055 containerd[1496]: time="2025-02-13T19:34:25.508006461Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-6zx89,Uid:eabcb63e-3a6f-48e9-b953-604013f3f97d,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"a13248a01d5d3c9fc80f814e6f9f5286b4cc830a32ecc6dcc90894db7cbb4737\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" 
Feb 13 19:34:25.508354 kubelet[1803]: E0213 19:34:25.508307 1803 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a13248a01d5d3c9fc80f814e6f9f5286b4cc830a32ecc6dcc90894db7cbb4737\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:34:25.508443 kubelet[1803]: E0213 19:34:25.508396 1803 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a13248a01d5d3c9fc80f814e6f9f5286b4cc830a32ecc6dcc90894db7cbb4737\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-6zx89" Feb 13 19:34:25.508443 kubelet[1803]: E0213 19:34:25.508428 1803 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a13248a01d5d3c9fc80f814e6f9f5286b4cc830a32ecc6dcc90894db7cbb4737\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-6zx89" Feb 13 19:34:25.508608 kubelet[1803]: E0213 19:34:25.508496 1803 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-6zx89_calico-system(eabcb63e-3a6f-48e9-b953-604013f3f97d)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-6zx89_calico-system(eabcb63e-3a6f-48e9-b953-604013f3f97d)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"a13248a01d5d3c9fc80f814e6f9f5286b4cc830a32ecc6dcc90894db7cbb4737\\\": plugin type=\\\"calico\\\" failed (add): stat 
/var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-6zx89" podUID="eabcb63e-3a6f-48e9-b953-604013f3f97d" Feb 13 19:34:25.509093 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-a13248a01d5d3c9fc80f814e6f9f5286b4cc830a32ecc6dcc90894db7cbb4737-shm.mount: Deactivated successfully. Feb 13 19:34:25.636522 systemd[1]: Created slice kubepods-besteffort-podf53fadde_9cb7_4b42_88db_555315723510.slice - libcontainer container kubepods-besteffort-podf53fadde_9cb7_4b42_88db_555315723510.slice. Feb 13 19:34:25.773119 kubelet[1803]: E0213 19:34:25.772918 1803 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 19:34:25.822269 kubelet[1803]: I0213 19:34:25.822188 1803 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-54fp6\" (UniqueName: \"kubernetes.io/projected/f53fadde-9cb7-4b42-88db-555315723510-kube-api-access-54fp6\") pod \"nginx-deployment-7fcdb87857-vjtzw\" (UID: \"f53fadde-9cb7-4b42-88db-555315723510\") " pod="default/nginx-deployment-7fcdb87857-vjtzw" Feb 13 19:34:25.896179 kubelet[1803]: I0213 19:34:25.896126 1803 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a13248a01d5d3c9fc80f814e6f9f5286b4cc830a32ecc6dcc90894db7cbb4737" Feb 13 19:34:25.896945 containerd[1496]: time="2025-02-13T19:34:25.896904093Z" level=info msg="StopPodSandbox for \"a13248a01d5d3c9fc80f814e6f9f5286b4cc830a32ecc6dcc90894db7cbb4737\"" Feb 13 19:34:25.897258 containerd[1496]: time="2025-02-13T19:34:25.897223152Z" level=info msg="Ensure that sandbox a13248a01d5d3c9fc80f814e6f9f5286b4cc830a32ecc6dcc90894db7cbb4737 in task-service has been cleanup successfully" Feb 13 19:34:25.897526 containerd[1496]: time="2025-02-13T19:34:25.897477358Z" level=info msg="TearDown network for sandbox 
\"a13248a01d5d3c9fc80f814e6f9f5286b4cc830a32ecc6dcc90894db7cbb4737\" successfully" Feb 13 19:34:25.897526 containerd[1496]: time="2025-02-13T19:34:25.897502355Z" level=info msg="StopPodSandbox for \"a13248a01d5d3c9fc80f814e6f9f5286b4cc830a32ecc6dcc90894db7cbb4737\" returns successfully" Feb 13 19:34:25.899267 systemd[1]: run-netns-cni\x2df2d42205\x2d0db7\x2d9392\x2d4255\x2d077e84a05047.mount: Deactivated successfully. Feb 13 19:34:25.899631 containerd[1496]: time="2025-02-13T19:34:25.899436342Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-6zx89,Uid:eabcb63e-3a6f-48e9-b953-604013f3f97d,Namespace:calico-system,Attempt:1,}" Feb 13 19:34:25.939738 containerd[1496]: time="2025-02-13T19:34:25.939675866Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-7fcdb87857-vjtzw,Uid:f53fadde-9cb7-4b42-88db-555315723510,Namespace:default,Attempt:0,}" Feb 13 19:34:26.019741 containerd[1496]: time="2025-02-13T19:34:26.019653252Z" level=error msg="Failed to destroy network for sandbox \"40ce0d3f93ac9bc36831d03be049f80ec3b3ce5288313163d0a969a5767140ca\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:34:26.020226 containerd[1496]: time="2025-02-13T19:34:26.020183546Z" level=error msg="encountered an error cleaning up failed sandbox \"40ce0d3f93ac9bc36831d03be049f80ec3b3ce5288313163d0a969a5767140ca\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:34:26.020302 containerd[1496]: time="2025-02-13T19:34:26.020263336Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-6zx89,Uid:eabcb63e-3a6f-48e9-b953-604013f3f97d,Namespace:calico-system,Attempt:1,} failed, error" 
error="failed to setup network for sandbox \"40ce0d3f93ac9bc36831d03be049f80ec3b3ce5288313163d0a969a5767140ca\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:34:26.020609 kubelet[1803]: E0213 19:34:26.020560 1803 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"40ce0d3f93ac9bc36831d03be049f80ec3b3ce5288313163d0a969a5767140ca\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:34:26.020699 kubelet[1803]: E0213 19:34:26.020640 1803 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"40ce0d3f93ac9bc36831d03be049f80ec3b3ce5288313163d0a969a5767140ca\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-6zx89" Feb 13 19:34:26.020699 kubelet[1803]: E0213 19:34:26.020672 1803 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"40ce0d3f93ac9bc36831d03be049f80ec3b3ce5288313163d0a969a5767140ca\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-6zx89" Feb 13 19:34:26.020792 kubelet[1803]: E0213 19:34:26.020745 1803 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-6zx89_calico-system(eabcb63e-3a6f-48e9-b953-604013f3f97d)\" with CreatePodSandboxError: \"Failed to create 
sandbox for pod \\\"csi-node-driver-6zx89_calico-system(eabcb63e-3a6f-48e9-b953-604013f3f97d)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"40ce0d3f93ac9bc36831d03be049f80ec3b3ce5288313163d0a969a5767140ca\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-6zx89" podUID="eabcb63e-3a6f-48e9-b953-604013f3f97d" Feb 13 19:34:26.024854 containerd[1496]: time="2025-02-13T19:34:26.024699686Z" level=error msg="Failed to destroy network for sandbox \"2d5407477e895d47961fc614874df08a11952f1b5b3699908fcf06831a5ee510\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:34:26.025208 containerd[1496]: time="2025-02-13T19:34:26.025177592Z" level=error msg="encountered an error cleaning up failed sandbox \"2d5407477e895d47961fc614874df08a11952f1b5b3699908fcf06831a5ee510\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:34:26.025290 containerd[1496]: time="2025-02-13T19:34:26.025252713Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-7fcdb87857-vjtzw,Uid:f53fadde-9cb7-4b42-88db-555315723510,Namespace:default,Attempt:0,} failed, error" error="failed to setup network for sandbox \"2d5407477e895d47961fc614874df08a11952f1b5b3699908fcf06831a5ee510\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:34:26.025715 kubelet[1803]: E0213 19:34:26.025673 1803 log.go:32] "RunPodSandbox from runtime 
service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"2d5407477e895d47961fc614874df08a11952f1b5b3699908fcf06831a5ee510\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:34:26.025810 kubelet[1803]: E0213 19:34:26.025722 1803 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"2d5407477e895d47961fc614874df08a11952f1b5b3699908fcf06831a5ee510\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="default/nginx-deployment-7fcdb87857-vjtzw" Feb 13 19:34:26.025810 kubelet[1803]: E0213 19:34:26.025760 1803 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"2d5407477e895d47961fc614874df08a11952f1b5b3699908fcf06831a5ee510\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="default/nginx-deployment-7fcdb87857-vjtzw" Feb 13 19:34:26.025906 kubelet[1803]: E0213 19:34:26.025812 1803 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"nginx-deployment-7fcdb87857-vjtzw_default(f53fadde-9cb7-4b42-88db-555315723510)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"nginx-deployment-7fcdb87857-vjtzw_default(f53fadde-9cb7-4b42-88db-555315723510)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"2d5407477e895d47961fc614874df08a11952f1b5b3699908fcf06831a5ee510\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node 
container is running and has mounted /var/lib/calico/\"" pod="default/nginx-deployment-7fcdb87857-vjtzw" podUID="f53fadde-9cb7-4b42-88db-555315723510" Feb 13 19:34:26.773894 kubelet[1803]: E0213 19:34:26.773833 1803 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 19:34:26.901573 kubelet[1803]: I0213 19:34:26.901504 1803 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="40ce0d3f93ac9bc36831d03be049f80ec3b3ce5288313163d0a969a5767140ca" Feb 13 19:34:26.902575 containerd[1496]: time="2025-02-13T19:34:26.902337259Z" level=info msg="StopPodSandbox for \"40ce0d3f93ac9bc36831d03be049f80ec3b3ce5288313163d0a969a5767140ca\"" Feb 13 19:34:26.904737 containerd[1496]: time="2025-02-13T19:34:26.902596495Z" level=info msg="Ensure that sandbox 40ce0d3f93ac9bc36831d03be049f80ec3b3ce5288313163d0a969a5767140ca in task-service has been cleanup successfully" Feb 13 19:34:26.904278 systemd[1]: run-netns-cni\x2d83d40614\x2d410d\x2da638\x2dc377\x2da27dbf580060.mount: Deactivated successfully. 
Feb 13 19:34:26.905422 kubelet[1803]: I0213 19:34:26.905062 1803 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="2d5407477e895d47961fc614874df08a11952f1b5b3699908fcf06831a5ee510" Feb 13 19:34:26.905622 containerd[1496]: time="2025-02-13T19:34:26.905597453Z" level=info msg="TearDown network for sandbox \"40ce0d3f93ac9bc36831d03be049f80ec3b3ce5288313163d0a969a5767140ca\" successfully" Feb 13 19:34:26.905664 containerd[1496]: time="2025-02-13T19:34:26.905621118Z" level=info msg="StopPodSandbox for \"40ce0d3f93ac9bc36831d03be049f80ec3b3ce5288313163d0a969a5767140ca\" returns successfully" Feb 13 19:34:26.907619 containerd[1496]: time="2025-02-13T19:34:26.907555185Z" level=info msg="StopPodSandbox for \"2d5407477e895d47961fc614874df08a11952f1b5b3699908fcf06831a5ee510\"" Feb 13 19:34:26.908727 containerd[1496]: time="2025-02-13T19:34:26.908685555Z" level=info msg="StopPodSandbox for \"a13248a01d5d3c9fc80f814e6f9f5286b4cc830a32ecc6dcc90894db7cbb4737\"" Feb 13 19:34:26.908810 containerd[1496]: time="2025-02-13T19:34:26.908789189Z" level=info msg="TearDown network for sandbox \"a13248a01d5d3c9fc80f814e6f9f5286b4cc830a32ecc6dcc90894db7cbb4737\" successfully" Feb 13 19:34:26.908810 containerd[1496]: time="2025-02-13T19:34:26.908802554Z" level=info msg="StopPodSandbox for \"a13248a01d5d3c9fc80f814e6f9f5286b4cc830a32ecc6dcc90894db7cbb4737\" returns successfully" Feb 13 19:34:26.909231 containerd[1496]: time="2025-02-13T19:34:26.909204658Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-6zx89,Uid:eabcb63e-3a6f-48e9-b953-604013f3f97d,Namespace:calico-system,Attempt:2,}" Feb 13 19:34:26.911607 containerd[1496]: time="2025-02-13T19:34:26.911568050Z" level=info msg="Ensure that sandbox 2d5407477e895d47961fc614874df08a11952f1b5b3699908fcf06831a5ee510 in task-service has been cleanup successfully" Feb 13 19:34:26.912010 containerd[1496]: time="2025-02-13T19:34:26.911853636Z" level=info msg="TearDown network for sandbox 
\"2d5407477e895d47961fc614874df08a11952f1b5b3699908fcf06831a5ee510\" successfully" Feb 13 19:34:26.912010 containerd[1496]: time="2025-02-13T19:34:26.911879585Z" level=info msg="StopPodSandbox for \"2d5407477e895d47961fc614874df08a11952f1b5b3699908fcf06831a5ee510\" returns successfully" Feb 13 19:34:26.913201 systemd[1]: run-netns-cni\x2d0a5240fe\x2d5c5e\x2d030e\x2dd048\x2dcad1c767039e.mount: Deactivated successfully. Feb 13 19:34:26.914072 containerd[1496]: time="2025-02-13T19:34:26.914042401Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-7fcdb87857-vjtzw,Uid:f53fadde-9cb7-4b42-88db-555315723510,Namespace:default,Attempt:1,}" Feb 13 19:34:27.068047 containerd[1496]: time="2025-02-13T19:34:27.067319169Z" level=error msg="Failed to destroy network for sandbox \"dca75b6439788db7d359786128b7554681c0a80286036cfa38bf74493e3a86e5\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:34:27.085227 containerd[1496]: time="2025-02-13T19:34:27.085136924Z" level=error msg="encountered an error cleaning up failed sandbox \"dca75b6439788db7d359786128b7554681c0a80286036cfa38bf74493e3a86e5\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:34:27.085415 containerd[1496]: time="2025-02-13T19:34:27.085260846Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-6zx89,Uid:eabcb63e-3a6f-48e9-b953-604013f3f97d,Namespace:calico-system,Attempt:2,} failed, error" error="failed to setup network for sandbox \"dca75b6439788db7d359786128b7554681c0a80286036cfa38bf74493e3a86e5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container 
is running and has mounted /var/lib/calico/" Feb 13 19:34:27.085737 kubelet[1803]: E0213 19:34:27.085625 1803 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"dca75b6439788db7d359786128b7554681c0a80286036cfa38bf74493e3a86e5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:34:27.085789 kubelet[1803]: E0213 19:34:27.085734 1803 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"dca75b6439788db7d359786128b7554681c0a80286036cfa38bf74493e3a86e5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-6zx89" Feb 13 19:34:27.085789 kubelet[1803]: E0213 19:34:27.085764 1803 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"dca75b6439788db7d359786128b7554681c0a80286036cfa38bf74493e3a86e5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-6zx89" Feb 13 19:34:27.085930 kubelet[1803]: E0213 19:34:27.085824 1803 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-6zx89_calico-system(eabcb63e-3a6f-48e9-b953-604013f3f97d)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-6zx89_calico-system(eabcb63e-3a6f-48e9-b953-604013f3f97d)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"dca75b6439788db7d359786128b7554681c0a80286036cfa38bf74493e3a86e5\\\": plugin 
type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-6zx89" podUID="eabcb63e-3a6f-48e9-b953-604013f3f97d" Feb 13 19:34:27.089262 containerd[1496]: time="2025-02-13T19:34:27.089203660Z" level=error msg="Failed to destroy network for sandbox \"9cd34e05953edec13a93645610cc390f9412c5dfa27e9150f547738284ab4ce0\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:34:27.089796 containerd[1496]: time="2025-02-13T19:34:27.089761837Z" level=error msg="encountered an error cleaning up failed sandbox \"9cd34e05953edec13a93645610cc390f9412c5dfa27e9150f547738284ab4ce0\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:34:27.089826 containerd[1496]: time="2025-02-13T19:34:27.089809607Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-7fcdb87857-vjtzw,Uid:f53fadde-9cb7-4b42-88db-555315723510,Namespace:default,Attempt:1,} failed, error" error="failed to setup network for sandbox \"9cd34e05953edec13a93645610cc390f9412c5dfa27e9150f547738284ab4ce0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:34:27.090172 kubelet[1803]: E0213 19:34:27.090102 1803 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"9cd34e05953edec13a93645610cc390f9412c5dfa27e9150f547738284ab4ce0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or 
directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:34:27.090172 kubelet[1803]: E0213 19:34:27.090161 1803 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"9cd34e05953edec13a93645610cc390f9412c5dfa27e9150f547738284ab4ce0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="default/nginx-deployment-7fcdb87857-vjtzw" Feb 13 19:34:27.090172 kubelet[1803]: E0213 19:34:27.090182 1803 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"9cd34e05953edec13a93645610cc390f9412c5dfa27e9150f547738284ab4ce0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="default/nginx-deployment-7fcdb87857-vjtzw" Feb 13 19:34:27.090393 kubelet[1803]: E0213 19:34:27.090229 1803 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"nginx-deployment-7fcdb87857-vjtzw_default(f53fadde-9cb7-4b42-88db-555315723510)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"nginx-deployment-7fcdb87857-vjtzw_default(f53fadde-9cb7-4b42-88db-555315723510)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"9cd34e05953edec13a93645610cc390f9412c5dfa27e9150f547738284ab4ce0\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="default/nginx-deployment-7fcdb87857-vjtzw" podUID="f53fadde-9cb7-4b42-88db-555315723510" Feb 13 19:34:27.371173 systemd[1]: 
run-containerd-io.containerd.grpc.v1.cri-sandboxes-dca75b6439788db7d359786128b7554681c0a80286036cfa38bf74493e3a86e5-shm.mount: Deactivated successfully. Feb 13 19:34:27.758599 kubelet[1803]: E0213 19:34:27.758398 1803 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 19:34:27.775089 kubelet[1803]: E0213 19:34:27.775000 1803 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 19:34:27.909573 kubelet[1803]: I0213 19:34:27.909524 1803 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="dca75b6439788db7d359786128b7554681c0a80286036cfa38bf74493e3a86e5" Feb 13 19:34:27.910480 containerd[1496]: time="2025-02-13T19:34:27.910391250Z" level=info msg="StopPodSandbox for \"dca75b6439788db7d359786128b7554681c0a80286036cfa38bf74493e3a86e5\"" Feb 13 19:34:27.911050 containerd[1496]: time="2025-02-13T19:34:27.910763639Z" level=info msg="Ensure that sandbox dca75b6439788db7d359786128b7554681c0a80286036cfa38bf74493e3a86e5 in task-service has been cleanup successfully" Feb 13 19:34:27.913552 kubelet[1803]: I0213 19:34:27.911331 1803 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="9cd34e05953edec13a93645610cc390f9412c5dfa27e9150f547738284ab4ce0" Feb 13 19:34:27.913187 systemd[1]: run-netns-cni\x2d37a6fd62\x2da1f4\x2d6439\x2d2bcc\x2d5d760532b26e.mount: Deactivated successfully. 
Feb 13 19:34:27.914090 containerd[1496]: time="2025-02-13T19:34:27.911748385Z" level=info msg="TearDown network for sandbox \"dca75b6439788db7d359786128b7554681c0a80286036cfa38bf74493e3a86e5\" successfully" Feb 13 19:34:27.914090 containerd[1496]: time="2025-02-13T19:34:27.911776198Z" level=info msg="StopPodSandbox for \"dca75b6439788db7d359786128b7554681c0a80286036cfa38bf74493e3a86e5\" returns successfully" Feb 13 19:34:27.914090 containerd[1496]: time="2025-02-13T19:34:27.912094344Z" level=info msg="StopPodSandbox for \"40ce0d3f93ac9bc36831d03be049f80ec3b3ce5288313163d0a969a5767140ca\"" Feb 13 19:34:27.914090 containerd[1496]: time="2025-02-13T19:34:27.912199632Z" level=info msg="TearDown network for sandbox \"40ce0d3f93ac9bc36831d03be049f80ec3b3ce5288313163d0a969a5767140ca\" successfully" Feb 13 19:34:27.914090 containerd[1496]: time="2025-02-13T19:34:27.912213087Z" level=info msg="StopPodSandbox for \"40ce0d3f93ac9bc36831d03be049f80ec3b3ce5288313163d0a969a5767140ca\" returns successfully" Feb 13 19:34:27.914090 containerd[1496]: time="2025-02-13T19:34:27.912284621Z" level=info msg="StopPodSandbox for \"9cd34e05953edec13a93645610cc390f9412c5dfa27e9150f547738284ab4ce0\"" Feb 13 19:34:27.914090 containerd[1496]: time="2025-02-13T19:34:27.912497220Z" level=info msg="Ensure that sandbox 9cd34e05953edec13a93645610cc390f9412c5dfa27e9150f547738284ab4ce0 in task-service has been cleanup successfully" Feb 13 19:34:27.914090 containerd[1496]: time="2025-02-13T19:34:27.912604721Z" level=info msg="StopPodSandbox for \"a13248a01d5d3c9fc80f814e6f9f5286b4cc830a32ecc6dcc90894db7cbb4737\"" Feb 13 19:34:27.914090 containerd[1496]: time="2025-02-13T19:34:27.912709328Z" level=info msg="TearDown network for sandbox \"9cd34e05953edec13a93645610cc390f9412c5dfa27e9150f547738284ab4ce0\" successfully" Feb 13 19:34:27.914090 containerd[1496]: time="2025-02-13T19:34:27.912722723Z" level=info msg="StopPodSandbox for \"9cd34e05953edec13a93645610cc390f9412c5dfa27e9150f547738284ab4ce0\" 
returns successfully" Feb 13 19:34:27.914090 containerd[1496]: time="2025-02-13T19:34:27.912731179Z" level=info msg="TearDown network for sandbox \"a13248a01d5d3c9fc80f814e6f9f5286b4cc830a32ecc6dcc90894db7cbb4737\" successfully" Feb 13 19:34:27.914090 containerd[1496]: time="2025-02-13T19:34:27.912742299Z" level=info msg="StopPodSandbox for \"a13248a01d5d3c9fc80f814e6f9f5286b4cc830a32ecc6dcc90894db7cbb4737\" returns successfully" Feb 13 19:34:27.914090 containerd[1496]: time="2025-02-13T19:34:27.913144664Z" level=info msg="StopPodSandbox for \"2d5407477e895d47961fc614874df08a11952f1b5b3699908fcf06831a5ee510\"" Feb 13 19:34:27.914090 containerd[1496]: time="2025-02-13T19:34:27.913263216Z" level=info msg="TearDown network for sandbox \"2d5407477e895d47961fc614874df08a11952f1b5b3699908fcf06831a5ee510\" successfully" Feb 13 19:34:27.914090 containerd[1496]: time="2025-02-13T19:34:27.913276411Z" level=info msg="StopPodSandbox for \"2d5407477e895d47961fc614874df08a11952f1b5b3699908fcf06831a5ee510\" returns successfully" Feb 13 19:34:27.914090 containerd[1496]: time="2025-02-13T19:34:27.913385796Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-6zx89,Uid:eabcb63e-3a6f-48e9-b953-604013f3f97d,Namespace:calico-system,Attempt:3,}" Feb 13 19:34:27.914090 containerd[1496]: time="2025-02-13T19:34:27.914026117Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-7fcdb87857-vjtzw,Uid:f53fadde-9cb7-4b42-88db-555315723510,Namespace:default,Attempt:2,}" Feb 13 19:34:27.917317 systemd[1]: run-netns-cni\x2d204690ff\x2d5989\x2da1fa\x2d3862\x2d323f796578c1.mount: Deactivated successfully. 
Feb 13 19:34:28.776023 kubelet[1803]: E0213 19:34:28.775937 1803 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 19:34:29.593787 containerd[1496]: time="2025-02-13T19:34:29.593627569Z" level=error msg="Failed to destroy network for sandbox \"61c8eb8e8d761ba2b46d79add005137b6fb7a47d81065b94b4c7a70143810d6e\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:34:29.594580 containerd[1496]: time="2025-02-13T19:34:29.594123690Z" level=error msg="encountered an error cleaning up failed sandbox \"61c8eb8e8d761ba2b46d79add005137b6fb7a47d81065b94b4c7a70143810d6e\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:34:29.594580 containerd[1496]: time="2025-02-13T19:34:29.594194442Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-7fcdb87857-vjtzw,Uid:f53fadde-9cb7-4b42-88db-555315723510,Namespace:default,Attempt:2,} failed, error" error="failed to setup network for sandbox \"61c8eb8e8d761ba2b46d79add005137b6fb7a47d81065b94b4c7a70143810d6e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:34:29.594733 kubelet[1803]: E0213 19:34:29.594591 1803 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"61c8eb8e8d761ba2b46d79add005137b6fb7a47d81065b94b4c7a70143810d6e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted 
/var/lib/calico/" Feb 13 19:34:29.594733 kubelet[1803]: E0213 19:34:29.594685 1803 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"61c8eb8e8d761ba2b46d79add005137b6fb7a47d81065b94b4c7a70143810d6e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="default/nginx-deployment-7fcdb87857-vjtzw" Feb 13 19:34:29.594733 kubelet[1803]: E0213 19:34:29.594717 1803 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"61c8eb8e8d761ba2b46d79add005137b6fb7a47d81065b94b4c7a70143810d6e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="default/nginx-deployment-7fcdb87857-vjtzw" Feb 13 19:34:29.594895 kubelet[1803]: E0213 19:34:29.594764 1803 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"nginx-deployment-7fcdb87857-vjtzw_default(f53fadde-9cb7-4b42-88db-555315723510)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"nginx-deployment-7fcdb87857-vjtzw_default(f53fadde-9cb7-4b42-88db-555315723510)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"61c8eb8e8d761ba2b46d79add005137b6fb7a47d81065b94b4c7a70143810d6e\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="default/nginx-deployment-7fcdb87857-vjtzw" podUID="f53fadde-9cb7-4b42-88db-555315723510" Feb 13 19:34:29.615206 containerd[1496]: time="2025-02-13T19:34:29.614932844Z" level=error msg="Failed to destroy network for sandbox 
\"4f93ae63a51e7d7b90f83837afcb9cc5720c546a09668848c235c377c73a835d\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:34:29.615427 containerd[1496]: time="2025-02-13T19:34:29.615395121Z" level=error msg="encountered an error cleaning up failed sandbox \"4f93ae63a51e7d7b90f83837afcb9cc5720c546a09668848c235c377c73a835d\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:34:29.615512 containerd[1496]: time="2025-02-13T19:34:29.615483236Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-6zx89,Uid:eabcb63e-3a6f-48e9-b953-604013f3f97d,Namespace:calico-system,Attempt:3,} failed, error" error="failed to setup network for sandbox \"4f93ae63a51e7d7b90f83837afcb9cc5720c546a09668848c235c377c73a835d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:34:29.615838 kubelet[1803]: E0213 19:34:29.615759 1803 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"4f93ae63a51e7d7b90f83837afcb9cc5720c546a09668848c235c377c73a835d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:34:29.615838 kubelet[1803]: E0213 19:34:29.615842 1803 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"4f93ae63a51e7d7b90f83837afcb9cc5720c546a09668848c235c377c73a835d\": plugin type=\"calico\" failed (add): stat 
/var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-6zx89" Feb 13 19:34:29.616181 kubelet[1803]: E0213 19:34:29.615871 1803 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"4f93ae63a51e7d7b90f83837afcb9cc5720c546a09668848c235c377c73a835d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-6zx89" Feb 13 19:34:29.616181 kubelet[1803]: E0213 19:34:29.615925 1803 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-6zx89_calico-system(eabcb63e-3a6f-48e9-b953-604013f3f97d)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-6zx89_calico-system(eabcb63e-3a6f-48e9-b953-604013f3f97d)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"4f93ae63a51e7d7b90f83837afcb9cc5720c546a09668848c235c377c73a835d\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-6zx89" podUID="eabcb63e-3a6f-48e9-b953-604013f3f97d" Feb 13 19:34:29.776502 kubelet[1803]: E0213 19:34:29.776427 1803 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 19:34:29.917267 kubelet[1803]: I0213 19:34:29.917156 1803 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="61c8eb8e8d761ba2b46d79add005137b6fb7a47d81065b94b4c7a70143810d6e" Feb 13 19:34:29.918040 containerd[1496]: time="2025-02-13T19:34:29.917944670Z" level=info msg="StopPodSandbox for 
\"61c8eb8e8d761ba2b46d79add005137b6fb7a47d81065b94b4c7a70143810d6e\"" Feb 13 19:34:29.918297 containerd[1496]: time="2025-02-13T19:34:29.918251204Z" level=info msg="Ensure that sandbox 61c8eb8e8d761ba2b46d79add005137b6fb7a47d81065b94b4c7a70143810d6e in task-service has been cleanup successfully" Feb 13 19:34:29.918832 containerd[1496]: time="2025-02-13T19:34:29.918679979Z" level=info msg="TearDown network for sandbox \"61c8eb8e8d761ba2b46d79add005137b6fb7a47d81065b94b4c7a70143810d6e\" successfully" Feb 13 19:34:29.918832 containerd[1496]: time="2025-02-13T19:34:29.918704715Z" level=info msg="StopPodSandbox for \"61c8eb8e8d761ba2b46d79add005137b6fb7a47d81065b94b4c7a70143810d6e\" returns successfully" Feb 13 19:34:29.919154 containerd[1496]: time="2025-02-13T19:34:29.919107560Z" level=info msg="StopPodSandbox for \"9cd34e05953edec13a93645610cc390f9412c5dfa27e9150f547738284ab4ce0\"" Feb 13 19:34:29.919254 containerd[1496]: time="2025-02-13T19:34:29.919222536Z" level=info msg="TearDown network for sandbox \"9cd34e05953edec13a93645610cc390f9412c5dfa27e9150f547738284ab4ce0\" successfully" Feb 13 19:34:29.919254 containerd[1496]: time="2025-02-13T19:34:29.919246681Z" level=info msg="StopPodSandbox for \"9cd34e05953edec13a93645610cc390f9412c5dfa27e9150f547738284ab4ce0\" returns successfully" Feb 13 19:34:29.920002 containerd[1496]: time="2025-02-13T19:34:29.919754123Z" level=info msg="StopPodSandbox for \"2d5407477e895d47961fc614874df08a11952f1b5b3699908fcf06831a5ee510\"" Feb 13 19:34:29.920002 containerd[1496]: time="2025-02-13T19:34:29.919852277Z" level=info msg="TearDown network for sandbox \"2d5407477e895d47961fc614874df08a11952f1b5b3699908fcf06831a5ee510\" successfully" Feb 13 19:34:29.920002 containerd[1496]: time="2025-02-13T19:34:29.919865792Z" level=info msg="StopPodSandbox for \"2d5407477e895d47961fc614874df08a11952f1b5b3699908fcf06831a5ee510\" returns successfully" Feb 13 19:34:29.921115 containerd[1496]: time="2025-02-13T19:34:29.920680440Z" level=info 
msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-7fcdb87857-vjtzw,Uid:f53fadde-9cb7-4b42-88db-555315723510,Namespace:default,Attempt:3,}" Feb 13 19:34:29.921183 kubelet[1803]: I0213 19:34:29.920930 1803 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="4f93ae63a51e7d7b90f83837afcb9cc5720c546a09668848c235c377c73a835d" Feb 13 19:34:29.921535 containerd[1496]: time="2025-02-13T19:34:29.921509164Z" level=info msg="StopPodSandbox for \"4f93ae63a51e7d7b90f83837afcb9cc5720c546a09668848c235c377c73a835d\"" Feb 13 19:34:29.924228 containerd[1496]: time="2025-02-13T19:34:29.924195642Z" level=info msg="Ensure that sandbox 4f93ae63a51e7d7b90f83837afcb9cc5720c546a09668848c235c377c73a835d in task-service has been cleanup successfully" Feb 13 19:34:29.924375 containerd[1496]: time="2025-02-13T19:34:29.924354210Z" level=info msg="TearDown network for sandbox \"4f93ae63a51e7d7b90f83837afcb9cc5720c546a09668848c235c377c73a835d\" successfully" Feb 13 19:34:29.924375 containerd[1496]: time="2025-02-13T19:34:29.924369559Z" level=info msg="StopPodSandbox for \"4f93ae63a51e7d7b90f83837afcb9cc5720c546a09668848c235c377c73a835d\" returns successfully" Feb 13 19:34:29.924852 containerd[1496]: time="2025-02-13T19:34:29.924686122Z" level=info msg="StopPodSandbox for \"dca75b6439788db7d359786128b7554681c0a80286036cfa38bf74493e3a86e5\"" Feb 13 19:34:29.924852 containerd[1496]: time="2025-02-13T19:34:29.924779317Z" level=info msg="TearDown network for sandbox \"dca75b6439788db7d359786128b7554681c0a80286036cfa38bf74493e3a86e5\" successfully" Feb 13 19:34:29.924852 containerd[1496]: time="2025-02-13T19:34:29.924791189Z" level=info msg="StopPodSandbox for \"dca75b6439788db7d359786128b7554681c0a80286036cfa38bf74493e3a86e5\" returns successfully" Feb 13 19:34:29.925234 containerd[1496]: time="2025-02-13T19:34:29.925211698Z" level=info msg="StopPodSandbox for \"40ce0d3f93ac9bc36831d03be049f80ec3b3ce5288313163d0a969a5767140ca\"" Feb 13 19:34:29.925326 
containerd[1496]: time="2025-02-13T19:34:29.925304492Z" level=info msg="TearDown network for sandbox \"40ce0d3f93ac9bc36831d03be049f80ec3b3ce5288313163d0a969a5767140ca\" successfully" Feb 13 19:34:29.925354 containerd[1496]: time="2025-02-13T19:34:29.925324489Z" level=info msg="StopPodSandbox for \"40ce0d3f93ac9bc36831d03be049f80ec3b3ce5288313163d0a969a5767140ca\" returns successfully" Feb 13 19:34:29.925820 containerd[1496]: time="2025-02-13T19:34:29.925790734Z" level=info msg="StopPodSandbox for \"a13248a01d5d3c9fc80f814e6f9f5286b4cc830a32ecc6dcc90894db7cbb4737\"" Feb 13 19:34:29.926111 containerd[1496]: time="2025-02-13T19:34:29.926047445Z" level=info msg="TearDown network for sandbox \"a13248a01d5d3c9fc80f814e6f9f5286b4cc830a32ecc6dcc90894db7cbb4737\" successfully" Feb 13 19:34:29.926111 containerd[1496]: time="2025-02-13T19:34:29.926065900Z" level=info msg="StopPodSandbox for \"a13248a01d5d3c9fc80f814e6f9f5286b4cc830a32ecc6dcc90894db7cbb4737\" returns successfully" Feb 13 19:34:29.926515 containerd[1496]: time="2025-02-13T19:34:29.926487881Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-6zx89,Uid:eabcb63e-3a6f-48e9-b953-604013f3f97d,Namespace:calico-system,Attempt:4,}" Feb 13 19:34:30.071870 containerd[1496]: time="2025-02-13T19:34:30.071400113Z" level=error msg="Failed to destroy network for sandbox \"593e93a7c93e97fa8c6ff251c129baa378619bf760122565567f6854c50fbb3c\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:34:30.071870 containerd[1496]: time="2025-02-13T19:34:30.071823868Z" level=error msg="encountered an error cleaning up failed sandbox \"593e93a7c93e97fa8c6ff251c129baa378619bf760122565567f6854c50fbb3c\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the 
calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:34:30.072098 containerd[1496]: time="2025-02-13T19:34:30.071897807Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-7fcdb87857-vjtzw,Uid:f53fadde-9cb7-4b42-88db-555315723510,Namespace:default,Attempt:3,} failed, error" error="failed to setup network for sandbox \"593e93a7c93e97fa8c6ff251c129baa378619bf760122565567f6854c50fbb3c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:34:30.072313 kubelet[1803]: E0213 19:34:30.072252 1803 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"593e93a7c93e97fa8c6ff251c129baa378619bf760122565567f6854c50fbb3c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:34:30.072395 kubelet[1803]: E0213 19:34:30.072347 1803 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"593e93a7c93e97fa8c6ff251c129baa378619bf760122565567f6854c50fbb3c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="default/nginx-deployment-7fcdb87857-vjtzw" Feb 13 19:34:30.072395 kubelet[1803]: E0213 19:34:30.072377 1803 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"593e93a7c93e97fa8c6ff251c129baa378619bf760122565567f6854c50fbb3c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted 
/var/lib/calico/" pod="default/nginx-deployment-7fcdb87857-vjtzw" Feb 13 19:34:30.072470 kubelet[1803]: E0213 19:34:30.072441 1803 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"nginx-deployment-7fcdb87857-vjtzw_default(f53fadde-9cb7-4b42-88db-555315723510)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"nginx-deployment-7fcdb87857-vjtzw_default(f53fadde-9cb7-4b42-88db-555315723510)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"593e93a7c93e97fa8c6ff251c129baa378619bf760122565567f6854c50fbb3c\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="default/nginx-deployment-7fcdb87857-vjtzw" podUID="f53fadde-9cb7-4b42-88db-555315723510" Feb 13 19:34:30.074620 containerd[1496]: time="2025-02-13T19:34:30.074576300Z" level=error msg="Failed to destroy network for sandbox \"6487c38d3d737762e509c2c676bed16add681d6c50a9ba9df1822a12e95fe016\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:34:30.077986 containerd[1496]: time="2025-02-13T19:34:30.075176676Z" level=error msg="encountered an error cleaning up failed sandbox \"6487c38d3d737762e509c2c676bed16add681d6c50a9ba9df1822a12e95fe016\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:34:30.077986 containerd[1496]: time="2025-02-13T19:34:30.075264350Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-6zx89,Uid:eabcb63e-3a6f-48e9-b953-604013f3f97d,Namespace:calico-system,Attempt:4,} failed, error" error="failed to setup network for 
sandbox \"6487c38d3d737762e509c2c676bed16add681d6c50a9ba9df1822a12e95fe016\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:34:30.078089 kubelet[1803]: E0213 19:34:30.075532 1803 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6487c38d3d737762e509c2c676bed16add681d6c50a9ba9df1822a12e95fe016\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:34:30.078089 kubelet[1803]: E0213 19:34:30.075596 1803 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6487c38d3d737762e509c2c676bed16add681d6c50a9ba9df1822a12e95fe016\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-6zx89" Feb 13 19:34:30.078089 kubelet[1803]: E0213 19:34:30.075620 1803 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6487c38d3d737762e509c2c676bed16add681d6c50a9ba9df1822a12e95fe016\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-6zx89" Feb 13 19:34:30.078178 kubelet[1803]: E0213 19:34:30.075675 1803 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-6zx89_calico-system(eabcb63e-3a6f-48e9-b953-604013f3f97d)\" with CreatePodSandboxError: \"Failed to create sandbox for pod 
\\\"csi-node-driver-6zx89_calico-system(eabcb63e-3a6f-48e9-b953-604013f3f97d)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"6487c38d3d737762e509c2c676bed16add681d6c50a9ba9df1822a12e95fe016\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-6zx89" podUID="eabcb63e-3a6f-48e9-b953-604013f3f97d" Feb 13 19:34:30.113861 systemd[1]: run-netns-cni\x2d216c39b7\x2d8cdc\x2dba33\x2df236\x2df2a0f3c1b201.mount: Deactivated successfully. Feb 13 19:34:30.114014 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-61c8eb8e8d761ba2b46d79add005137b6fb7a47d81065b94b4c7a70143810d6e-shm.mount: Deactivated successfully. Feb 13 19:34:30.114100 systemd[1]: run-netns-cni\x2d883d431d\x2d80c6\x2d724e\x2d108b\x2d8b914aa899d6.mount: Deactivated successfully. Feb 13 19:34:30.114175 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-4f93ae63a51e7d7b90f83837afcb9cc5720c546a09668848c235c377c73a835d-shm.mount: Deactivated successfully. 
Feb 13 19:34:30.776822 kubelet[1803]: E0213 19:34:30.776761 1803 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 19:34:30.928753 kubelet[1803]: I0213 19:34:30.928446 1803 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="593e93a7c93e97fa8c6ff251c129baa378619bf760122565567f6854c50fbb3c" Feb 13 19:34:30.929204 containerd[1496]: time="2025-02-13T19:34:30.929173925Z" level=info msg="StopPodSandbox for \"593e93a7c93e97fa8c6ff251c129baa378619bf760122565567f6854c50fbb3c\"" Feb 13 19:34:30.930050 containerd[1496]: time="2025-02-13T19:34:30.929770775Z" level=info msg="Ensure that sandbox 593e93a7c93e97fa8c6ff251c129baa378619bf760122565567f6854c50fbb3c in task-service has been cleanup successfully" Feb 13 19:34:30.930050 containerd[1496]: time="2025-02-13T19:34:30.929990377Z" level=info msg="TearDown network for sandbox \"593e93a7c93e97fa8c6ff251c129baa378619bf760122565567f6854c50fbb3c\" successfully" Feb 13 19:34:30.930050 containerd[1496]: time="2025-02-13T19:34:30.930009492Z" level=info msg="StopPodSandbox for \"593e93a7c93e97fa8c6ff251c129baa378619bf760122565567f6854c50fbb3c\" returns successfully" Feb 13 19:34:30.930799 containerd[1496]: time="2025-02-13T19:34:30.930778424Z" level=info msg="StopPodSandbox for \"61c8eb8e8d761ba2b46d79add005137b6fb7a47d81065b94b4c7a70143810d6e\"" Feb 13 19:34:30.931078 containerd[1496]: time="2025-02-13T19:34:30.930938535Z" level=info msg="TearDown network for sandbox \"61c8eb8e8d761ba2b46d79add005137b6fb7a47d81065b94b4c7a70143810d6e\" successfully" Feb 13 19:34:30.931078 containerd[1496]: time="2025-02-13T19:34:30.930955977Z" level=info msg="StopPodSandbox for \"61c8eb8e8d761ba2b46d79add005137b6fb7a47d81065b94b4c7a70143810d6e\" returns successfully" Feb 13 19:34:30.932412 containerd[1496]: time="2025-02-13T19:34:30.932387201Z" level=info msg="StopPodSandbox for \"9cd34e05953edec13a93645610cc390f9412c5dfa27e9150f547738284ab4ce0\"" 
Feb 13 19:34:30.932425 systemd[1]: run-netns-cni\x2da897aa3e\x2dd701\x2d99d2\x2df9cb\x2d673ec664b24c.mount: Deactivated successfully. Feb 13 19:34:30.932801 containerd[1496]: time="2025-02-13T19:34:30.932667627Z" level=info msg="TearDown network for sandbox \"9cd34e05953edec13a93645610cc390f9412c5dfa27e9150f547738284ab4ce0\" successfully" Feb 13 19:34:30.932801 containerd[1496]: time="2025-02-13T19:34:30.932737448Z" level=info msg="StopPodSandbox for \"9cd34e05953edec13a93645610cc390f9412c5dfa27e9150f547738284ab4ce0\" returns successfully" Feb 13 19:34:30.933076 containerd[1496]: time="2025-02-13T19:34:30.933054132Z" level=info msg="StopPodSandbox for \"2d5407477e895d47961fc614874df08a11952f1b5b3699908fcf06831a5ee510\"" Feb 13 19:34:30.933345 containerd[1496]: time="2025-02-13T19:34:30.933275417Z" level=info msg="TearDown network for sandbox \"2d5407477e895d47961fc614874df08a11952f1b5b3699908fcf06831a5ee510\" successfully" Feb 13 19:34:30.933345 containerd[1496]: time="2025-02-13T19:34:30.933300775Z" level=info msg="StopPodSandbox for \"2d5407477e895d47961fc614874df08a11952f1b5b3699908fcf06831a5ee510\" returns successfully" Feb 13 19:34:30.933777 containerd[1496]: time="2025-02-13T19:34:30.933716044Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-7fcdb87857-vjtzw,Uid:f53fadde-9cb7-4b42-88db-555315723510,Namespace:default,Attempt:4,}" Feb 13 19:34:30.934140 kubelet[1803]: I0213 19:34:30.934111 1803 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="6487c38d3d737762e509c2c676bed16add681d6c50a9ba9df1822a12e95fe016" Feb 13 19:34:30.934632 containerd[1496]: time="2025-02-13T19:34:30.934544617Z" level=info msg="StopPodSandbox for \"6487c38d3d737762e509c2c676bed16add681d6c50a9ba9df1822a12e95fe016\"" Feb 13 19:34:30.934883 containerd[1496]: time="2025-02-13T19:34:30.934830203Z" level=info msg="Ensure that sandbox 6487c38d3d737762e509c2c676bed16add681d6c50a9ba9df1822a12e95fe016 in task-service has been cleanup 
successfully" Feb 13 19:34:30.935193 containerd[1496]: time="2025-02-13T19:34:30.935122631Z" level=info msg="TearDown network for sandbox \"6487c38d3d737762e509c2c676bed16add681d6c50a9ba9df1822a12e95fe016\" successfully" Feb 13 19:34:30.935193 containerd[1496]: time="2025-02-13T19:34:30.935147869Z" level=info msg="StopPodSandbox for \"6487c38d3d737762e509c2c676bed16add681d6c50a9ba9df1822a12e95fe016\" returns successfully" Feb 13 19:34:30.935527 containerd[1496]: time="2025-02-13T19:34:30.935505399Z" level=info msg="StopPodSandbox for \"4f93ae63a51e7d7b90f83837afcb9cc5720c546a09668848c235c377c73a835d\"" Feb 13 19:34:30.935600 containerd[1496]: time="2025-02-13T19:34:30.935583245Z" level=info msg="TearDown network for sandbox \"4f93ae63a51e7d7b90f83837afcb9cc5720c546a09668848c235c377c73a835d\" successfully" Feb 13 19:34:30.935641 containerd[1496]: time="2025-02-13T19:34:30.935598925Z" level=info msg="StopPodSandbox for \"4f93ae63a51e7d7b90f83837afcb9cc5720c546a09668848c235c377c73a835d\" returns successfully" Feb 13 19:34:30.936298 containerd[1496]: time="2025-02-13T19:34:30.936277086Z" level=info msg="StopPodSandbox for \"dca75b6439788db7d359786128b7554681c0a80286036cfa38bf74493e3a86e5\"" Feb 13 19:34:30.936520 containerd[1496]: time="2025-02-13T19:34:30.936450522Z" level=info msg="TearDown network for sandbox \"dca75b6439788db7d359786128b7554681c0a80286036cfa38bf74493e3a86e5\" successfully" Feb 13 19:34:30.936520 containerd[1496]: time="2025-02-13T19:34:30.936488914Z" level=info msg="StopPodSandbox for \"dca75b6439788db7d359786128b7554681c0a80286036cfa38bf74493e3a86e5\" returns successfully" Feb 13 19:34:30.936823 containerd[1496]: time="2025-02-13T19:34:30.936798254Z" level=info msg="StopPodSandbox for \"40ce0d3f93ac9bc36831d03be049f80ec3b3ce5288313163d0a969a5767140ca\"" Feb 13 19:34:30.936947 containerd[1496]: time="2025-02-13T19:34:30.936912518Z" level=info msg="TearDown network for sandbox \"40ce0d3f93ac9bc36831d03be049f80ec3b3ce5288313163d0a969a5767140ca\" 
successfully" Feb 13 19:34:30.936947 containerd[1496]: time="2025-02-13T19:34:30.936945239Z" level=info msg="StopPodSandbox for \"40ce0d3f93ac9bc36831d03be049f80ec3b3ce5288313163d0a969a5767140ca\" returns successfully" Feb 13 19:34:30.937292 containerd[1496]: time="2025-02-13T19:34:30.937246314Z" level=info msg="StopPodSandbox for \"a13248a01d5d3c9fc80f814e6f9f5286b4cc830a32ecc6dcc90894db7cbb4737\"" Feb 13 19:34:30.937425 containerd[1496]: time="2025-02-13T19:34:30.937387439Z" level=info msg="TearDown network for sandbox \"a13248a01d5d3c9fc80f814e6f9f5286b4cc830a32ecc6dcc90894db7cbb4737\" successfully" Feb 13 19:34:30.937425 containerd[1496]: time="2025-02-13T19:34:30.937405573Z" level=info msg="StopPodSandbox for \"a13248a01d5d3c9fc80f814e6f9f5286b4cc830a32ecc6dcc90894db7cbb4737\" returns successfully" Feb 13 19:34:30.937802 containerd[1496]: time="2025-02-13T19:34:30.937781898Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-6zx89,Uid:eabcb63e-3a6f-48e9-b953-604013f3f97d,Namespace:calico-system,Attempt:5,}" Feb 13 19:34:30.937941 systemd[1]: run-netns-cni\x2d5e798056\x2d8821\x2d4fcc\x2da095\x2d237b14c14b30.mount: Deactivated successfully. 
Feb 13 19:34:31.703942 containerd[1496]: time="2025-02-13T19:34:31.703872201Z" level=error msg="Failed to destroy network for sandbox \"a033c0a45f6d0e7992542d92dec7c1201f2a3d238f279c6eaa54944f24b8445d\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:34:31.706352 containerd[1496]: time="2025-02-13T19:34:31.706183325Z" level=error msg="encountered an error cleaning up failed sandbox \"a033c0a45f6d0e7992542d92dec7c1201f2a3d238f279c6eaa54944f24b8445d\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:34:31.706352 containerd[1496]: time="2025-02-13T19:34:31.706249540Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-7fcdb87857-vjtzw,Uid:f53fadde-9cb7-4b42-88db-555315723510,Namespace:default,Attempt:4,} failed, error" error="failed to setup network for sandbox \"a033c0a45f6d0e7992542d92dec7c1201f2a3d238f279c6eaa54944f24b8445d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:34:31.706520 kubelet[1803]: E0213 19:34:31.706486 1803 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a033c0a45f6d0e7992542d92dec7c1201f2a3d238f279c6eaa54944f24b8445d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:34:31.706595 kubelet[1803]: E0213 19:34:31.706555 1803 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to 
setup network for sandbox \"a033c0a45f6d0e7992542d92dec7c1201f2a3d238f279c6eaa54944f24b8445d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="default/nginx-deployment-7fcdb87857-vjtzw" Feb 13 19:34:31.706595 kubelet[1803]: E0213 19:34:31.706576 1803 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a033c0a45f6d0e7992542d92dec7c1201f2a3d238f279c6eaa54944f24b8445d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="default/nginx-deployment-7fcdb87857-vjtzw" Feb 13 19:34:31.706671 kubelet[1803]: E0213 19:34:31.706624 1803 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"nginx-deployment-7fcdb87857-vjtzw_default(f53fadde-9cb7-4b42-88db-555315723510)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"nginx-deployment-7fcdb87857-vjtzw_default(f53fadde-9cb7-4b42-88db-555315723510)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"a033c0a45f6d0e7992542d92dec7c1201f2a3d238f279c6eaa54944f24b8445d\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="default/nginx-deployment-7fcdb87857-vjtzw" podUID="f53fadde-9cb7-4b42-88db-555315723510" Feb 13 19:34:31.706838 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-a033c0a45f6d0e7992542d92dec7c1201f2a3d238f279c6eaa54944f24b8445d-shm.mount: Deactivated successfully. 
Feb 13 19:34:31.713634 containerd[1496]: time="2025-02-13T19:34:31.713567764Z" level=error msg="Failed to destroy network for sandbox \"9d124b1bdd0bfef2b16af274e9fb7d2140f25bc2eecd4f24e9f4718f75274b30\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:34:31.714148 containerd[1496]: time="2025-02-13T19:34:31.714098850Z" level=error msg="encountered an error cleaning up failed sandbox \"9d124b1bdd0bfef2b16af274e9fb7d2140f25bc2eecd4f24e9f4718f75274b30\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:34:31.714200 containerd[1496]: time="2025-02-13T19:34:31.714176035Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-6zx89,Uid:eabcb63e-3a6f-48e9-b953-604013f3f97d,Namespace:calico-system,Attempt:5,} failed, error" error="failed to setup network for sandbox \"9d124b1bdd0bfef2b16af274e9fb7d2140f25bc2eecd4f24e9f4718f75274b30\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:34:31.716004 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-9d124b1bdd0bfef2b16af274e9fb7d2140f25bc2eecd4f24e9f4718f75274b30-shm.mount: Deactivated successfully. 
Feb 13 19:34:31.716138 kubelet[1803]: E0213 19:34:31.715990 1803 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"9d124b1bdd0bfef2b16af274e9fb7d2140f25bc2eecd4f24e9f4718f75274b30\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:34:31.716138 kubelet[1803]: E0213 19:34:31.716065 1803 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"9d124b1bdd0bfef2b16af274e9fb7d2140f25bc2eecd4f24e9f4718f75274b30\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-6zx89" Feb 13 19:34:31.716138 kubelet[1803]: E0213 19:34:31.716093 1803 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"9d124b1bdd0bfef2b16af274e9fb7d2140f25bc2eecd4f24e9f4718f75274b30\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-6zx89" Feb 13 19:34:31.716270 kubelet[1803]: E0213 19:34:31.716147 1803 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-6zx89_calico-system(eabcb63e-3a6f-48e9-b953-604013f3f97d)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-6zx89_calico-system(eabcb63e-3a6f-48e9-b953-604013f3f97d)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"9d124b1bdd0bfef2b16af274e9fb7d2140f25bc2eecd4f24e9f4718f75274b30\\\": plugin type=\\\"calico\\\" failed (add): stat 
/var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-6zx89" podUID="eabcb63e-3a6f-48e9-b953-604013f3f97d" Feb 13 19:34:31.777499 kubelet[1803]: E0213 19:34:31.777406 1803 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 19:34:31.828123 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1022197748.mount: Deactivated successfully. Feb 13 19:34:31.874959 containerd[1496]: time="2025-02-13T19:34:31.874900761Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:34:31.875795 containerd[1496]: time="2025-02-13T19:34:31.875731188Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.29.1: active requests=0, bytes read=142742010" Feb 13 19:34:31.876825 containerd[1496]: time="2025-02-13T19:34:31.876777771Z" level=info msg="ImageCreate event name:\"sha256:feb26d4585d68e875d9bd9bd6c27ea9f2d5c9ed9ef70f8b8cb0ebb0559a1d664\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:34:31.878916 containerd[1496]: time="2025-02-13T19:34:31.878874182Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:99c3917516efe1f807a0cfdf2d14b628b7c5cc6bd8a9ee5a253154f31756bea1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:34:31.879530 containerd[1496]: time="2025-02-13T19:34:31.879492171Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.29.1\" with image id \"sha256:feb26d4585d68e875d9bd9bd6c27ea9f2d5c9ed9ef70f8b8cb0ebb0559a1d664\", repo tag \"ghcr.io/flatcar/calico/node:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/node@sha256:99c3917516efe1f807a0cfdf2d14b628b7c5cc6bd8a9ee5a253154f31756bea1\", size \"142741872\" in 6.984820862s" Feb 13 19:34:31.879576 containerd[1496]: time="2025-02-13T19:34:31.879528590Z" 
level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.29.1\" returns image reference \"sha256:feb26d4585d68e875d9bd9bd6c27ea9f2d5c9ed9ef70f8b8cb0ebb0559a1d664\"" Feb 13 19:34:31.888674 containerd[1496]: time="2025-02-13T19:34:31.888636460Z" level=info msg="CreateContainer within sandbox \"6dd2145c463e84e4871ee26f2443ff1d65501c6ec43c231fd0773b73d7112849\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Feb 13 19:34:31.905809 containerd[1496]: time="2025-02-13T19:34:31.905745416Z" level=info msg="CreateContainer within sandbox \"6dd2145c463e84e4871ee26f2443ff1d65501c6ec43c231fd0773b73d7112849\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"f5d5a2c05d3790cf0f6ff043005385642dd1470a4380b2e5c994e10a8601ec9a\"" Feb 13 19:34:31.906387 containerd[1496]: time="2025-02-13T19:34:31.906356291Z" level=info msg="StartContainer for \"f5d5a2c05d3790cf0f6ff043005385642dd1470a4380b2e5c994e10a8601ec9a\"" Feb 13 19:34:31.937455 kubelet[1803]: I0213 19:34:31.937410 1803 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a033c0a45f6d0e7992542d92dec7c1201f2a3d238f279c6eaa54944f24b8445d" Feb 13 19:34:31.938100 containerd[1496]: time="2025-02-13T19:34:31.938063103Z" level=info msg="StopPodSandbox for \"a033c0a45f6d0e7992542d92dec7c1201f2a3d238f279c6eaa54944f24b8445d\"" Feb 13 19:34:31.938386 containerd[1496]: time="2025-02-13T19:34:31.938280360Z" level=info msg="Ensure that sandbox a033c0a45f6d0e7992542d92dec7c1201f2a3d238f279c6eaa54944f24b8445d in task-service has been cleanup successfully" Feb 13 19:34:31.938123 systemd[1]: Started cri-containerd-f5d5a2c05d3790cf0f6ff043005385642dd1470a4380b2e5c994e10a8601ec9a.scope - libcontainer container f5d5a2c05d3790cf0f6ff043005385642dd1470a4380b2e5c994e10a8601ec9a. 
Feb 13 19:34:31.938587 containerd[1496]: time="2025-02-13T19:34:31.938502196Z" level=info msg="TearDown network for sandbox \"a033c0a45f6d0e7992542d92dec7c1201f2a3d238f279c6eaa54944f24b8445d\" successfully" Feb 13 19:34:31.938587 containerd[1496]: time="2025-02-13T19:34:31.938527203Z" level=info msg="StopPodSandbox for \"a033c0a45f6d0e7992542d92dec7c1201f2a3d238f279c6eaa54944f24b8445d\" returns successfully" Feb 13 19:34:31.938906 containerd[1496]: time="2025-02-13T19:34:31.938799684Z" level=info msg="StopPodSandbox for \"593e93a7c93e97fa8c6ff251c129baa378619bf760122565567f6854c50fbb3c\"" Feb 13 19:34:31.938906 containerd[1496]: time="2025-02-13T19:34:31.938899081Z" level=info msg="TearDown network for sandbox \"593e93a7c93e97fa8c6ff251c129baa378619bf760122565567f6854c50fbb3c\" successfully" Feb 13 19:34:31.938986 containerd[1496]: time="2025-02-13T19:34:31.938913548Z" level=info msg="StopPodSandbox for \"593e93a7c93e97fa8c6ff251c129baa378619bf760122565567f6854c50fbb3c\" returns successfully" Feb 13 19:34:31.939302 containerd[1496]: time="2025-02-13T19:34:31.939260017Z" level=info msg="StopPodSandbox for \"61c8eb8e8d761ba2b46d79add005137b6fb7a47d81065b94b4c7a70143810d6e\"" Feb 13 19:34:31.939412 containerd[1496]: time="2025-02-13T19:34:31.939388949Z" level=info msg="TearDown network for sandbox \"61c8eb8e8d761ba2b46d79add005137b6fb7a47d81065b94b4c7a70143810d6e\" successfully" Feb 13 19:34:31.939472 containerd[1496]: time="2025-02-13T19:34:31.939411111Z" level=info msg="StopPodSandbox for \"61c8eb8e8d761ba2b46d79add005137b6fb7a47d81065b94b4c7a70143810d6e\" returns successfully" Feb 13 19:34:31.939834 containerd[1496]: time="2025-02-13T19:34:31.939806953Z" level=info msg="StopPodSandbox for \"9cd34e05953edec13a93645610cc390f9412c5dfa27e9150f547738284ab4ce0\"" Feb 13 19:34:31.939943 containerd[1496]: time="2025-02-13T19:34:31.939920666Z" level=info msg="TearDown network for sandbox \"9cd34e05953edec13a93645610cc390f9412c5dfa27e9150f547738284ab4ce0\" successfully" Feb 
13 19:34:31.939943 containerd[1496]: time="2025-02-13T19:34:31.939938901Z" level=info msg="StopPodSandbox for \"9cd34e05953edec13a93645610cc390f9412c5dfa27e9150f547738284ab4ce0\" returns successfully" Feb 13 19:34:31.940324 containerd[1496]: time="2025-02-13T19:34:31.940299717Z" level=info msg="StopPodSandbox for \"2d5407477e895d47961fc614874df08a11952f1b5b3699908fcf06831a5ee510\"" Feb 13 19:34:31.940549 containerd[1496]: time="2025-02-13T19:34:31.940527104Z" level=info msg="TearDown network for sandbox \"2d5407477e895d47961fc614874df08a11952f1b5b3699908fcf06831a5ee510\" successfully" Feb 13 19:34:31.940646 containerd[1496]: time="2025-02-13T19:34:31.940615850Z" level=info msg="StopPodSandbox for \"2d5407477e895d47961fc614874df08a11952f1b5b3699908fcf06831a5ee510\" returns successfully" Feb 13 19:34:31.941671 containerd[1496]: time="2025-02-13T19:34:31.941569328Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-7fcdb87857-vjtzw,Uid:f53fadde-9cb7-4b42-88db-555315723510,Namespace:default,Attempt:5,}" Feb 13 19:34:31.944716 kubelet[1803]: I0213 19:34:31.944677 1803 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="9d124b1bdd0bfef2b16af274e9fb7d2140f25bc2eecd4f24e9f4718f75274b30" Feb 13 19:34:31.945427 containerd[1496]: time="2025-02-13T19:34:31.945398579Z" level=info msg="StopPodSandbox for \"9d124b1bdd0bfef2b16af274e9fb7d2140f25bc2eecd4f24e9f4718f75274b30\"" Feb 13 19:34:31.945680 containerd[1496]: time="2025-02-13T19:34:31.945646434Z" level=info msg="Ensure that sandbox 9d124b1bdd0bfef2b16af274e9fb7d2140f25bc2eecd4f24e9f4718f75274b30 in task-service has been cleanup successfully" Feb 13 19:34:31.945990 containerd[1496]: time="2025-02-13T19:34:31.945938201Z" level=info msg="TearDown network for sandbox \"9d124b1bdd0bfef2b16af274e9fb7d2140f25bc2eecd4f24e9f4718f75274b30\" successfully" Feb 13 19:34:31.945990 containerd[1496]: time="2025-02-13T19:34:31.945961325Z" level=info msg="StopPodSandbox for 
\"9d124b1bdd0bfef2b16af274e9fb7d2140f25bc2eecd4f24e9f4718f75274b30\" returns successfully" Feb 13 19:34:31.946371 containerd[1496]: time="2025-02-13T19:34:31.946318865Z" level=info msg="StopPodSandbox for \"6487c38d3d737762e509c2c676bed16add681d6c50a9ba9df1822a12e95fe016\"" Feb 13 19:34:31.946560 containerd[1496]: time="2025-02-13T19:34:31.946540280Z" level=info msg="TearDown network for sandbox \"6487c38d3d737762e509c2c676bed16add681d6c50a9ba9df1822a12e95fe016\" successfully" Feb 13 19:34:31.946649 containerd[1496]: time="2025-02-13T19:34:31.946634427Z" level=info msg="StopPodSandbox for \"6487c38d3d737762e509c2c676bed16add681d6c50a9ba9df1822a12e95fe016\" returns successfully" Feb 13 19:34:31.946936 containerd[1496]: time="2025-02-13T19:34:31.946898763Z" level=info msg="StopPodSandbox for \"4f93ae63a51e7d7b90f83837afcb9cc5720c546a09668848c235c377c73a835d\"" Feb 13 19:34:31.947044 containerd[1496]: time="2025-02-13T19:34:31.947019219Z" level=info msg="TearDown network for sandbox \"4f93ae63a51e7d7b90f83837afcb9cc5720c546a09668848c235c377c73a835d\" successfully" Feb 13 19:34:31.947044 containerd[1496]: time="2025-02-13T19:34:31.947041811Z" level=info msg="StopPodSandbox for \"4f93ae63a51e7d7b90f83837afcb9cc5720c546a09668848c235c377c73a835d\" returns successfully" Feb 13 19:34:31.947514 containerd[1496]: time="2025-02-13T19:34:31.947490082Z" level=info msg="StopPodSandbox for \"dca75b6439788db7d359786128b7554681c0a80286036cfa38bf74493e3a86e5\"" Feb 13 19:34:31.947611 containerd[1496]: time="2025-02-13T19:34:31.947581183Z" level=info msg="TearDown network for sandbox \"dca75b6439788db7d359786128b7554681c0a80286036cfa38bf74493e3a86e5\" successfully" Feb 13 19:34:31.947611 containerd[1496]: time="2025-02-13T19:34:31.947606360Z" level=info msg="StopPodSandbox for \"dca75b6439788db7d359786128b7554681c0a80286036cfa38bf74493e3a86e5\" returns successfully" Feb 13 19:34:31.947924 containerd[1496]: time="2025-02-13T19:34:31.947898738Z" level=info msg="StopPodSandbox for 
\"40ce0d3f93ac9bc36831d03be049f80ec3b3ce5288313163d0a969a5767140ca\"" Feb 13 19:34:31.948031 containerd[1496]: time="2025-02-13T19:34:31.948009406Z" level=info msg="TearDown network for sandbox \"40ce0d3f93ac9bc36831d03be049f80ec3b3ce5288313163d0a969a5767140ca\" successfully" Feb 13 19:34:31.948031 containerd[1496]: time="2025-02-13T19:34:31.948027870Z" level=info msg="StopPodSandbox for \"40ce0d3f93ac9bc36831d03be049f80ec3b3ce5288313163d0a969a5767140ca\" returns successfully" Feb 13 19:34:31.948571 containerd[1496]: time="2025-02-13T19:34:31.948532667Z" level=info msg="StopPodSandbox for \"a13248a01d5d3c9fc80f814e6f9f5286b4cc830a32ecc6dcc90894db7cbb4737\"" Feb 13 19:34:31.948705 containerd[1496]: time="2025-02-13T19:34:31.948680334Z" level=info msg="TearDown network for sandbox \"a13248a01d5d3c9fc80f814e6f9f5286b4cc830a32ecc6dcc90894db7cbb4737\" successfully" Feb 13 19:34:31.948705 containerd[1496]: time="2025-02-13T19:34:31.948698929Z" level=info msg="StopPodSandbox for \"a13248a01d5d3c9fc80f814e6f9f5286b4cc830a32ecc6dcc90894db7cbb4737\" returns successfully" Feb 13 19:34:31.949134 containerd[1496]: time="2025-02-13T19:34:31.949110370Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-6zx89,Uid:eabcb63e-3a6f-48e9-b953-604013f3f97d,Namespace:calico-system,Attempt:6,}" Feb 13 19:34:32.005448 containerd[1496]: time="2025-02-13T19:34:32.005311147Z" level=info msg="StartContainer for \"f5d5a2c05d3790cf0f6ff043005385642dd1470a4380b2e5c994e10a8601ec9a\" returns successfully" Feb 13 19:34:32.083625 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information. Feb 13 19:34:32.083861 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld . All Rights Reserved. 
Feb 13 19:34:32.101558 containerd[1496]: time="2025-02-13T19:34:32.101492992Z" level=error msg="Failed to destroy network for sandbox \"67e1b1967dd532774ef6b7e02b6a4358c36eafbba3613f2cd9b564cfd0d26d26\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:34:32.102227 containerd[1496]: time="2025-02-13T19:34:32.102190811Z" level=error msg="encountered an error cleaning up failed sandbox \"67e1b1967dd532774ef6b7e02b6a4358c36eafbba3613f2cd9b564cfd0d26d26\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:34:32.102392 containerd[1496]: time="2025-02-13T19:34:32.102360870Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-7fcdb87857-vjtzw,Uid:f53fadde-9cb7-4b42-88db-555315723510,Namespace:default,Attempt:5,} failed, error" error="failed to setup network for sandbox \"67e1b1967dd532774ef6b7e02b6a4358c36eafbba3613f2cd9b564cfd0d26d26\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:34:32.103050 kubelet[1803]: E0213 19:34:32.103009 1803 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"67e1b1967dd532774ef6b7e02b6a4358c36eafbba3613f2cd9b564cfd0d26d26\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:34:32.103119 kubelet[1803]: E0213 19:34:32.103073 1803 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to 
setup network for sandbox \"67e1b1967dd532774ef6b7e02b6a4358c36eafbba3613f2cd9b564cfd0d26d26\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="default/nginx-deployment-7fcdb87857-vjtzw" Feb 13 19:34:32.103119 kubelet[1803]: E0213 19:34:32.103101 1803 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"67e1b1967dd532774ef6b7e02b6a4358c36eafbba3613f2cd9b564cfd0d26d26\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="default/nginx-deployment-7fcdb87857-vjtzw" Feb 13 19:34:32.103201 kubelet[1803]: E0213 19:34:32.103141 1803 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"nginx-deployment-7fcdb87857-vjtzw_default(f53fadde-9cb7-4b42-88db-555315723510)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"nginx-deployment-7fcdb87857-vjtzw_default(f53fadde-9cb7-4b42-88db-555315723510)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"67e1b1967dd532774ef6b7e02b6a4358c36eafbba3613f2cd9b564cfd0d26d26\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="default/nginx-deployment-7fcdb87857-vjtzw" podUID="f53fadde-9cb7-4b42-88db-555315723510" Feb 13 19:34:32.111361 containerd[1496]: time="2025-02-13T19:34:32.111305945Z" level=error msg="Failed to destroy network for sandbox \"d04b37691224bce02a3175bdc4b6a52073600a1dfd7a6cd28f010a1638dd3fd2\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has 
mounted /var/lib/calico/" Feb 13 19:34:32.111734 containerd[1496]: time="2025-02-13T19:34:32.111699854Z" level=error msg="encountered an error cleaning up failed sandbox \"d04b37691224bce02a3175bdc4b6a52073600a1dfd7a6cd28f010a1638dd3fd2\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:34:32.111788 containerd[1496]: time="2025-02-13T19:34:32.111761459Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-6zx89,Uid:eabcb63e-3a6f-48e9-b953-604013f3f97d,Namespace:calico-system,Attempt:6,} failed, error" error="failed to setup network for sandbox \"d04b37691224bce02a3175bdc4b6a52073600a1dfd7a6cd28f010a1638dd3fd2\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:34:32.112037 kubelet[1803]: E0213 19:34:32.111995 1803 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d04b37691224bce02a3175bdc4b6a52073600a1dfd7a6cd28f010a1638dd3fd2\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:34:32.112111 kubelet[1803]: E0213 19:34:32.112065 1803 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d04b37691224bce02a3175bdc4b6a52073600a1dfd7a6cd28f010a1638dd3fd2\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-6zx89" Feb 13 19:34:32.112111 kubelet[1803]: E0213 
19:34:32.112100 1803 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d04b37691224bce02a3175bdc4b6a52073600a1dfd7a6cd28f010a1638dd3fd2\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-6zx89" Feb 13 19:34:32.112190 kubelet[1803]: E0213 19:34:32.112157 1803 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-6zx89_calico-system(eabcb63e-3a6f-48e9-b953-604013f3f97d)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-6zx89_calico-system(eabcb63e-3a6f-48e9-b953-604013f3f97d)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"d04b37691224bce02a3175bdc4b6a52073600a1dfd7a6cd28f010a1638dd3fd2\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-6zx89" podUID="eabcb63e-3a6f-48e9-b953-604013f3f97d" Feb 13 19:34:32.607002 systemd[1]: run-netns-cni\x2d372c61b1\x2d0c60\x2de674\x2d0b7c\x2db1374fb18b0f.mount: Deactivated successfully. Feb 13 19:34:32.607128 systemd[1]: run-netns-cni\x2de8dd27dc\x2d9c68\x2d1a2e\x2decd3\x2d7f34e3c9b17c.mount: Deactivated successfully. 
Feb 13 19:34:32.777684 kubelet[1803]: E0213 19:34:32.777591 1803 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 19:34:32.949172 kubelet[1803]: E0213 19:34:32.949135 1803 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:34:32.952380 kubelet[1803]: I0213 19:34:32.951544 1803 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d04b37691224bce02a3175bdc4b6a52073600a1dfd7a6cd28f010a1638dd3fd2" Feb 13 19:34:32.952460 containerd[1496]: time="2025-02-13T19:34:32.952057594Z" level=info msg="StopPodSandbox for \"d04b37691224bce02a3175bdc4b6a52073600a1dfd7a6cd28f010a1638dd3fd2\"" Feb 13 19:34:32.952460 containerd[1496]: time="2025-02-13T19:34:32.952225660Z" level=info msg="Ensure that sandbox d04b37691224bce02a3175bdc4b6a52073600a1dfd7a6cd28f010a1638dd3fd2 in task-service has been cleanup successfully" Feb 13 19:34:32.954012 containerd[1496]: time="2025-02-13T19:34:32.953779113Z" level=info msg="TearDown network for sandbox \"d04b37691224bce02a3175bdc4b6a52073600a1dfd7a6cd28f010a1638dd3fd2\" successfully" Feb 13 19:34:32.954012 containerd[1496]: time="2025-02-13T19:34:32.953797628Z" level=info msg="StopPodSandbox for \"d04b37691224bce02a3175bdc4b6a52073600a1dfd7a6cd28f010a1638dd3fd2\" returns successfully" Feb 13 19:34:32.954099 containerd[1496]: time="2025-02-13T19:34:32.954064869Z" level=info msg="StopPodSandbox for \"9d124b1bdd0bfef2b16af274e9fb7d2140f25bc2eecd4f24e9f4718f75274b30\"" Feb 13 19:34:32.954201 containerd[1496]: time="2025-02-13T19:34:32.954166119Z" level=info msg="TearDown network for sandbox \"9d124b1bdd0bfef2b16af274e9fb7d2140f25bc2eecd4f24e9f4718f75274b30\" successfully" Feb 13 19:34:32.954201 containerd[1496]: time="2025-02-13T19:34:32.954185976Z" level=info msg="StopPodSandbox for 
\"9d124b1bdd0bfef2b16af274e9fb7d2140f25bc2eecd4f24e9f4718f75274b30\" returns successfully" Feb 13 19:34:32.955102 systemd[1]: run-netns-cni\x2d6d4ef68c\x2d195f\x2d2c57\x2d63b6\x2de8c5940d8ead.mount: Deactivated successfully. Feb 13 19:34:32.956401 containerd[1496]: time="2025-02-13T19:34:32.956312344Z" level=info msg="StopPodSandbox for \"6487c38d3d737762e509c2c676bed16add681d6c50a9ba9df1822a12e95fe016\"" Feb 13 19:34:32.956510 containerd[1496]: time="2025-02-13T19:34:32.956419565Z" level=info msg="TearDown network for sandbox \"6487c38d3d737762e509c2c676bed16add681d6c50a9ba9df1822a12e95fe016\" successfully" Feb 13 19:34:32.956510 containerd[1496]: time="2025-02-13T19:34:32.956435364Z" level=info msg="StopPodSandbox for \"6487c38d3d737762e509c2c676bed16add681d6c50a9ba9df1822a12e95fe016\" returns successfully" Feb 13 19:34:32.956852 containerd[1496]: time="2025-02-13T19:34:32.956814936Z" level=info msg="StopPodSandbox for \"4f93ae63a51e7d7b90f83837afcb9cc5720c546a09668848c235c377c73a835d\"" Feb 13 19:34:32.956939 containerd[1496]: time="2025-02-13T19:34:32.956922448Z" level=info msg="TearDown network for sandbox \"4f93ae63a51e7d7b90f83837afcb9cc5720c546a09668848c235c377c73a835d\" successfully" Feb 13 19:34:32.956983 containerd[1496]: time="2025-02-13T19:34:32.956938338Z" level=info msg="StopPodSandbox for \"4f93ae63a51e7d7b90f83837afcb9cc5720c546a09668848c235c377c73a835d\" returns successfully" Feb 13 19:34:32.957309 containerd[1496]: time="2025-02-13T19:34:32.957222140Z" level=info msg="StopPodSandbox for \"dca75b6439788db7d359786128b7554681c0a80286036cfa38bf74493e3a86e5\"" Feb 13 19:34:32.957361 containerd[1496]: time="2025-02-13T19:34:32.957339760Z" level=info msg="TearDown network for sandbox \"dca75b6439788db7d359786128b7554681c0a80286036cfa38bf74493e3a86e5\" successfully" Feb 13 19:34:32.957361 containerd[1496]: time="2025-02-13T19:34:32.957356121Z" level=info msg="StopPodSandbox for \"dca75b6439788db7d359786128b7554681c0a80286036cfa38bf74493e3a86e5\" returns 
successfully" Feb 13 19:34:32.957722 containerd[1496]: time="2025-02-13T19:34:32.957594929Z" level=info msg="StopPodSandbox for \"40ce0d3f93ac9bc36831d03be049f80ec3b3ce5288313163d0a969a5767140ca\"" Feb 13 19:34:32.957722 containerd[1496]: time="2025-02-13T19:34:32.957674328Z" level=info msg="TearDown network for sandbox \"40ce0d3f93ac9bc36831d03be049f80ec3b3ce5288313163d0a969a5767140ca\" successfully" Feb 13 19:34:32.957722 containerd[1496]: time="2025-02-13T19:34:32.957683956Z" level=info msg="StopPodSandbox for \"40ce0d3f93ac9bc36831d03be049f80ec3b3ce5288313163d0a969a5767140ca\" returns successfully" Feb 13 19:34:32.957876 kubelet[1803]: I0213 19:34:32.957851 1803 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="67e1b1967dd532774ef6b7e02b6a4358c36eafbba3613f2cd9b564cfd0d26d26" Feb 13 19:34:32.958553 containerd[1496]: time="2025-02-13T19:34:32.958325098Z" level=info msg="StopPodSandbox for \"67e1b1967dd532774ef6b7e02b6a4358c36eafbba3613f2cd9b564cfd0d26d26\"" Feb 13 19:34:32.958553 containerd[1496]: time="2025-02-13T19:34:32.958368730Z" level=info msg="StopPodSandbox for \"a13248a01d5d3c9fc80f814e6f9f5286b4cc830a32ecc6dcc90894db7cbb4737\"" Feb 13 19:34:32.958553 containerd[1496]: time="2025-02-13T19:34:32.958486090Z" level=info msg="TearDown network for sandbox \"a13248a01d5d3c9fc80f814e6f9f5286b4cc830a32ecc6dcc90894db7cbb4737\" successfully" Feb 13 19:34:32.958553 containerd[1496]: time="2025-02-13T19:34:32.958495357Z" level=info msg="StopPodSandbox for \"a13248a01d5d3c9fc80f814e6f9f5286b4cc830a32ecc6dcc90894db7cbb4737\" returns successfully" Feb 13 19:34:32.958553 containerd[1496]: time="2025-02-13T19:34:32.958525634Z" level=info msg="Ensure that sandbox 67e1b1967dd532774ef6b7e02b6a4358c36eafbba3613f2cd9b564cfd0d26d26 in task-service has been cleanup successfully" Feb 13 19:34:32.958906 containerd[1496]: time="2025-02-13T19:34:32.958784790Z" level=info msg="TearDown network for sandbox 
\"67e1b1967dd532774ef6b7e02b6a4358c36eafbba3613f2cd9b564cfd0d26d26\" successfully" Feb 13 19:34:32.958906 containerd[1496]: time="2025-02-13T19:34:32.958804106Z" level=info msg="StopPodSandbox for \"67e1b1967dd532774ef6b7e02b6a4358c36eafbba3613f2cd9b564cfd0d26d26\" returns successfully" Feb 13 19:34:32.959184 containerd[1496]: time="2025-02-13T19:34:32.959164572Z" level=info msg="StopPodSandbox for \"a033c0a45f6d0e7992542d92dec7c1201f2a3d238f279c6eaa54944f24b8445d\"" Feb 13 19:34:32.959279 containerd[1496]: time="2025-02-13T19:34:32.959247979Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-6zx89,Uid:eabcb63e-3a6f-48e9-b953-604013f3f97d,Namespace:calico-system,Attempt:7,}" Feb 13 19:34:32.959368 containerd[1496]: time="2025-02-13T19:34:32.959352425Z" level=info msg="TearDown network for sandbox \"a033c0a45f6d0e7992542d92dec7c1201f2a3d238f279c6eaa54944f24b8445d\" successfully" Feb 13 19:34:32.959420 containerd[1496]: time="2025-02-13T19:34:32.959407588Z" level=info msg="StopPodSandbox for \"a033c0a45f6d0e7992542d92dec7c1201f2a3d238f279c6eaa54944f24b8445d\" returns successfully" Feb 13 19:34:32.959885 containerd[1496]: time="2025-02-13T19:34:32.959865607Z" level=info msg="StopPodSandbox for \"593e93a7c93e97fa8c6ff251c129baa378619bf760122565567f6854c50fbb3c\"" Feb 13 19:34:32.960062 containerd[1496]: time="2025-02-13T19:34:32.960045254Z" level=info msg="TearDown network for sandbox \"593e93a7c93e97fa8c6ff251c129baa378619bf760122565567f6854c50fbb3c\" successfully" Feb 13 19:34:32.960161 containerd[1496]: time="2025-02-13T19:34:32.960144240Z" level=info msg="StopPodSandbox for \"593e93a7c93e97fa8c6ff251c129baa378619bf760122565567f6854c50fbb3c\" returns successfully" Feb 13 19:34:32.960595 containerd[1496]: time="2025-02-13T19:34:32.960563596Z" level=info msg="StopPodSandbox for \"61c8eb8e8d761ba2b46d79add005137b6fb7a47d81065b94b4c7a70143810d6e\"" Feb 13 19:34:32.960680 containerd[1496]: time="2025-02-13T19:34:32.960647323Z" level=info 
msg="TearDown network for sandbox \"61c8eb8e8d761ba2b46d79add005137b6fb7a47d81065b94b4c7a70143810d6e\" successfully" Feb 13 19:34:32.960680 containerd[1496]: time="2025-02-13T19:34:32.960656220Z" level=info msg="StopPodSandbox for \"61c8eb8e8d761ba2b46d79add005137b6fb7a47d81065b94b4c7a70143810d6e\" returns successfully" Feb 13 19:34:32.960907 containerd[1496]: time="2025-02-13T19:34:32.960887734Z" level=info msg="StopPodSandbox for \"9cd34e05953edec13a93645610cc390f9412c5dfa27e9150f547738284ab4ce0\"" Feb 13 19:34:32.961021 containerd[1496]: time="2025-02-13T19:34:32.961005525Z" level=info msg="TearDown network for sandbox \"9cd34e05953edec13a93645610cc390f9412c5dfa27e9150f547738284ab4ce0\" successfully" Feb 13 19:34:32.961021 containerd[1496]: time="2025-02-13T19:34:32.961019381Z" level=info msg="StopPodSandbox for \"9cd34e05953edec13a93645610cc390f9412c5dfa27e9150f547738284ab4ce0\" returns successfully" Feb 13 19:34:32.961229 containerd[1496]: time="2025-02-13T19:34:32.961209357Z" level=info msg="StopPodSandbox for \"2d5407477e895d47961fc614874df08a11952f1b5b3699908fcf06831a5ee510\"" Feb 13 19:34:32.961297 containerd[1496]: time="2025-02-13T19:34:32.961281462Z" level=info msg="TearDown network for sandbox \"2d5407477e895d47961fc614874df08a11952f1b5b3699908fcf06831a5ee510\" successfully" Feb 13 19:34:32.961297 containerd[1496]: time="2025-02-13T19:34:32.961293204Z" level=info msg="StopPodSandbox for \"2d5407477e895d47961fc614874df08a11952f1b5b3699908fcf06831a5ee510\" returns successfully" Feb 13 19:34:32.961637 containerd[1496]: time="2025-02-13T19:34:32.961615028Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-7fcdb87857-vjtzw,Uid:f53fadde-9cb7-4b42-88db-555315723510,Namespace:default,Attempt:6,}" Feb 13 19:34:32.963334 systemd[1]: run-netns-cni\x2d32276b16\x2dc8be\x2dae7d\x2d838b\x2d125fc4a90f0f.mount: Deactivated successfully. 
Feb 13 19:34:33.002086 kubelet[1803]: I0213 19:34:33.002000 1803 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-h9nw2" podStartSLOduration=4.06608158 podStartE2EDuration="25.001952601s" podCreationTimestamp="2025-02-13 19:34:08 +0000 UTC" firstStartedPulling="2025-02-13 19:34:10.944304942 +0000 UTC m=+3.577794894" lastFinishedPulling="2025-02-13 19:34:31.880175964 +0000 UTC m=+24.513665915" observedRunningTime="2025-02-13 19:34:33.001648129 +0000 UTC m=+25.635138090" watchObservedRunningTime="2025-02-13 19:34:33.001952601 +0000 UTC m=+25.635442552" Feb 13 19:34:33.129828 systemd-networkd[1440]: calib7e745a69e3: Link UP Feb 13 19:34:33.130251 systemd-networkd[1440]: calib7e745a69e3: Gained carrier Feb 13 19:34:33.143325 containerd[1496]: 2025-02-13 19:34:33.048 [INFO][2955] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Feb 13 19:34:33.143325 containerd[1496]: 2025-02-13 19:34:33.060 [INFO][2955] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {10.0.0.147-k8s-csi--node--driver--6zx89-eth0 csi-node-driver- calico-system eabcb63e-3a6f-48e9-b953-604013f3f97d 818 0 2025-02-13 19:34:08 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:84cddb44f k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s 10.0.0.147 csi-node-driver-6zx89 eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] calib7e745a69e3 [] []}} ContainerID="211f6c0ab93a34cb2d5d04ce0a509c97daae58ad7fea1ea8abbd5edad0777722" Namespace="calico-system" Pod="csi-node-driver-6zx89" WorkloadEndpoint="10.0.0.147-k8s-csi--node--driver--6zx89-" Feb 13 19:34:33.143325 containerd[1496]: 2025-02-13 19:34:33.060 [INFO][2955] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s 
ContainerID="211f6c0ab93a34cb2d5d04ce0a509c97daae58ad7fea1ea8abbd5edad0777722" Namespace="calico-system" Pod="csi-node-driver-6zx89" WorkloadEndpoint="10.0.0.147-k8s-csi--node--driver--6zx89-eth0" Feb 13 19:34:33.143325 containerd[1496]: 2025-02-13 19:34:33.086 [INFO][2983] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="211f6c0ab93a34cb2d5d04ce0a509c97daae58ad7fea1ea8abbd5edad0777722" HandleID="k8s-pod-network.211f6c0ab93a34cb2d5d04ce0a509c97daae58ad7fea1ea8abbd5edad0777722" Workload="10.0.0.147-k8s-csi--node--driver--6zx89-eth0" Feb 13 19:34:33.143325 containerd[1496]: 2025-02-13 19:34:33.095 [INFO][2983] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="211f6c0ab93a34cb2d5d04ce0a509c97daae58ad7fea1ea8abbd5edad0777722" HandleID="k8s-pod-network.211f6c0ab93a34cb2d5d04ce0a509c97daae58ad7fea1ea8abbd5edad0777722" Workload="10.0.0.147-k8s-csi--node--driver--6zx89-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002ddd10), Attrs:map[string]string{"namespace":"calico-system", "node":"10.0.0.147", "pod":"csi-node-driver-6zx89", "timestamp":"2025-02-13 19:34:33.086527058 +0000 UTC"}, Hostname:"10.0.0.147", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Feb 13 19:34:33.143325 containerd[1496]: 2025-02-13 19:34:33.095 [INFO][2983] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Feb 13 19:34:33.143325 containerd[1496]: 2025-02-13 19:34:33.095 [INFO][2983] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Feb 13 19:34:33.143325 containerd[1496]: 2025-02-13 19:34:33.096 [INFO][2983] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host '10.0.0.147' Feb 13 19:34:33.143325 containerd[1496]: 2025-02-13 19:34:33.097 [INFO][2983] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.211f6c0ab93a34cb2d5d04ce0a509c97daae58ad7fea1ea8abbd5edad0777722" host="10.0.0.147" Feb 13 19:34:33.143325 containerd[1496]: 2025-02-13 19:34:33.101 [INFO][2983] ipam/ipam.go 372: Looking up existing affinities for host host="10.0.0.147" Feb 13 19:34:33.143325 containerd[1496]: 2025-02-13 19:34:33.105 [INFO][2983] ipam/ipam.go 489: Trying affinity for 192.168.124.64/26 host="10.0.0.147" Feb 13 19:34:33.143325 containerd[1496]: 2025-02-13 19:34:33.106 [INFO][2983] ipam/ipam.go 155: Attempting to load block cidr=192.168.124.64/26 host="10.0.0.147" Feb 13 19:34:33.143325 containerd[1496]: 2025-02-13 19:34:33.108 [INFO][2983] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.124.64/26 host="10.0.0.147" Feb 13 19:34:33.143325 containerd[1496]: 2025-02-13 19:34:33.108 [INFO][2983] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.124.64/26 handle="k8s-pod-network.211f6c0ab93a34cb2d5d04ce0a509c97daae58ad7fea1ea8abbd5edad0777722" host="10.0.0.147" Feb 13 19:34:33.143325 containerd[1496]: 2025-02-13 19:34:33.109 [INFO][2983] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.211f6c0ab93a34cb2d5d04ce0a509c97daae58ad7fea1ea8abbd5edad0777722 Feb 13 19:34:33.143325 containerd[1496]: 2025-02-13 19:34:33.114 [INFO][2983] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.124.64/26 handle="k8s-pod-network.211f6c0ab93a34cb2d5d04ce0a509c97daae58ad7fea1ea8abbd5edad0777722" host="10.0.0.147" Feb 13 19:34:33.143325 containerd[1496]: 2025-02-13 19:34:33.118 [INFO][2983] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.124.65/26] block=192.168.124.64/26 
handle="k8s-pod-network.211f6c0ab93a34cb2d5d04ce0a509c97daae58ad7fea1ea8abbd5edad0777722" host="10.0.0.147" Feb 13 19:34:33.143325 containerd[1496]: 2025-02-13 19:34:33.118 [INFO][2983] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.124.65/26] handle="k8s-pod-network.211f6c0ab93a34cb2d5d04ce0a509c97daae58ad7fea1ea8abbd5edad0777722" host="10.0.0.147" Feb 13 19:34:33.143325 containerd[1496]: 2025-02-13 19:34:33.118 [INFO][2983] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Feb 13 19:34:33.143325 containerd[1496]: 2025-02-13 19:34:33.118 [INFO][2983] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.124.65/26] IPv6=[] ContainerID="211f6c0ab93a34cb2d5d04ce0a509c97daae58ad7fea1ea8abbd5edad0777722" HandleID="k8s-pod-network.211f6c0ab93a34cb2d5d04ce0a509c97daae58ad7fea1ea8abbd5edad0777722" Workload="10.0.0.147-k8s-csi--node--driver--6zx89-eth0" Feb 13 19:34:33.143895 containerd[1496]: 2025-02-13 19:34:33.122 [INFO][2955] cni-plugin/k8s.go 386: Populated endpoint ContainerID="211f6c0ab93a34cb2d5d04ce0a509c97daae58ad7fea1ea8abbd5edad0777722" Namespace="calico-system" Pod="csi-node-driver-6zx89" WorkloadEndpoint="10.0.0.147-k8s-csi--node--driver--6zx89-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"10.0.0.147-k8s-csi--node--driver--6zx89-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"eabcb63e-3a6f-48e9-b953-604013f3f97d", ResourceVersion:"818", Generation:0, CreationTimestamp:time.Date(2025, time.February, 13, 19, 34, 8, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"84cddb44f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", 
"projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"10.0.0.147", ContainerID:"", Pod:"csi-node-driver-6zx89", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.124.65/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"calib7e745a69e3", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Feb 13 19:34:33.143895 containerd[1496]: 2025-02-13 19:34:33.122 [INFO][2955] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.124.65/32] ContainerID="211f6c0ab93a34cb2d5d04ce0a509c97daae58ad7fea1ea8abbd5edad0777722" Namespace="calico-system" Pod="csi-node-driver-6zx89" WorkloadEndpoint="10.0.0.147-k8s-csi--node--driver--6zx89-eth0" Feb 13 19:34:33.143895 containerd[1496]: 2025-02-13 19:34:33.122 [INFO][2955] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calib7e745a69e3 ContainerID="211f6c0ab93a34cb2d5d04ce0a509c97daae58ad7fea1ea8abbd5edad0777722" Namespace="calico-system" Pod="csi-node-driver-6zx89" WorkloadEndpoint="10.0.0.147-k8s-csi--node--driver--6zx89-eth0" Feb 13 19:34:33.143895 containerd[1496]: 2025-02-13 19:34:33.131 [INFO][2955] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="211f6c0ab93a34cb2d5d04ce0a509c97daae58ad7fea1ea8abbd5edad0777722" Namespace="calico-system" Pod="csi-node-driver-6zx89" WorkloadEndpoint="10.0.0.147-k8s-csi--node--driver--6zx89-eth0" Feb 13 19:34:33.143895 containerd[1496]: 2025-02-13 19:34:33.131 [INFO][2955] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="211f6c0ab93a34cb2d5d04ce0a509c97daae58ad7fea1ea8abbd5edad0777722" Namespace="calico-system" 
Pod="csi-node-driver-6zx89" WorkloadEndpoint="10.0.0.147-k8s-csi--node--driver--6zx89-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"10.0.0.147-k8s-csi--node--driver--6zx89-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"eabcb63e-3a6f-48e9-b953-604013f3f97d", ResourceVersion:"818", Generation:0, CreationTimestamp:time.Date(2025, time.February, 13, 19, 34, 8, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"84cddb44f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"10.0.0.147", ContainerID:"211f6c0ab93a34cb2d5d04ce0a509c97daae58ad7fea1ea8abbd5edad0777722", Pod:"csi-node-driver-6zx89", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.124.65/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"calib7e745a69e3", MAC:"6a:2b:28:b9:74:25", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Feb 13 19:34:33.143895 containerd[1496]: 2025-02-13 19:34:33.141 [INFO][2955] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="211f6c0ab93a34cb2d5d04ce0a509c97daae58ad7fea1ea8abbd5edad0777722" Namespace="calico-system" Pod="csi-node-driver-6zx89" WorkloadEndpoint="10.0.0.147-k8s-csi--node--driver--6zx89-eth0" Feb 13 19:34:33.167915 containerd[1496]: 
time="2025-02-13T19:34:33.167760396Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 13 19:34:33.167915 containerd[1496]: time="2025-02-13T19:34:33.167865016Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 13 19:34:33.168133 containerd[1496]: time="2025-02-13T19:34:33.167923016Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 19:34:33.168133 containerd[1496]: time="2025-02-13T19:34:33.168053777Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 19:34:33.198277 systemd[1]: Started cri-containerd-211f6c0ab93a34cb2d5d04ce0a509c97daae58ad7fea1ea8abbd5edad0777722.scope - libcontainer container 211f6c0ab93a34cb2d5d04ce0a509c97daae58ad7fea1ea8abbd5edad0777722. 
Feb 13 19:34:33.211851 systemd-resolved[1336]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Feb 13 19:34:33.223955 containerd[1496]: time="2025-02-13T19:34:33.223839339Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-6zx89,Uid:eabcb63e-3a6f-48e9-b953-604013f3f97d,Namespace:calico-system,Attempt:7,} returns sandbox id \"211f6c0ab93a34cb2d5d04ce0a509c97daae58ad7fea1ea8abbd5edad0777722\"" Feb 13 19:34:33.225610 containerd[1496]: time="2025-02-13T19:34:33.225571180Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.29.1\"" Feb 13 19:34:33.234157 systemd-networkd[1440]: caliea97708010e: Link UP Feb 13 19:34:33.234669 systemd-networkd[1440]: caliea97708010e: Gained carrier Feb 13 19:34:33.244720 containerd[1496]: 2025-02-13 19:34:33.050 [INFO][2965] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Feb 13 19:34:33.244720 containerd[1496]: 2025-02-13 19:34:33.060 [INFO][2965] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {10.0.0.147-k8s-nginx--deployment--7fcdb87857--vjtzw-eth0 nginx-deployment-7fcdb87857- default f53fadde-9cb7-4b42-88db-555315723510 1095 0 2025-02-13 19:34:25 +0000 UTC map[app:nginx pod-template-hash:7fcdb87857 projectcalico.org/namespace:default projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:default] map[] [] [] []} {k8s 10.0.0.147 nginx-deployment-7fcdb87857-vjtzw eth0 default [] [] [kns.default ksa.default.default] caliea97708010e [] []}} ContainerID="200b68692f5a922fcc5cb9a797b4f71a75f3ed741fb95770db1eb35ec57e2d43" Namespace="default" Pod="nginx-deployment-7fcdb87857-vjtzw" WorkloadEndpoint="10.0.0.147-k8s-nginx--deployment--7fcdb87857--vjtzw-" Feb 13 19:34:33.244720 containerd[1496]: 2025-02-13 19:34:33.060 [INFO][2965] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="200b68692f5a922fcc5cb9a797b4f71a75f3ed741fb95770db1eb35ec57e2d43" 
Namespace="default" Pod="nginx-deployment-7fcdb87857-vjtzw" WorkloadEndpoint="10.0.0.147-k8s-nginx--deployment--7fcdb87857--vjtzw-eth0" Feb 13 19:34:33.244720 containerd[1496]: 2025-02-13 19:34:33.090 [INFO][2982] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="200b68692f5a922fcc5cb9a797b4f71a75f3ed741fb95770db1eb35ec57e2d43" HandleID="k8s-pod-network.200b68692f5a922fcc5cb9a797b4f71a75f3ed741fb95770db1eb35ec57e2d43" Workload="10.0.0.147-k8s-nginx--deployment--7fcdb87857--vjtzw-eth0" Feb 13 19:34:33.244720 containerd[1496]: 2025-02-13 19:34:33.097 [INFO][2982] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="200b68692f5a922fcc5cb9a797b4f71a75f3ed741fb95770db1eb35ec57e2d43" HandleID="k8s-pod-network.200b68692f5a922fcc5cb9a797b4f71a75f3ed741fb95770db1eb35ec57e2d43" Workload="10.0.0.147-k8s-nginx--deployment--7fcdb87857--vjtzw-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002df2d0), Attrs:map[string]string{"namespace":"default", "node":"10.0.0.147", "pod":"nginx-deployment-7fcdb87857-vjtzw", "timestamp":"2025-02-13 19:34:33.090721419 +0000 UTC"}, Hostname:"10.0.0.147", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Feb 13 19:34:33.244720 containerd[1496]: 2025-02-13 19:34:33.097 [INFO][2982] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Feb 13 19:34:33.244720 containerd[1496]: 2025-02-13 19:34:33.118 [INFO][2982] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Feb 13 19:34:33.244720 containerd[1496]: 2025-02-13 19:34:33.118 [INFO][2982] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host '10.0.0.147' Feb 13 19:34:33.244720 containerd[1496]: 2025-02-13 19:34:33.199 [INFO][2982] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.200b68692f5a922fcc5cb9a797b4f71a75f3ed741fb95770db1eb35ec57e2d43" host="10.0.0.147" Feb 13 19:34:33.244720 containerd[1496]: 2025-02-13 19:34:33.205 [INFO][2982] ipam/ipam.go 372: Looking up existing affinities for host host="10.0.0.147" Feb 13 19:34:33.244720 containerd[1496]: 2025-02-13 19:34:33.210 [INFO][2982] ipam/ipam.go 489: Trying affinity for 192.168.124.64/26 host="10.0.0.147" Feb 13 19:34:33.244720 containerd[1496]: 2025-02-13 19:34:33.212 [INFO][2982] ipam/ipam.go 155: Attempting to load block cidr=192.168.124.64/26 host="10.0.0.147" Feb 13 19:34:33.244720 containerd[1496]: 2025-02-13 19:34:33.215 [INFO][2982] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.124.64/26 host="10.0.0.147" Feb 13 19:34:33.244720 containerd[1496]: 2025-02-13 19:34:33.215 [INFO][2982] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.124.64/26 handle="k8s-pod-network.200b68692f5a922fcc5cb9a797b4f71a75f3ed741fb95770db1eb35ec57e2d43" host="10.0.0.147" Feb 13 19:34:33.244720 containerd[1496]: 2025-02-13 19:34:33.217 [INFO][2982] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.200b68692f5a922fcc5cb9a797b4f71a75f3ed741fb95770db1eb35ec57e2d43 Feb 13 19:34:33.244720 containerd[1496]: 2025-02-13 19:34:33.223 [INFO][2982] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.124.64/26 handle="k8s-pod-network.200b68692f5a922fcc5cb9a797b4f71a75f3ed741fb95770db1eb35ec57e2d43" host="10.0.0.147" Feb 13 19:34:33.244720 containerd[1496]: 2025-02-13 19:34:33.229 [INFO][2982] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.124.66/26] block=192.168.124.64/26 
handle="k8s-pod-network.200b68692f5a922fcc5cb9a797b4f71a75f3ed741fb95770db1eb35ec57e2d43" host="10.0.0.147" Feb 13 19:34:33.244720 containerd[1496]: 2025-02-13 19:34:33.229 [INFO][2982] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.124.66/26] handle="k8s-pod-network.200b68692f5a922fcc5cb9a797b4f71a75f3ed741fb95770db1eb35ec57e2d43" host="10.0.0.147" Feb 13 19:34:33.244720 containerd[1496]: 2025-02-13 19:34:33.229 [INFO][2982] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Feb 13 19:34:33.244720 containerd[1496]: 2025-02-13 19:34:33.229 [INFO][2982] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.124.66/26] IPv6=[] ContainerID="200b68692f5a922fcc5cb9a797b4f71a75f3ed741fb95770db1eb35ec57e2d43" HandleID="k8s-pod-network.200b68692f5a922fcc5cb9a797b4f71a75f3ed741fb95770db1eb35ec57e2d43" Workload="10.0.0.147-k8s-nginx--deployment--7fcdb87857--vjtzw-eth0" Feb 13 19:34:33.245255 containerd[1496]: 2025-02-13 19:34:33.232 [INFO][2965] cni-plugin/k8s.go 386: Populated endpoint ContainerID="200b68692f5a922fcc5cb9a797b4f71a75f3ed741fb95770db1eb35ec57e2d43" Namespace="default" Pod="nginx-deployment-7fcdb87857-vjtzw" WorkloadEndpoint="10.0.0.147-k8s-nginx--deployment--7fcdb87857--vjtzw-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"10.0.0.147-k8s-nginx--deployment--7fcdb87857--vjtzw-eth0", GenerateName:"nginx-deployment-7fcdb87857-", Namespace:"default", SelfLink:"", UID:"f53fadde-9cb7-4b42-88db-555315723510", ResourceVersion:"1095", Generation:0, CreationTimestamp:time.Date(2025, time.February, 13, 19, 34, 25, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app":"nginx", "pod-template-hash":"7fcdb87857", "projectcalico.org/namespace":"default", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"default"}, Annotations:map[string]string(nil), 
OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"10.0.0.147", ContainerID:"", Pod:"nginx-deployment-7fcdb87857-vjtzw", Endpoint:"eth0", ServiceAccountName:"default", IPNetworks:[]string{"192.168.124.66/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.default", "ksa.default.default"}, InterfaceName:"caliea97708010e", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Feb 13 19:34:33.245255 containerd[1496]: 2025-02-13 19:34:33.232 [INFO][2965] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.124.66/32] ContainerID="200b68692f5a922fcc5cb9a797b4f71a75f3ed741fb95770db1eb35ec57e2d43" Namespace="default" Pod="nginx-deployment-7fcdb87857-vjtzw" WorkloadEndpoint="10.0.0.147-k8s-nginx--deployment--7fcdb87857--vjtzw-eth0" Feb 13 19:34:33.245255 containerd[1496]: 2025-02-13 19:34:33.232 [INFO][2965] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to caliea97708010e ContainerID="200b68692f5a922fcc5cb9a797b4f71a75f3ed741fb95770db1eb35ec57e2d43" Namespace="default" Pod="nginx-deployment-7fcdb87857-vjtzw" WorkloadEndpoint="10.0.0.147-k8s-nginx--deployment--7fcdb87857--vjtzw-eth0" Feb 13 19:34:33.245255 containerd[1496]: 2025-02-13 19:34:33.234 [INFO][2965] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="200b68692f5a922fcc5cb9a797b4f71a75f3ed741fb95770db1eb35ec57e2d43" Namespace="default" Pod="nginx-deployment-7fcdb87857-vjtzw" WorkloadEndpoint="10.0.0.147-k8s-nginx--deployment--7fcdb87857--vjtzw-eth0" Feb 13 19:34:33.245255 containerd[1496]: 2025-02-13 19:34:33.235 [INFO][2965] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="200b68692f5a922fcc5cb9a797b4f71a75f3ed741fb95770db1eb35ec57e2d43" Namespace="default" Pod="nginx-deployment-7fcdb87857-vjtzw" 
WorkloadEndpoint="10.0.0.147-k8s-nginx--deployment--7fcdb87857--vjtzw-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"10.0.0.147-k8s-nginx--deployment--7fcdb87857--vjtzw-eth0", GenerateName:"nginx-deployment-7fcdb87857-", Namespace:"default", SelfLink:"", UID:"f53fadde-9cb7-4b42-88db-555315723510", ResourceVersion:"1095", Generation:0, CreationTimestamp:time.Date(2025, time.February, 13, 19, 34, 25, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app":"nginx", "pod-template-hash":"7fcdb87857", "projectcalico.org/namespace":"default", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"default"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"10.0.0.147", ContainerID:"200b68692f5a922fcc5cb9a797b4f71a75f3ed741fb95770db1eb35ec57e2d43", Pod:"nginx-deployment-7fcdb87857-vjtzw", Endpoint:"eth0", ServiceAccountName:"default", IPNetworks:[]string{"192.168.124.66/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.default", "ksa.default.default"}, InterfaceName:"caliea97708010e", MAC:"62:15:76:c8:21:7e", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Feb 13 19:34:33.245255 containerd[1496]: 2025-02-13 19:34:33.242 [INFO][2965] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="200b68692f5a922fcc5cb9a797b4f71a75f3ed741fb95770db1eb35ec57e2d43" Namespace="default" Pod="nginx-deployment-7fcdb87857-vjtzw" WorkloadEndpoint="10.0.0.147-k8s-nginx--deployment--7fcdb87857--vjtzw-eth0" Feb 13 19:34:33.268810 containerd[1496]: time="2025-02-13T19:34:33.268676874Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 13 19:34:33.268810 containerd[1496]: time="2025-02-13T19:34:33.268757398Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 13 19:34:33.268810 containerd[1496]: time="2025-02-13T19:34:33.268772748Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 19:34:33.269094 containerd[1496]: time="2025-02-13T19:34:33.268869552Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 19:34:33.297127 systemd[1]: Started cri-containerd-200b68692f5a922fcc5cb9a797b4f71a75f3ed741fb95770db1eb35ec57e2d43.scope - libcontainer container 200b68692f5a922fcc5cb9a797b4f71a75f3ed741fb95770db1eb35ec57e2d43. Feb 13 19:34:33.310606 systemd-resolved[1336]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Feb 13 19:34:33.335120 containerd[1496]: time="2025-02-13T19:34:33.335071684Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-7fcdb87857-vjtzw,Uid:f53fadde-9cb7-4b42-88db-555315723510,Namespace:default,Attempt:6,} returns sandbox id \"200b68692f5a922fcc5cb9a797b4f71a75f3ed741fb95770db1eb35ec57e2d43\"" Feb 13 19:34:33.672019 kernel: bpftool[3228]: memfd_create() called without MFD_EXEC or MFD_NOEXEC_SEAL set Feb 13 19:34:33.778516 kubelet[1803]: E0213 19:34:33.778407 1803 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 19:34:33.967605 kubelet[1803]: E0213 19:34:33.966133 1803 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:34:33.995694 systemd-networkd[1440]: vxlan.calico: Link UP Feb 13 19:34:33.995714 
systemd-networkd[1440]: vxlan.calico: Gained carrier Feb 13 19:34:34.779160 kubelet[1803]: E0213 19:34:34.779053 1803 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 19:34:34.903272 systemd-networkd[1440]: calib7e745a69e3: Gained IPv6LL Feb 13 19:34:35.159712 systemd-networkd[1440]: caliea97708010e: Gained IPv6LL Feb 13 19:34:35.543264 systemd-networkd[1440]: vxlan.calico: Gained IPv6LL Feb 13 19:34:35.572487 containerd[1496]: time="2025-02-13T19:34:35.572397955Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:34:35.586407 containerd[1496]: time="2025-02-13T19:34:35.586279088Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.29.1: active requests=0, bytes read=7902632" Feb 13 19:34:35.589109 containerd[1496]: time="2025-02-13T19:34:35.589022613Z" level=info msg="ImageCreate event name:\"sha256:bda8c42e04758c4f061339e213f50ccdc7502c4176fbf631aa12357e62b63540\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:34:35.595224 containerd[1496]: time="2025-02-13T19:34:35.595156697Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi@sha256:eaa7e01fb16b603c155a67b81f16992281db7f831684c7b2081d3434587a7ff3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:34:35.596080 containerd[1496]: time="2025-02-13T19:34:35.596034711Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/csi:v3.29.1\" with image id \"sha256:bda8c42e04758c4f061339e213f50ccdc7502c4176fbf631aa12357e62b63540\", repo tag \"ghcr.io/flatcar/calico/csi:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/csi@sha256:eaa7e01fb16b603c155a67b81f16992281db7f831684c7b2081d3434587a7ff3\", size \"9395716\" in 2.370413616s" Feb 13 19:34:35.596147 containerd[1496]: time="2025-02-13T19:34:35.596080750Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.29.1\" returns 
image reference \"sha256:bda8c42e04758c4f061339e213f50ccdc7502c4176fbf631aa12357e62b63540\"" Feb 13 19:34:35.597324 containerd[1496]: time="2025-02-13T19:34:35.597278154Z" level=info msg="PullImage \"ghcr.io/flatcar/nginx:latest\"" Feb 13 19:34:35.598599 containerd[1496]: time="2025-02-13T19:34:35.598564719Z" level=info msg="CreateContainer within sandbox \"211f6c0ab93a34cb2d5d04ce0a509c97daae58ad7fea1ea8abbd5edad0777722\" for container &ContainerMetadata{Name:calico-csi,Attempt:0,}" Feb 13 19:34:35.653675 containerd[1496]: time="2025-02-13T19:34:35.653577185Z" level=info msg="CreateContainer within sandbox \"211f6c0ab93a34cb2d5d04ce0a509c97daae58ad7fea1ea8abbd5edad0777722\" for &ContainerMetadata{Name:calico-csi,Attempt:0,} returns container id \"8b7da561f93023909422f6843930b04c42dc429c89a36fb23e81ab0cfa667ad9\"" Feb 13 19:34:35.654378 containerd[1496]: time="2025-02-13T19:34:35.654327878Z" level=info msg="StartContainer for \"8b7da561f93023909422f6843930b04c42dc429c89a36fb23e81ab0cfa667ad9\"" Feb 13 19:34:35.699261 systemd[1]: Started cri-containerd-8b7da561f93023909422f6843930b04c42dc429c89a36fb23e81ab0cfa667ad9.scope - libcontainer container 8b7da561f93023909422f6843930b04c42dc429c89a36fb23e81ab0cfa667ad9. 
Feb 13 19:34:35.779627 kubelet[1803]: E0213 19:34:35.779519 1803 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 19:34:35.783177 containerd[1496]: time="2025-02-13T19:34:35.783102194Z" level=info msg="StartContainer for \"8b7da561f93023909422f6843930b04c42dc429c89a36fb23e81ab0cfa667ad9\" returns successfully" Feb 13 19:34:36.780040 kubelet[1803]: E0213 19:34:36.779940 1803 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 19:34:37.781060 kubelet[1803]: E0213 19:34:37.780994 1803 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 19:34:38.781668 kubelet[1803]: E0213 19:34:38.781600 1803 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 19:34:39.782554 kubelet[1803]: E0213 19:34:39.782485 1803 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 19:34:39.877844 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2061949251.mount: Deactivated successfully. 
Feb 13 19:34:40.783786 kubelet[1803]: E0213 19:34:40.783661 1803 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 19:34:41.784297 kubelet[1803]: E0213 19:34:41.784235 1803 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 19:34:41.925052 containerd[1496]: time="2025-02-13T19:34:41.924960964Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/nginx:latest\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:34:41.925706 containerd[1496]: time="2025-02-13T19:34:41.925674819Z" level=info msg="stop pulling image ghcr.io/flatcar/nginx:latest: active requests=0, bytes read=73054493" Feb 13 19:34:41.926761 containerd[1496]: time="2025-02-13T19:34:41.926728739Z" level=info msg="ImageCreate event name:\"sha256:fe94eb5f0c9c8d0ca277aa8cd5940f1faf5970175bf373932babc578545deda8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:34:41.929485 containerd[1496]: time="2025-02-13T19:34:41.929458909Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/nginx@sha256:d9bc3da999da9f147f1277c7b18292486847e8f39f95fcf81d914d0c22815faf\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:34:41.931186 containerd[1496]: time="2025-02-13T19:34:41.931139498Z" level=info msg="Pulled image \"ghcr.io/flatcar/nginx:latest\" with image id \"sha256:fe94eb5f0c9c8d0ca277aa8cd5940f1faf5970175bf373932babc578545deda8\", repo tag \"ghcr.io/flatcar/nginx:latest\", repo digest \"ghcr.io/flatcar/nginx@sha256:d9bc3da999da9f147f1277c7b18292486847e8f39f95fcf81d914d0c22815faf\", size \"73054371\" in 6.333818692s" Feb 13 19:34:41.931186 containerd[1496]: time="2025-02-13T19:34:41.931169445Z" level=info msg="PullImage \"ghcr.io/flatcar/nginx:latest\" returns image reference \"sha256:fe94eb5f0c9c8d0ca277aa8cd5940f1faf5970175bf373932babc578545deda8\"" Feb 13 19:34:41.932266 containerd[1496]: 
time="2025-02-13T19:34:41.932057740Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\"" Feb 13 19:34:41.933636 containerd[1496]: time="2025-02-13T19:34:41.933584277Z" level=info msg="CreateContainer within sandbox \"200b68692f5a922fcc5cb9a797b4f71a75f3ed741fb95770db1eb35ec57e2d43\" for container &ContainerMetadata{Name:nginx,Attempt:0,}" Feb 13 19:34:41.953024 containerd[1496]: time="2025-02-13T19:34:41.952953198Z" level=info msg="CreateContainer within sandbox \"200b68692f5a922fcc5cb9a797b4f71a75f3ed741fb95770db1eb35ec57e2d43\" for &ContainerMetadata{Name:nginx,Attempt:0,} returns container id \"1bb6e9ba09b3475ca80d941df188531a883a18f41e8471bff7b0828d5c906a4c\"" Feb 13 19:34:41.953779 containerd[1496]: time="2025-02-13T19:34:41.953705295Z" level=info msg="StartContainer for \"1bb6e9ba09b3475ca80d941df188531a883a18f41e8471bff7b0828d5c906a4c\"" Feb 13 19:34:42.036119 systemd[1]: Started cri-containerd-1bb6e9ba09b3475ca80d941df188531a883a18f41e8471bff7b0828d5c906a4c.scope - libcontainer container 1bb6e9ba09b3475ca80d941df188531a883a18f41e8471bff7b0828d5c906a4c. 
Feb 13 19:34:42.499787 containerd[1496]: time="2025-02-13T19:34:42.499712582Z" level=info msg="StartContainer for \"1bb6e9ba09b3475ca80d941df188531a883a18f41e8471bff7b0828d5c906a4c\" returns successfully" Feb 13 19:34:42.785571 kubelet[1803]: E0213 19:34:42.785316 1803 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 19:34:43.007909 kubelet[1803]: I0213 19:34:43.007812 1803 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="default/nginx-deployment-7fcdb87857-vjtzw" podStartSLOduration=9.411933993 podStartE2EDuration="18.007788806s" podCreationTimestamp="2025-02-13 19:34:25 +0000 UTC" firstStartedPulling="2025-02-13 19:34:33.336102046 +0000 UTC m=+25.969591997" lastFinishedPulling="2025-02-13 19:34:41.931956859 +0000 UTC m=+34.565446810" observedRunningTime="2025-02-13 19:34:43.007762155 +0000 UTC m=+35.641252106" watchObservedRunningTime="2025-02-13 19:34:43.007788806 +0000 UTC m=+35.641278757" Feb 13 19:34:43.785925 kubelet[1803]: E0213 19:34:43.785845 1803 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 19:34:44.786931 kubelet[1803]: E0213 19:34:44.786864 1803 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 19:34:45.788045 kubelet[1803]: E0213 19:34:45.787948 1803 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 19:34:46.558787 containerd[1496]: time="2025-02-13T19:34:46.558712463Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:34:46.559877 containerd[1496]: time="2025-02-13T19:34:46.559832551Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1: active requests=0, bytes read=10501081" Feb 13 
19:34:46.561175 containerd[1496]: time="2025-02-13T19:34:46.561138460Z" level=info msg="ImageCreate event name:\"sha256:8b7d18f262d5cf6a6343578ad0db68a140c4c9989d9e02c58c27cb5d2c70320f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:34:46.563891 containerd[1496]: time="2025-02-13T19:34:46.563850761Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar@sha256:a338da9488cbaa83c78457c3d7354d84149969c0480e88dd768e036632ff5b76\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:34:46.564480 containerd[1496]: time="2025-02-13T19:34:46.564452148Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\" with image id \"sha256:8b7d18f262d5cf6a6343578ad0db68a140c4c9989d9e02c58c27cb5d2c70320f\", repo tag \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/node-driver-registrar@sha256:a338da9488cbaa83c78457c3d7354d84149969c0480e88dd768e036632ff5b76\", size \"11994117\" in 4.632366845s" Feb 13 19:34:46.564526 containerd[1496]: time="2025-02-13T19:34:46.564483607Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\" returns image reference \"sha256:8b7d18f262d5cf6a6343578ad0db68a140c4c9989d9e02c58c27cb5d2c70320f\"" Feb 13 19:34:46.566858 containerd[1496]: time="2025-02-13T19:34:46.566813714Z" level=info msg="CreateContainer within sandbox \"211f6c0ab93a34cb2d5d04ce0a509c97daae58ad7fea1ea8abbd5edad0777722\" for container &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,}" Feb 13 19:34:46.618225 containerd[1496]: time="2025-02-13T19:34:46.618119132Z" level=info msg="CreateContainer within sandbox \"211f6c0ab93a34cb2d5d04ce0a509c97daae58ad7fea1ea8abbd5edad0777722\" for &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,} returns container id \"aa0c29df0edb3860d5ee0d290988034927a0d9c78ec0102e5536b09f852d87c4\"" Feb 13 19:34:46.618909 containerd[1496]: 
time="2025-02-13T19:34:46.618846386Z" level=info msg="StartContainer for \"aa0c29df0edb3860d5ee0d290988034927a0d9c78ec0102e5536b09f852d87c4\"" Feb 13 19:34:46.648869 systemd[1]: run-containerd-runc-k8s.io-aa0c29df0edb3860d5ee0d290988034927a0d9c78ec0102e5536b09f852d87c4-runc.RfU2Gu.mount: Deactivated successfully. Feb 13 19:34:46.656239 systemd[1]: Started cri-containerd-aa0c29df0edb3860d5ee0d290988034927a0d9c78ec0102e5536b09f852d87c4.scope - libcontainer container aa0c29df0edb3860d5ee0d290988034927a0d9c78ec0102e5536b09f852d87c4. Feb 13 19:34:46.705480 update_engine[1486]: I20250213 19:34:46.705385 1486 update_attempter.cc:509] Updating boot flags... Feb 13 19:34:46.769660 containerd[1496]: time="2025-02-13T19:34:46.769585897Z" level=info msg="StartContainer for \"aa0c29df0edb3860d5ee0d290988034927a0d9c78ec0102e5536b09f852d87c4\" returns successfully" Feb 13 19:34:46.789171 kubelet[1803]: E0213 19:34:46.788757 1803 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 19:34:46.792049 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 39 scanned by (udev-worker) (3519) Feb 13 19:34:46.843014 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 39 scanned by (udev-worker) (3521) Feb 13 19:34:46.930471 kubelet[1803]: I0213 19:34:46.930428 1803 csi_plugin.go:100] kubernetes.io/csi: Trying to validate a new CSI Driver with name: csi.tigera.io endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock versions: 1.0.0 Feb 13 19:34:46.930471 kubelet[1803]: I0213 19:34:46.930479 1803 csi_plugin.go:113] kubernetes.io/csi: Register new plugin with name: csi.tigera.io at endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock Feb 13 19:34:47.038156 kubelet[1803]: I0213 19:34:47.038063 1803 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/csi-node-driver-6zx89" podStartSLOduration=25.698164226 podStartE2EDuration="39.038038969s" 
podCreationTimestamp="2025-02-13 19:34:08 +0000 UTC" firstStartedPulling="2025-02-13 19:34:33.225307125 +0000 UTC m=+25.858797076" lastFinishedPulling="2025-02-13 19:34:46.565181868 +0000 UTC m=+39.198671819" observedRunningTime="2025-02-13 19:34:47.037712682 +0000 UTC m=+39.671202633" watchObservedRunningTime="2025-02-13 19:34:47.038038969 +0000 UTC m=+39.671528940" Feb 13 19:34:47.758572 kubelet[1803]: E0213 19:34:47.758484 1803 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 19:34:47.789422 kubelet[1803]: E0213 19:34:47.789335 1803 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 19:34:48.789560 kubelet[1803]: E0213 19:34:48.789505 1803 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 19:34:48.804671 systemd[1]: Created slice kubepods-besteffort-poda9f7eab3_3eb1_4576_8c73_c7725488286e.slice - libcontainer container kubepods-besteffort-poda9f7eab3_3eb1_4576_8c73_c7725488286e.slice. 
Feb 13 19:34:48.902556 kubelet[1803]: I0213 19:34:48.902494 1803 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w7sjp\" (UniqueName: \"kubernetes.io/projected/a9f7eab3-3eb1-4576-8c73-c7725488286e-kube-api-access-w7sjp\") pod \"nfs-server-provisioner-0\" (UID: \"a9f7eab3-3eb1-4576-8c73-c7725488286e\") " pod="default/nfs-server-provisioner-0" Feb 13 19:34:48.902556 kubelet[1803]: I0213 19:34:48.902555 1803 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"data\" (UniqueName: \"kubernetes.io/empty-dir/a9f7eab3-3eb1-4576-8c73-c7725488286e-data\") pod \"nfs-server-provisioner-0\" (UID: \"a9f7eab3-3eb1-4576-8c73-c7725488286e\") " pod="default/nfs-server-provisioner-0" Feb 13 19:34:49.108185 containerd[1496]: time="2025-02-13T19:34:49.108043480Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nfs-server-provisioner-0,Uid:a9f7eab3-3eb1-4576-8c73-c7725488286e,Namespace:default,Attempt:0,}" Feb 13 19:34:49.418101 systemd-networkd[1440]: cali60e51b789ff: Link UP Feb 13 19:34:49.419099 systemd-networkd[1440]: cali60e51b789ff: Gained carrier Feb 13 19:34:49.429582 containerd[1496]: 2025-02-13 19:34:49.345 [INFO][3532] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {10.0.0.147-k8s-nfs--server--provisioner--0-eth0 nfs-server-provisioner- default a9f7eab3-3eb1-4576-8c73-c7725488286e 1252 0 2025-02-13 19:34:48 +0000 UTC map[app:nfs-server-provisioner apps.kubernetes.io/pod-index:0 chart:nfs-server-provisioner-1.8.0 controller-revision-hash:nfs-server-provisioner-d5cbb7f57 heritage:Helm projectcalico.org/namespace:default projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:nfs-server-provisioner release:nfs-server-provisioner statefulset.kubernetes.io/pod-name:nfs-server-provisioner-0] map[] [] [] []} {k8s 10.0.0.147 nfs-server-provisioner-0 eth0 nfs-server-provisioner [] [] [kns.default 
ksa.default.nfs-server-provisioner] cali60e51b789ff [{nfs TCP 2049 0 } {nfs-udp UDP 2049 0 } {nlockmgr TCP 32803 0 } {nlockmgr-udp UDP 32803 0 } {mountd TCP 20048 0 } {mountd-udp UDP 20048 0 } {rquotad TCP 875 0 } {rquotad-udp UDP 875 0 } {rpcbind TCP 111 0 } {rpcbind-udp UDP 111 0 } {statd TCP 662 0 } {statd-udp UDP 662 0 }] []}} ContainerID="c0665511d7347f610423600fffb0f335ae14c5965f508da315d2c8ce462b9f5d" Namespace="default" Pod="nfs-server-provisioner-0" WorkloadEndpoint="10.0.0.147-k8s-nfs--server--provisioner--0-" Feb 13 19:34:49.429582 containerd[1496]: 2025-02-13 19:34:49.345 [INFO][3532] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="c0665511d7347f610423600fffb0f335ae14c5965f508da315d2c8ce462b9f5d" Namespace="default" Pod="nfs-server-provisioner-0" WorkloadEndpoint="10.0.0.147-k8s-nfs--server--provisioner--0-eth0" Feb 13 19:34:49.429582 containerd[1496]: 2025-02-13 19:34:49.375 [INFO][3546] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="c0665511d7347f610423600fffb0f335ae14c5965f508da315d2c8ce462b9f5d" HandleID="k8s-pod-network.c0665511d7347f610423600fffb0f335ae14c5965f508da315d2c8ce462b9f5d" Workload="10.0.0.147-k8s-nfs--server--provisioner--0-eth0" Feb 13 19:34:49.429582 containerd[1496]: 2025-02-13 19:34:49.385 [INFO][3546] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="c0665511d7347f610423600fffb0f335ae14c5965f508da315d2c8ce462b9f5d" HandleID="k8s-pod-network.c0665511d7347f610423600fffb0f335ae14c5965f508da315d2c8ce462b9f5d" Workload="10.0.0.147-k8s-nfs--server--provisioner--0-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0005025e0), Attrs:map[string]string{"namespace":"default", "node":"10.0.0.147", "pod":"nfs-server-provisioner-0", "timestamp":"2025-02-13 19:34:49.375548211 +0000 UTC"}, Hostname:"10.0.0.147", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), 
HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Feb 13 19:34:49.429582 containerd[1496]: 2025-02-13 19:34:49.385 [INFO][3546] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Feb 13 19:34:49.429582 containerd[1496]: 2025-02-13 19:34:49.385 [INFO][3546] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Feb 13 19:34:49.429582 containerd[1496]: 2025-02-13 19:34:49.385 [INFO][3546] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host '10.0.0.147' Feb 13 19:34:49.429582 containerd[1496]: 2025-02-13 19:34:49.388 [INFO][3546] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.c0665511d7347f610423600fffb0f335ae14c5965f508da315d2c8ce462b9f5d" host="10.0.0.147" Feb 13 19:34:49.429582 containerd[1496]: 2025-02-13 19:34:49.391 [INFO][3546] ipam/ipam.go 372: Looking up existing affinities for host host="10.0.0.147" Feb 13 19:34:49.429582 containerd[1496]: 2025-02-13 19:34:49.395 [INFO][3546] ipam/ipam.go 489: Trying affinity for 192.168.124.64/26 host="10.0.0.147" Feb 13 19:34:49.429582 containerd[1496]: 2025-02-13 19:34:49.397 [INFO][3546] ipam/ipam.go 155: Attempting to load block cidr=192.168.124.64/26 host="10.0.0.147" Feb 13 19:34:49.429582 containerd[1496]: 2025-02-13 19:34:49.399 [INFO][3546] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.124.64/26 host="10.0.0.147" Feb 13 19:34:49.429582 containerd[1496]: 2025-02-13 19:34:49.399 [INFO][3546] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.124.64/26 handle="k8s-pod-network.c0665511d7347f610423600fffb0f335ae14c5965f508da315d2c8ce462b9f5d" host="10.0.0.147" Feb 13 19:34:49.429582 containerd[1496]: 2025-02-13 19:34:49.401 [INFO][3546] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.c0665511d7347f610423600fffb0f335ae14c5965f508da315d2c8ce462b9f5d Feb 13 19:34:49.429582 containerd[1496]: 2025-02-13 19:34:49.405 [INFO][3546] ipam/ipam.go 1203: Writing block in order to claim 
IPs block=192.168.124.64/26 handle="k8s-pod-network.c0665511d7347f610423600fffb0f335ae14c5965f508da315d2c8ce462b9f5d" host="10.0.0.147" Feb 13 19:34:49.429582 containerd[1496]: 2025-02-13 19:34:49.411 [INFO][3546] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.124.67/26] block=192.168.124.64/26 handle="k8s-pod-network.c0665511d7347f610423600fffb0f335ae14c5965f508da315d2c8ce462b9f5d" host="10.0.0.147" Feb 13 19:34:49.429582 containerd[1496]: 2025-02-13 19:34:49.411 [INFO][3546] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.124.67/26] handle="k8s-pod-network.c0665511d7347f610423600fffb0f335ae14c5965f508da315d2c8ce462b9f5d" host="10.0.0.147" Feb 13 19:34:49.429582 containerd[1496]: 2025-02-13 19:34:49.411 [INFO][3546] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Feb 13 19:34:49.429582 containerd[1496]: 2025-02-13 19:34:49.411 [INFO][3546] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.124.67/26] IPv6=[] ContainerID="c0665511d7347f610423600fffb0f335ae14c5965f508da315d2c8ce462b9f5d" HandleID="k8s-pod-network.c0665511d7347f610423600fffb0f335ae14c5965f508da315d2c8ce462b9f5d" Workload="10.0.0.147-k8s-nfs--server--provisioner--0-eth0" Feb 13 19:34:49.430273 containerd[1496]: 2025-02-13 19:34:49.414 [INFO][3532] cni-plugin/k8s.go 386: Populated endpoint ContainerID="c0665511d7347f610423600fffb0f335ae14c5965f508da315d2c8ce462b9f5d" Namespace="default" Pod="nfs-server-provisioner-0" WorkloadEndpoint="10.0.0.147-k8s-nfs--server--provisioner--0-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"10.0.0.147-k8s-nfs--server--provisioner--0-eth0", GenerateName:"nfs-server-provisioner-", Namespace:"default", SelfLink:"", UID:"a9f7eab3-3eb1-4576-8c73-c7725488286e", ResourceVersion:"1252", Generation:0, CreationTimestamp:time.Date(2025, time.February, 13, 19, 34, 48, 0, time.Local), DeletionTimestamp:, 
DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app":"nfs-server-provisioner", "apps.kubernetes.io/pod-index":"0", "chart":"nfs-server-provisioner-1.8.0", "controller-revision-hash":"nfs-server-provisioner-d5cbb7f57", "heritage":"Helm", "projectcalico.org/namespace":"default", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"nfs-server-provisioner", "release":"nfs-server-provisioner", "statefulset.kubernetes.io/pod-name":"nfs-server-provisioner-0"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"10.0.0.147", ContainerID:"", Pod:"nfs-server-provisioner-0", Endpoint:"eth0", ServiceAccountName:"nfs-server-provisioner", IPNetworks:[]string{"192.168.124.67/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.default", "ksa.default.nfs-server-provisioner"}, InterfaceName:"cali60e51b789ff", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"nfs", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x801, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"nfs-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x801, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"nlockmgr", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x8023, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"nlockmgr-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x8023, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"mountd", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x4e50, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"mountd-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x4e50, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"rquotad", 
Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x36b, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"rquotad-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x36b, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"rpcbind", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x6f, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"rpcbind-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x6f, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"statd", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x296, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"statd-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x296, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Feb 13 19:34:49.430273 containerd[1496]: 2025-02-13 19:34:49.414 [INFO][3532] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.124.67/32] ContainerID="c0665511d7347f610423600fffb0f335ae14c5965f508da315d2c8ce462b9f5d" Namespace="default" Pod="nfs-server-provisioner-0" WorkloadEndpoint="10.0.0.147-k8s-nfs--server--provisioner--0-eth0" Feb 13 19:34:49.430273 containerd[1496]: 2025-02-13 19:34:49.414 [INFO][3532] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali60e51b789ff ContainerID="c0665511d7347f610423600fffb0f335ae14c5965f508da315d2c8ce462b9f5d" Namespace="default" Pod="nfs-server-provisioner-0" WorkloadEndpoint="10.0.0.147-k8s-nfs--server--provisioner--0-eth0" Feb 13 19:34:49.430273 containerd[1496]: 2025-02-13 19:34:49.417 [INFO][3532] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="c0665511d7347f610423600fffb0f335ae14c5965f508da315d2c8ce462b9f5d" Namespace="default" Pod="nfs-server-provisioner-0" WorkloadEndpoint="10.0.0.147-k8s-nfs--server--provisioner--0-eth0" Feb 13 19:34:49.430433 containerd[1496]: 2025-02-13 19:34:49.417 [INFO][3532] cni-plugin/k8s.go 
414: Added Mac, interface name, and active container ID to endpoint ContainerID="c0665511d7347f610423600fffb0f335ae14c5965f508da315d2c8ce462b9f5d" Namespace="default" Pod="nfs-server-provisioner-0" WorkloadEndpoint="10.0.0.147-k8s-nfs--server--provisioner--0-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"10.0.0.147-k8s-nfs--server--provisioner--0-eth0", GenerateName:"nfs-server-provisioner-", Namespace:"default", SelfLink:"", UID:"a9f7eab3-3eb1-4576-8c73-c7725488286e", ResourceVersion:"1252", Generation:0, CreationTimestamp:time.Date(2025, time.February, 13, 19, 34, 48, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app":"nfs-server-provisioner", "apps.kubernetes.io/pod-index":"0", "chart":"nfs-server-provisioner-1.8.0", "controller-revision-hash":"nfs-server-provisioner-d5cbb7f57", "heritage":"Helm", "projectcalico.org/namespace":"default", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"nfs-server-provisioner", "release":"nfs-server-provisioner", "statefulset.kubernetes.io/pod-name":"nfs-server-provisioner-0"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"10.0.0.147", ContainerID:"c0665511d7347f610423600fffb0f335ae14c5965f508da315d2c8ce462b9f5d", Pod:"nfs-server-provisioner-0", Endpoint:"eth0", ServiceAccountName:"nfs-server-provisioner", IPNetworks:[]string{"192.168.124.67/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.default", "ksa.default.nfs-server-provisioner"}, InterfaceName:"cali60e51b789ff", MAC:"aa:3a:4d:71:9e:f8", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"nfs", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x801, 
HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"nfs-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x801, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"nlockmgr", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x8023, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"nlockmgr-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x8023, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"mountd", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x4e50, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"mountd-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x4e50, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"rquotad", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x36b, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"rquotad-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x36b, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"rpcbind", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x6f, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"rpcbind-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x6f, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"statd", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x296, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"statd-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x296, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Feb 13 19:34:49.430433 containerd[1496]: 2025-02-13 19:34:49.426 [INFO][3532] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="c0665511d7347f610423600fffb0f335ae14c5965f508da315d2c8ce462b9f5d" Namespace="default" Pod="nfs-server-provisioner-0" WorkloadEndpoint="10.0.0.147-k8s-nfs--server--provisioner--0-eth0" Feb 
13 19:34:49.453808 containerd[1496]: time="2025-02-13T19:34:49.452899652Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 13 19:34:49.453929 containerd[1496]: time="2025-02-13T19:34:49.453831332Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 13 19:34:49.453929 containerd[1496]: time="2025-02-13T19:34:49.453879302Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 19:34:49.454148 containerd[1496]: time="2025-02-13T19:34:49.454110188Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 19:34:49.485236 systemd[1]: Started cri-containerd-c0665511d7347f610423600fffb0f335ae14c5965f508da315d2c8ce462b9f5d.scope - libcontainer container c0665511d7347f610423600fffb0f335ae14c5965f508da315d2c8ce462b9f5d. 
Feb 13 19:34:49.498353 systemd-resolved[1336]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Feb 13 19:34:49.527530 containerd[1496]: time="2025-02-13T19:34:49.527457858Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nfs-server-provisioner-0,Uid:a9f7eab3-3eb1-4576-8c73-c7725488286e,Namespace:default,Attempt:0,} returns sandbox id \"c0665511d7347f610423600fffb0f335ae14c5965f508da315d2c8ce462b9f5d\"" Feb 13 19:34:49.529417 containerd[1496]: time="2025-02-13T19:34:49.529380018Z" level=info msg="PullImage \"registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8\"" Feb 13 19:34:49.790540 kubelet[1803]: E0213 19:34:49.790255 1803 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 19:34:50.775258 systemd-networkd[1440]: cali60e51b789ff: Gained IPv6LL Feb 13 19:34:50.791298 kubelet[1803]: E0213 19:34:50.791223 1803 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 19:34:51.791925 kubelet[1803]: E0213 19:34:51.791846 1803 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 19:34:52.793051 kubelet[1803]: E0213 19:34:52.792988 1803 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 19:34:52.808899 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3963968533.mount: Deactivated successfully. 
Feb 13 19:34:53.793284 kubelet[1803]: E0213 19:34:53.793180 1803 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 19:34:54.794213 kubelet[1803]: E0213 19:34:54.794139 1803 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 19:34:55.657645 containerd[1496]: time="2025-02-13T19:34:55.657564685Z" level=info msg="ImageCreate event name:\"registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:34:55.659034 containerd[1496]: time="2025-02-13T19:34:55.658956037Z" level=info msg="stop pulling image registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8: active requests=0, bytes read=91039406" Feb 13 19:34:55.662307 containerd[1496]: time="2025-02-13T19:34:55.662259762Z" level=info msg="ImageCreate event name:\"sha256:fd0b16f70b66b72bcb2f91d556fa33eba02729c44ffc5f2c16130e7f9fbed3c4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:34:55.665009 containerd[1496]: time="2025-02-13T19:34:55.664954410Z" level=info msg="ImageCreate event name:\"registry.k8s.io/sig-storage/nfs-provisioner@sha256:c825f3d5e28bde099bd7a3daace28772d412c9157ad47fa752a9ad0baafc118d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:34:55.666120 containerd[1496]: time="2025-02-13T19:34:55.666090951Z" level=info msg="Pulled image \"registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8\" with image id \"sha256:fd0b16f70b66b72bcb2f91d556fa33eba02729c44ffc5f2c16130e7f9fbed3c4\", repo tag \"registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8\", repo digest \"registry.k8s.io/sig-storage/nfs-provisioner@sha256:c825f3d5e28bde099bd7a3daace28772d412c9157ad47fa752a9ad0baafc118d\", size \"91036984\" in 6.13668242s" Feb 13 19:34:55.666169 containerd[1496]: time="2025-02-13T19:34:55.666120426Z" level=info msg="PullImage \"registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8\" 
returns image reference \"sha256:fd0b16f70b66b72bcb2f91d556fa33eba02729c44ffc5f2c16130e7f9fbed3c4\"" Feb 13 19:34:55.668514 containerd[1496]: time="2025-02-13T19:34:55.668482176Z" level=info msg="CreateContainer within sandbox \"c0665511d7347f610423600fffb0f335ae14c5965f508da315d2c8ce462b9f5d\" for container &ContainerMetadata{Name:nfs-server-provisioner,Attempt:0,}" Feb 13 19:34:55.684496 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2649348491.mount: Deactivated successfully. Feb 13 19:34:55.685179 containerd[1496]: time="2025-02-13T19:34:55.685124536Z" level=info msg="CreateContainer within sandbox \"c0665511d7347f610423600fffb0f335ae14c5965f508da315d2c8ce462b9f5d\" for &ContainerMetadata{Name:nfs-server-provisioner,Attempt:0,} returns container id \"9bdc3b5ab657c03b5bd08dadc6bb6e75b935c5d305d168b67e78dfb913f89fc6\"" Feb 13 19:34:55.685791 containerd[1496]: time="2025-02-13T19:34:55.685746087Z" level=info msg="StartContainer for \"9bdc3b5ab657c03b5bd08dadc6bb6e75b935c5d305d168b67e78dfb913f89fc6\"" Feb 13 19:34:55.722152 systemd[1]: Started cri-containerd-9bdc3b5ab657c03b5bd08dadc6bb6e75b935c5d305d168b67e78dfb913f89fc6.scope - libcontainer container 9bdc3b5ab657c03b5bd08dadc6bb6e75b935c5d305d168b67e78dfb913f89fc6. 
Feb 13 19:34:55.750990 containerd[1496]: time="2025-02-13T19:34:55.750924322Z" level=info msg="StartContainer for \"9bdc3b5ab657c03b5bd08dadc6bb6e75b935c5d305d168b67e78dfb913f89fc6\" returns successfully" Feb 13 19:34:55.794923 kubelet[1803]: E0213 19:34:55.794850 1803 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 19:34:56.036982 kubelet[1803]: I0213 19:34:56.036885 1803 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="default/nfs-server-provisioner-0" podStartSLOduration=1.8986193660000001 podStartE2EDuration="8.036868209s" podCreationTimestamp="2025-02-13 19:34:48 +0000 UTC" firstStartedPulling="2025-02-13 19:34:49.52895235 +0000 UTC m=+42.162442301" lastFinishedPulling="2025-02-13 19:34:55.667201193 +0000 UTC m=+48.300691144" observedRunningTime="2025-02-13 19:34:56.036686367 +0000 UTC m=+48.670176318" watchObservedRunningTime="2025-02-13 19:34:56.036868209 +0000 UTC m=+48.670358160" Feb 13 19:34:56.795878 kubelet[1803]: E0213 19:34:56.795760 1803 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 19:34:57.796423 kubelet[1803]: E0213 19:34:57.796123 1803 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 19:34:58.796557 kubelet[1803]: E0213 19:34:58.796489 1803 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 19:34:59.796900 kubelet[1803]: E0213 19:34:59.796821 1803 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 19:35:00.797723 kubelet[1803]: E0213 19:35:00.797647 1803 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 19:35:01.798304 kubelet[1803]: E0213 19:35:01.798230 1803 file_linux.go:61] "Unable to 
read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 19:35:02.799121 kubelet[1803]: E0213 19:35:02.799047 1803 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 19:35:03.800154 kubelet[1803]: E0213 19:35:03.800089 1803 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 19:35:04.052294 kubelet[1803]: E0213 19:35:04.052172 1803 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:35:04.801138 kubelet[1803]: E0213 19:35:04.801053 1803 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 19:35:05.801451 kubelet[1803]: E0213 19:35:05.801370 1803 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 19:35:06.658395 systemd[1]: Created slice kubepods-besteffort-pod3afe3508_049e_43a4_9eb6_f36a6493deaf.slice - libcontainer container kubepods-besteffort-pod3afe3508_049e_43a4_9eb6_f36a6493deaf.slice. 
Feb 13 19:35:06.717134 kubelet[1803]: I0213 19:35:06.717076 1803 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h9kkl\" (UniqueName: \"kubernetes.io/projected/3afe3508-049e-43a4-9eb6-f36a6493deaf-kube-api-access-h9kkl\") pod \"test-pod-1\" (UID: \"3afe3508-049e-43a4-9eb6-f36a6493deaf\") " pod="default/test-pod-1" Feb 13 19:35:06.717134 kubelet[1803]: I0213 19:35:06.717119 1803 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-eb8a9fca-d0e4-488f-b9b1-c46c65256a3d\" (UniqueName: \"kubernetes.io/nfs/3afe3508-049e-43a4-9eb6-f36a6493deaf-pvc-eb8a9fca-d0e4-488f-b9b1-c46c65256a3d\") pod \"test-pod-1\" (UID: \"3afe3508-049e-43a4-9eb6-f36a6493deaf\") " pod="default/test-pod-1" Feb 13 19:35:06.802163 kubelet[1803]: E0213 19:35:06.802076 1803 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 19:35:06.845005 kernel: FS-Cache: Loaded Feb 13 19:35:06.911724 kernel: RPC: Registered named UNIX socket transport module. Feb 13 19:35:06.911886 kernel: RPC: Registered udp transport module. Feb 13 19:35:06.911910 kernel: RPC: Registered tcp transport module. Feb 13 19:35:06.912475 kernel: RPC: Registered tcp-with-tls transport module. Feb 13 19:35:06.914409 kernel: RPC: Registered tcp NFSv4.1 backchannel transport module. 
Feb 13 19:35:07.141182 kernel: NFS: Registering the id_resolver key type Feb 13 19:35:07.141414 kernel: Key type id_resolver registered Feb 13 19:35:07.141449 kernel: Key type id_legacy registered Feb 13 19:35:07.177574 nfsidmap[3761]: nss_getpwnam: name 'root@nfs-server-provisioner.default.svc.cluster.local' does not map into domain 'localdomain' Feb 13 19:35:07.184519 nfsidmap[3764]: nss_name_to_gid: name 'root@nfs-server-provisioner.default.svc.cluster.local' does not map into domain 'localdomain' Feb 13 19:35:07.261763 containerd[1496]: time="2025-02-13T19:35:07.261671424Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:test-pod-1,Uid:3afe3508-049e-43a4-9eb6-f36a6493deaf,Namespace:default,Attempt:0,}" Feb 13 19:35:07.433617 systemd-networkd[1440]: cali5ec59c6bf6e: Link UP Feb 13 19:35:07.434824 systemd-networkd[1440]: cali5ec59c6bf6e: Gained carrier Feb 13 19:35:07.608329 containerd[1496]: 2025-02-13 19:35:07.326 [INFO][3767] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {10.0.0.147-k8s-test--pod--1-eth0 default 3afe3508-049e-43a4-9eb6-f36a6493deaf 1339 0 2025-02-13 19:34:49 +0000 UTC map[projectcalico.org/namespace:default projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:default] map[] [] [] []} {k8s 10.0.0.147 test-pod-1 eth0 default [] [] [kns.default ksa.default.default] cali5ec59c6bf6e [] []}} ContainerID="d8809e02da55ee5d421d24bc94be86f9d11226a7c5cf53024d3458cff3a9d216" Namespace="default" Pod="test-pod-1" WorkloadEndpoint="10.0.0.147-k8s-test--pod--1-" Feb 13 19:35:07.608329 containerd[1496]: 2025-02-13 19:35:07.326 [INFO][3767] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="d8809e02da55ee5d421d24bc94be86f9d11226a7c5cf53024d3458cff3a9d216" Namespace="default" Pod="test-pod-1" WorkloadEndpoint="10.0.0.147-k8s-test--pod--1-eth0" Feb 13 19:35:07.608329 containerd[1496]: 2025-02-13 19:35:07.362 [INFO][3780] ipam/ipam_plugin.go 225: Calico CNI IPAM 
request count IPv4=1 IPv6=0 ContainerID="d8809e02da55ee5d421d24bc94be86f9d11226a7c5cf53024d3458cff3a9d216" HandleID="k8s-pod-network.d8809e02da55ee5d421d24bc94be86f9d11226a7c5cf53024d3458cff3a9d216" Workload="10.0.0.147-k8s-test--pod--1-eth0" Feb 13 19:35:07.608329 containerd[1496]: 2025-02-13 19:35:07.370 [INFO][3780] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="d8809e02da55ee5d421d24bc94be86f9d11226a7c5cf53024d3458cff3a9d216" HandleID="k8s-pod-network.d8809e02da55ee5d421d24bc94be86f9d11226a7c5cf53024d3458cff3a9d216" Workload="10.0.0.147-k8s-test--pod--1-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000297b00), Attrs:map[string]string{"namespace":"default", "node":"10.0.0.147", "pod":"test-pod-1", "timestamp":"2025-02-13 19:35:07.362140724 +0000 UTC"}, Hostname:"10.0.0.147", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Feb 13 19:35:07.608329 containerd[1496]: 2025-02-13 19:35:07.370 [INFO][3780] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Feb 13 19:35:07.608329 containerd[1496]: 2025-02-13 19:35:07.370 [INFO][3780] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Feb 13 19:35:07.608329 containerd[1496]: 2025-02-13 19:35:07.371 [INFO][3780] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host '10.0.0.147' Feb 13 19:35:07.608329 containerd[1496]: 2025-02-13 19:35:07.373 [INFO][3780] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.d8809e02da55ee5d421d24bc94be86f9d11226a7c5cf53024d3458cff3a9d216" host="10.0.0.147" Feb 13 19:35:07.608329 containerd[1496]: 2025-02-13 19:35:07.376 [INFO][3780] ipam/ipam.go 372: Looking up existing affinities for host host="10.0.0.147" Feb 13 19:35:07.608329 containerd[1496]: 2025-02-13 19:35:07.380 [INFO][3780] ipam/ipam.go 489: Trying affinity for 192.168.124.64/26 host="10.0.0.147" Feb 13 19:35:07.608329 containerd[1496]: 2025-02-13 19:35:07.381 [INFO][3780] ipam/ipam.go 155: Attempting to load block cidr=192.168.124.64/26 host="10.0.0.147" Feb 13 19:35:07.608329 containerd[1496]: 2025-02-13 19:35:07.384 [INFO][3780] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.124.64/26 host="10.0.0.147" Feb 13 19:35:07.608329 containerd[1496]: 2025-02-13 19:35:07.384 [INFO][3780] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.124.64/26 handle="k8s-pod-network.d8809e02da55ee5d421d24bc94be86f9d11226a7c5cf53024d3458cff3a9d216" host="10.0.0.147" Feb 13 19:35:07.608329 containerd[1496]: 2025-02-13 19:35:07.385 [INFO][3780] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.d8809e02da55ee5d421d24bc94be86f9d11226a7c5cf53024d3458cff3a9d216 Feb 13 19:35:07.608329 containerd[1496]: 2025-02-13 19:35:07.393 [INFO][3780] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.124.64/26 handle="k8s-pod-network.d8809e02da55ee5d421d24bc94be86f9d11226a7c5cf53024d3458cff3a9d216" host="10.0.0.147" Feb 13 19:35:07.608329 containerd[1496]: 2025-02-13 19:35:07.428 [INFO][3780] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.124.68/26] block=192.168.124.64/26 
handle="k8s-pod-network.d8809e02da55ee5d421d24bc94be86f9d11226a7c5cf53024d3458cff3a9d216" host="10.0.0.147" Feb 13 19:35:07.608329 containerd[1496]: 2025-02-13 19:35:07.428 [INFO][3780] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.124.68/26] handle="k8s-pod-network.d8809e02da55ee5d421d24bc94be86f9d11226a7c5cf53024d3458cff3a9d216" host="10.0.0.147" Feb 13 19:35:07.608329 containerd[1496]: 2025-02-13 19:35:07.428 [INFO][3780] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Feb 13 19:35:07.608329 containerd[1496]: 2025-02-13 19:35:07.428 [INFO][3780] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.124.68/26] IPv6=[] ContainerID="d8809e02da55ee5d421d24bc94be86f9d11226a7c5cf53024d3458cff3a9d216" HandleID="k8s-pod-network.d8809e02da55ee5d421d24bc94be86f9d11226a7c5cf53024d3458cff3a9d216" Workload="10.0.0.147-k8s-test--pod--1-eth0" Feb 13 19:35:07.608329 containerd[1496]: 2025-02-13 19:35:07.431 [INFO][3767] cni-plugin/k8s.go 386: Populated endpoint ContainerID="d8809e02da55ee5d421d24bc94be86f9d11226a7c5cf53024d3458cff3a9d216" Namespace="default" Pod="test-pod-1" WorkloadEndpoint="10.0.0.147-k8s-test--pod--1-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"10.0.0.147-k8s-test--pod--1-eth0", GenerateName:"", Namespace:"default", SelfLink:"", UID:"3afe3508-049e-43a4-9eb6-f36a6493deaf", ResourceVersion:"1339", Generation:0, CreationTimestamp:time.Date(2025, time.February, 13, 19, 34, 49, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"projectcalico.org/namespace":"default", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"default"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", 
Node:"10.0.0.147", ContainerID:"", Pod:"test-pod-1", Endpoint:"eth0", ServiceAccountName:"default", IPNetworks:[]string{"192.168.124.68/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.default", "ksa.default.default"}, InterfaceName:"cali5ec59c6bf6e", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Feb 13 19:35:07.609055 containerd[1496]: 2025-02-13 19:35:07.431 [INFO][3767] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.124.68/32] ContainerID="d8809e02da55ee5d421d24bc94be86f9d11226a7c5cf53024d3458cff3a9d216" Namespace="default" Pod="test-pod-1" WorkloadEndpoint="10.0.0.147-k8s-test--pod--1-eth0" Feb 13 19:35:07.609055 containerd[1496]: 2025-02-13 19:35:07.431 [INFO][3767] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali5ec59c6bf6e ContainerID="d8809e02da55ee5d421d24bc94be86f9d11226a7c5cf53024d3458cff3a9d216" Namespace="default" Pod="test-pod-1" WorkloadEndpoint="10.0.0.147-k8s-test--pod--1-eth0" Feb 13 19:35:07.609055 containerd[1496]: 2025-02-13 19:35:07.433 [INFO][3767] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="d8809e02da55ee5d421d24bc94be86f9d11226a7c5cf53024d3458cff3a9d216" Namespace="default" Pod="test-pod-1" WorkloadEndpoint="10.0.0.147-k8s-test--pod--1-eth0" Feb 13 19:35:07.609055 containerd[1496]: 2025-02-13 19:35:07.434 [INFO][3767] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="d8809e02da55ee5d421d24bc94be86f9d11226a7c5cf53024d3458cff3a9d216" Namespace="default" Pod="test-pod-1" WorkloadEndpoint="10.0.0.147-k8s-test--pod--1-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"10.0.0.147-k8s-test--pod--1-eth0", GenerateName:"", Namespace:"default", SelfLink:"", UID:"3afe3508-049e-43a4-9eb6-f36a6493deaf", ResourceVersion:"1339", Generation:0, 
CreationTimestamp:time.Date(2025, time.February, 13, 19, 34, 49, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"projectcalico.org/namespace":"default", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"default"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"10.0.0.147", ContainerID:"d8809e02da55ee5d421d24bc94be86f9d11226a7c5cf53024d3458cff3a9d216", Pod:"test-pod-1", Endpoint:"eth0", ServiceAccountName:"default", IPNetworks:[]string{"192.168.124.68/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.default", "ksa.default.default"}, InterfaceName:"cali5ec59c6bf6e", MAC:"aa:e0:f8:7c:88:a8", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Feb 13 19:35:07.609055 containerd[1496]: 2025-02-13 19:35:07.605 [INFO][3767] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="d8809e02da55ee5d421d24bc94be86f9d11226a7c5cf53024d3458cff3a9d216" Namespace="default" Pod="test-pod-1" WorkloadEndpoint="10.0.0.147-k8s-test--pod--1-eth0" Feb 13 19:35:07.683666 containerd[1496]: time="2025-02-13T19:35:07.682743912Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 13 19:35:07.683666 containerd[1496]: time="2025-02-13T19:35:07.683611251Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 13 19:35:07.683666 containerd[1496]: time="2025-02-13T19:35:07.683629355Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 19:35:07.684067 containerd[1496]: time="2025-02-13T19:35:07.683780209Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 19:35:07.709273 systemd[1]: Started cri-containerd-d8809e02da55ee5d421d24bc94be86f9d11226a7c5cf53024d3458cff3a9d216.scope - libcontainer container d8809e02da55ee5d421d24bc94be86f9d11226a7c5cf53024d3458cff3a9d216. Feb 13 19:35:07.723802 systemd-resolved[1336]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Feb 13 19:35:07.750160 containerd[1496]: time="2025-02-13T19:35:07.750114092Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:test-pod-1,Uid:3afe3508-049e-43a4-9eb6-f36a6493deaf,Namespace:default,Attempt:0,} returns sandbox id \"d8809e02da55ee5d421d24bc94be86f9d11226a7c5cf53024d3458cff3a9d216\"" Feb 13 19:35:07.751509 containerd[1496]: time="2025-02-13T19:35:07.751471073Z" level=info msg="PullImage \"ghcr.io/flatcar/nginx:latest\"" Feb 13 19:35:07.759092 kubelet[1803]: E0213 19:35:07.759045 1803 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 19:35:07.774185 containerd[1496]: time="2025-02-13T19:35:07.774131753Z" level=info msg="StopPodSandbox for \"2d5407477e895d47961fc614874df08a11952f1b5b3699908fcf06831a5ee510\"" Feb 13 19:35:07.774352 containerd[1496]: time="2025-02-13T19:35:07.774273800Z" level=info msg="TearDown network for sandbox \"2d5407477e895d47961fc614874df08a11952f1b5b3699908fcf06831a5ee510\" successfully" Feb 13 19:35:07.774352 containerd[1496]: time="2025-02-13T19:35:07.774286073Z" level=info msg="StopPodSandbox for \"2d5407477e895d47961fc614874df08a11952f1b5b3699908fcf06831a5ee510\" returns successfully" Feb 13 19:35:07.774689 containerd[1496]: time="2025-02-13T19:35:07.774664484Z" level=info msg="RemovePodSandbox for 
\"2d5407477e895d47961fc614874df08a11952f1b5b3699908fcf06831a5ee510\"" Feb 13 19:35:07.774749 containerd[1496]: time="2025-02-13T19:35:07.774705201Z" level=info msg="Forcibly stopping sandbox \"2d5407477e895d47961fc614874df08a11952f1b5b3699908fcf06831a5ee510\"" Feb 13 19:35:07.774820 containerd[1496]: time="2025-02-13T19:35:07.774773409Z" level=info msg="TearDown network for sandbox \"2d5407477e895d47961fc614874df08a11952f1b5b3699908fcf06831a5ee510\" successfully" Feb 13 19:35:07.802892 kubelet[1803]: E0213 19:35:07.802851 1803 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 19:35:07.895115 containerd[1496]: time="2025-02-13T19:35:07.895032664Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"2d5407477e895d47961fc614874df08a11952f1b5b3699908fcf06831a5ee510\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Feb 13 19:35:07.895115 containerd[1496]: time="2025-02-13T19:35:07.895126991Z" level=info msg="RemovePodSandbox \"2d5407477e895d47961fc614874df08a11952f1b5b3699908fcf06831a5ee510\" returns successfully" Feb 13 19:35:07.895856 containerd[1496]: time="2025-02-13T19:35:07.895803643Z" level=info msg="StopPodSandbox for \"9cd34e05953edec13a93645610cc390f9412c5dfa27e9150f547738284ab4ce0\"" Feb 13 19:35:07.896107 containerd[1496]: time="2025-02-13T19:35:07.896084040Z" level=info msg="TearDown network for sandbox \"9cd34e05953edec13a93645610cc390f9412c5dfa27e9150f547738284ab4ce0\" successfully" Feb 13 19:35:07.896107 containerd[1496]: time="2025-02-13T19:35:07.896104348Z" level=info msg="StopPodSandbox for \"9cd34e05953edec13a93645610cc390f9412c5dfa27e9150f547738284ab4ce0\" returns successfully" Feb 13 19:35:07.896410 containerd[1496]: time="2025-02-13T19:35:07.896387350Z" level=info msg="RemovePodSandbox for \"9cd34e05953edec13a93645610cc390f9412c5dfa27e9150f547738284ab4ce0\"" Feb 13 19:35:07.896468 
containerd[1496]: time="2025-02-13T19:35:07.896413278Z" level=info msg="Forcibly stopping sandbox \"9cd34e05953edec13a93645610cc390f9412c5dfa27e9150f547738284ab4ce0\"" Feb 13 19:35:07.896546 containerd[1496]: time="2025-02-13T19:35:07.896493039Z" level=info msg="TearDown network for sandbox \"9cd34e05953edec13a93645610cc390f9412c5dfa27e9150f547738284ab4ce0\" successfully" Feb 13 19:35:08.003224 containerd[1496]: time="2025-02-13T19:35:08.002956684Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"9cd34e05953edec13a93645610cc390f9412c5dfa27e9150f547738284ab4ce0\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Feb 13 19:35:08.003224 containerd[1496]: time="2025-02-13T19:35:08.003088883Z" level=info msg="RemovePodSandbox \"9cd34e05953edec13a93645610cc390f9412c5dfa27e9150f547738284ab4ce0\" returns successfully" Feb 13 19:35:08.003994 containerd[1496]: time="2025-02-13T19:35:08.003838733Z" level=info msg="StopPodSandbox for \"61c8eb8e8d761ba2b46d79add005137b6fb7a47d81065b94b4c7a70143810d6e\"" Feb 13 19:35:08.004195 containerd[1496]: time="2025-02-13T19:35:08.004107417Z" level=info msg="TearDown network for sandbox \"61c8eb8e8d761ba2b46d79add005137b6fb7a47d81065b94b4c7a70143810d6e\" successfully" Feb 13 19:35:08.004195 containerd[1496]: time="2025-02-13T19:35:08.004123898Z" level=info msg="StopPodSandbox for \"61c8eb8e8d761ba2b46d79add005137b6fb7a47d81065b94b4c7a70143810d6e\" returns successfully" Feb 13 19:35:08.004601 containerd[1496]: time="2025-02-13T19:35:08.004570076Z" level=info msg="RemovePodSandbox for \"61c8eb8e8d761ba2b46d79add005137b6fb7a47d81065b94b4c7a70143810d6e\"" Feb 13 19:35:08.004601 containerd[1496]: time="2025-02-13T19:35:08.004601656Z" level=info msg="Forcibly stopping sandbox \"61c8eb8e8d761ba2b46d79add005137b6fb7a47d81065b94b4c7a70143810d6e\"" Feb 13 19:35:08.004770 containerd[1496]: time="2025-02-13T19:35:08.004703107Z" level=info msg="TearDown network 
for sandbox \"61c8eb8e8d761ba2b46d79add005137b6fb7a47d81065b94b4c7a70143810d6e\" successfully" Feb 13 19:35:08.116065 containerd[1496]: time="2025-02-13T19:35:08.115905771Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"61c8eb8e8d761ba2b46d79add005137b6fb7a47d81065b94b4c7a70143810d6e\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Feb 13 19:35:08.116065 containerd[1496]: time="2025-02-13T19:35:08.116073116Z" level=info msg="RemovePodSandbox \"61c8eb8e8d761ba2b46d79add005137b6fb7a47d81065b94b4c7a70143810d6e\" returns successfully" Feb 13 19:35:08.116864 containerd[1496]: time="2025-02-13T19:35:08.116825210Z" level=info msg="StopPodSandbox for \"593e93a7c93e97fa8c6ff251c129baa378619bf760122565567f6854c50fbb3c\"" Feb 13 19:35:08.117089 containerd[1496]: time="2025-02-13T19:35:08.117066342Z" level=info msg="TearDown network for sandbox \"593e93a7c93e97fa8c6ff251c129baa378619bf760122565567f6854c50fbb3c\" successfully" Feb 13 19:35:08.117089 containerd[1496]: time="2025-02-13T19:35:08.117086881Z" level=info msg="StopPodSandbox for \"593e93a7c93e97fa8c6ff251c129baa378619bf760122565567f6854c50fbb3c\" returns successfully" Feb 13 19:35:08.117591 containerd[1496]: time="2025-02-13T19:35:08.117538780Z" level=info msg="RemovePodSandbox for \"593e93a7c93e97fa8c6ff251c129baa378619bf760122565567f6854c50fbb3c\"" Feb 13 19:35:08.117591 containerd[1496]: time="2025-02-13T19:35:08.117588654Z" level=info msg="Forcibly stopping sandbox \"593e93a7c93e97fa8c6ff251c129baa378619bf760122565567f6854c50fbb3c\"" Feb 13 19:35:08.117807 containerd[1496]: time="2025-02-13T19:35:08.117697969Z" level=info msg="TearDown network for sandbox \"593e93a7c93e97fa8c6ff251c129baa378619bf760122565567f6854c50fbb3c\" successfully" Feb 13 19:35:08.155797 containerd[1496]: time="2025-02-13T19:35:08.155714881Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID 
\"593e93a7c93e97fa8c6ff251c129baa378619bf760122565567f6854c50fbb3c\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Feb 13 19:35:08.155797 containerd[1496]: time="2025-02-13T19:35:08.155794330Z" level=info msg="RemovePodSandbox \"593e93a7c93e97fa8c6ff251c129baa378619bf760122565567f6854c50fbb3c\" returns successfully" Feb 13 19:35:08.156512 containerd[1496]: time="2025-02-13T19:35:08.156466133Z" level=info msg="StopPodSandbox for \"a033c0a45f6d0e7992542d92dec7c1201f2a3d238f279c6eaa54944f24b8445d\"" Feb 13 19:35:08.156643 containerd[1496]: time="2025-02-13T19:35:08.156628669Z" level=info msg="TearDown network for sandbox \"a033c0a45f6d0e7992542d92dec7c1201f2a3d238f279c6eaa54944f24b8445d\" successfully" Feb 13 19:35:08.156668 containerd[1496]: time="2025-02-13T19:35:08.156646472Z" level=info msg="StopPodSandbox for \"a033c0a45f6d0e7992542d92dec7c1201f2a3d238f279c6eaa54944f24b8445d\" returns successfully" Feb 13 19:35:08.157025 containerd[1496]: time="2025-02-13T19:35:08.156996069Z" level=info msg="RemovePodSandbox for \"a033c0a45f6d0e7992542d92dec7c1201f2a3d238f279c6eaa54944f24b8445d\"" Feb 13 19:35:08.157203 containerd[1496]: time="2025-02-13T19:35:08.157025424Z" level=info msg="Forcibly stopping sandbox \"a033c0a45f6d0e7992542d92dec7c1201f2a3d238f279c6eaa54944f24b8445d\"" Feb 13 19:35:08.157203 containerd[1496]: time="2025-02-13T19:35:08.157113901Z" level=info msg="TearDown network for sandbox \"a033c0a45f6d0e7992542d92dec7c1201f2a3d238f279c6eaa54944f24b8445d\" successfully" Feb 13 19:35:08.191099 containerd[1496]: time="2025-02-13T19:35:08.191063550Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"a033c0a45f6d0e7992542d92dec7c1201f2a3d238f279c6eaa54944f24b8445d\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Feb 13 19:35:08.191192 containerd[1496]: time="2025-02-13T19:35:08.191128783Z" level=info msg="RemovePodSandbox \"a033c0a45f6d0e7992542d92dec7c1201f2a3d238f279c6eaa54944f24b8445d\" returns successfully"
Feb 13 19:35:08.191675 containerd[1496]: time="2025-02-13T19:35:08.191648909Z" level=info msg="StopPodSandbox for \"67e1b1967dd532774ef6b7e02b6a4358c36eafbba3613f2cd9b564cfd0d26d26\""
Feb 13 19:35:08.191765 containerd[1496]: time="2025-02-13T19:35:08.191742917Z" level=info msg="TearDown network for sandbox \"67e1b1967dd532774ef6b7e02b6a4358c36eafbba3613f2cd9b564cfd0d26d26\" successfully"
Feb 13 19:35:08.191765 containerd[1496]: time="2025-02-13T19:35:08.191759137Z" level=info msg="StopPodSandbox for \"67e1b1967dd532774ef6b7e02b6a4358c36eafbba3613f2cd9b564cfd0d26d26\" returns successfully"
Feb 13 19:35:08.192230 containerd[1496]: time="2025-02-13T19:35:08.192200116Z" level=info msg="RemovePodSandbox for \"67e1b1967dd532774ef6b7e02b6a4358c36eafbba3613f2cd9b564cfd0d26d26\""
Feb 13 19:35:08.192286 containerd[1496]: time="2025-02-13T19:35:08.192233198Z" level=info msg="Forcibly stopping sandbox \"67e1b1967dd532774ef6b7e02b6a4358c36eafbba3613f2cd9b564cfd0d26d26\""
Feb 13 19:35:08.192387 containerd[1496]: time="2025-02-13T19:35:08.192319330Z" level=info msg="TearDown network for sandbox \"67e1b1967dd532774ef6b7e02b6a4358c36eafbba3613f2cd9b564cfd0d26d26\" successfully"
Feb 13 19:35:08.329426 containerd[1496]: time="2025-02-13T19:35:08.329188387Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"67e1b1967dd532774ef6b7e02b6a4358c36eafbba3613f2cd9b564cfd0d26d26\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
Feb 13 19:35:08.329426 containerd[1496]: time="2025-02-13T19:35:08.329269429Z" level=info msg="RemovePodSandbox \"67e1b1967dd532774ef6b7e02b6a4358c36eafbba3613f2cd9b564cfd0d26d26\" returns successfully"
Feb 13 19:35:08.330127 containerd[1496]: time="2025-02-13T19:35:08.329851534Z" level=info msg="StopPodSandbox for \"a13248a01d5d3c9fc80f814e6f9f5286b4cc830a32ecc6dcc90894db7cbb4737\""
Feb 13 19:35:08.330127 containerd[1496]: time="2025-02-13T19:35:08.330023095Z" level=info msg="TearDown network for sandbox \"a13248a01d5d3c9fc80f814e6f9f5286b4cc830a32ecc6dcc90894db7cbb4737\" successfully"
Feb 13 19:35:08.330127 containerd[1496]: time="2025-02-13T19:35:08.330035098Z" level=info msg="StopPodSandbox for \"a13248a01d5d3c9fc80f814e6f9f5286b4cc830a32ecc6dcc90894db7cbb4737\" returns successfully"
Feb 13 19:35:08.330373 containerd[1496]: time="2025-02-13T19:35:08.330338228Z" level=info msg="RemovePodSandbox for \"a13248a01d5d3c9fc80f814e6f9f5286b4cc830a32ecc6dcc90894db7cbb4737\""
Feb 13 19:35:08.330373 containerd[1496]: time="2025-02-13T19:35:08.330373845Z" level=info msg="Forcibly stopping sandbox \"a13248a01d5d3c9fc80f814e6f9f5286b4cc830a32ecc6dcc90894db7cbb4737\""
Feb 13 19:35:08.330495 containerd[1496]: time="2025-02-13T19:35:08.330441021Z" level=info msg="TearDown network for sandbox \"a13248a01d5d3c9fc80f814e6f9f5286b4cc830a32ecc6dcc90894db7cbb4737\" successfully"
Feb 13 19:35:08.442949 containerd[1496]: time="2025-02-13T19:35:08.442854661Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"a13248a01d5d3c9fc80f814e6f9f5286b4cc830a32ecc6dcc90894db7cbb4737\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
Feb 13 19:35:08.442949 containerd[1496]: time="2025-02-13T19:35:08.442944610Z" level=info msg="RemovePodSandbox \"a13248a01d5d3c9fc80f814e6f9f5286b4cc830a32ecc6dcc90894db7cbb4737\" returns successfully"
Feb 13 19:35:08.443576 containerd[1496]: time="2025-02-13T19:35:08.443520503Z" level=info msg="StopPodSandbox for \"40ce0d3f93ac9bc36831d03be049f80ec3b3ce5288313163d0a969a5767140ca\""
Feb 13 19:35:08.443736 containerd[1496]: time="2025-02-13T19:35:08.443639646Z" level=info msg="TearDown network for sandbox \"40ce0d3f93ac9bc36831d03be049f80ec3b3ce5288313163d0a969a5767140ca\" successfully"
Feb 13 19:35:08.443736 containerd[1496]: time="2025-02-13T19:35:08.443650236Z" level=info msg="StopPodSandbox for \"40ce0d3f93ac9bc36831d03be049f80ec3b3ce5288313163d0a969a5767140ca\" returns successfully"
Feb 13 19:35:08.444188 containerd[1496]: time="2025-02-13T19:35:08.444151207Z" level=info msg="RemovePodSandbox for \"40ce0d3f93ac9bc36831d03be049f80ec3b3ce5288313163d0a969a5767140ca\""
Feb 13 19:35:08.444280 containerd[1496]: time="2025-02-13T19:35:08.444196362Z" level=info msg="Forcibly stopping sandbox \"40ce0d3f93ac9bc36831d03be049f80ec3b3ce5288313163d0a969a5767140ca\""
Feb 13 19:35:08.444336 containerd[1496]: time="2025-02-13T19:35:08.444304796Z" level=info msg="TearDown network for sandbox \"40ce0d3f93ac9bc36831d03be049f80ec3b3ce5288313163d0a969a5767140ca\" successfully"
Feb 13 19:35:08.531536 containerd[1496]: time="2025-02-13T19:35:08.531454030Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"40ce0d3f93ac9bc36831d03be049f80ec3b3ce5288313163d0a969a5767140ca\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
Feb 13 19:35:08.531536 containerd[1496]: time="2025-02-13T19:35:08.531525514Z" level=info msg="RemovePodSandbox \"40ce0d3f93ac9bc36831d03be049f80ec3b3ce5288313163d0a969a5767140ca\" returns successfully"
Feb 13 19:35:08.532245 containerd[1496]: time="2025-02-13T19:35:08.532205541Z" level=info msg="StopPodSandbox for \"dca75b6439788db7d359786128b7554681c0a80286036cfa38bf74493e3a86e5\""
Feb 13 19:35:08.532406 containerd[1496]: time="2025-02-13T19:35:08.532377725Z" level=info msg="TearDown network for sandbox \"dca75b6439788db7d359786128b7554681c0a80286036cfa38bf74493e3a86e5\" successfully"
Feb 13 19:35:08.532406 containerd[1496]: time="2025-02-13T19:35:08.532394486Z" level=info msg="StopPodSandbox for \"dca75b6439788db7d359786128b7554681c0a80286036cfa38bf74493e3a86e5\" returns successfully"
Feb 13 19:35:08.532785 containerd[1496]: time="2025-02-13T19:35:08.532743462Z" level=info msg="RemovePodSandbox for \"dca75b6439788db7d359786128b7554681c0a80286036cfa38bf74493e3a86e5\""
Feb 13 19:35:08.532839 containerd[1496]: time="2025-02-13T19:35:08.532802834Z" level=info msg="Forcibly stopping sandbox \"dca75b6439788db7d359786128b7554681c0a80286036cfa38bf74493e3a86e5\""
Feb 13 19:35:08.532991 containerd[1496]: time="2025-02-13T19:35:08.532924031Z" level=info msg="TearDown network for sandbox \"dca75b6439788db7d359786128b7554681c0a80286036cfa38bf74493e3a86e5\" successfully"
Feb 13 19:35:08.686147 containerd[1496]: time="2025-02-13T19:35:08.686091809Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"dca75b6439788db7d359786128b7554681c0a80286036cfa38bf74493e3a86e5\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
Feb 13 19:35:08.686319 containerd[1496]: time="2025-02-13T19:35:08.686162301Z" level=info msg="RemovePodSandbox \"dca75b6439788db7d359786128b7554681c0a80286036cfa38bf74493e3a86e5\" returns successfully"
Feb 13 19:35:08.686696 containerd[1496]: time="2025-02-13T19:35:08.686665897Z" level=info msg="StopPodSandbox for \"4f93ae63a51e7d7b90f83837afcb9cc5720c546a09668848c235c377c73a835d\""
Feb 13 19:35:08.686808 containerd[1496]: time="2025-02-13T19:35:08.686792455Z" level=info msg="TearDown network for sandbox \"4f93ae63a51e7d7b90f83837afcb9cc5720c546a09668848c235c377c73a835d\" successfully"
Feb 13 19:35:08.686839 containerd[1496]: time="2025-02-13T19:35:08.686809036Z" level=info msg="StopPodSandbox for \"4f93ae63a51e7d7b90f83837afcb9cc5720c546a09668848c235c377c73a835d\" returns successfully"
Feb 13 19:35:08.687144 containerd[1496]: time="2025-02-13T19:35:08.687099352Z" level=info msg="RemovePodSandbox for \"4f93ae63a51e7d7b90f83837afcb9cc5720c546a09668848c235c377c73a835d\""
Feb 13 19:35:08.687144 containerd[1496]: time="2025-02-13T19:35:08.687130881Z" level=info msg="Forcibly stopping sandbox \"4f93ae63a51e7d7b90f83837afcb9cc5720c546a09668848c235c377c73a835d\""
Feb 13 19:35:08.687268 containerd[1496]: time="2025-02-13T19:35:08.687214548Z" level=info msg="TearDown network for sandbox \"4f93ae63a51e7d7b90f83837afcb9cc5720c546a09668848c235c377c73a835d\" successfully"
Feb 13 19:35:08.803066 kubelet[1803]: E0213 19:35:08.803005 1803 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 19:35:08.931106 containerd[1496]: time="2025-02-13T19:35:08.931023890Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"4f93ae63a51e7d7b90f83837afcb9cc5720c546a09668848c235c377c73a835d\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
Feb 13 19:35:08.931106 containerd[1496]: time="2025-02-13T19:35:08.931100804Z" level=info msg="RemovePodSandbox \"4f93ae63a51e7d7b90f83837afcb9cc5720c546a09668848c235c377c73a835d\" returns successfully"
Feb 13 19:35:08.931787 containerd[1496]: time="2025-02-13T19:35:08.931730588Z" level=info msg="StopPodSandbox for \"6487c38d3d737762e509c2c676bed16add681d6c50a9ba9df1822a12e95fe016\""
Feb 13 19:35:08.932002 containerd[1496]: time="2025-02-13T19:35:08.931870041Z" level=info msg="TearDown network for sandbox \"6487c38d3d737762e509c2c676bed16add681d6c50a9ba9df1822a12e95fe016\" successfully"
Feb 13 19:35:08.932002 containerd[1496]: time="2025-02-13T19:35:08.931886802Z" level=info msg="StopPodSandbox for \"6487c38d3d737762e509c2c676bed16add681d6c50a9ba9df1822a12e95fe016\" returns successfully"
Feb 13 19:35:08.932229 containerd[1496]: time="2025-02-13T19:35:08.932198768Z" level=info msg="RemovePodSandbox for \"6487c38d3d737762e509c2c676bed16add681d6c50a9ba9df1822a12e95fe016\""
Feb 13 19:35:08.932229 containerd[1496]: time="2025-02-13T19:35:08.932227562Z" level=info msg="Forcibly stopping sandbox \"6487c38d3d737762e509c2c676bed16add681d6c50a9ba9df1822a12e95fe016\""
Feb 13 19:35:08.932385 containerd[1496]: time="2025-02-13T19:35:08.932305889Z" level=info msg="TearDown network for sandbox \"6487c38d3d737762e509c2c676bed16add681d6c50a9ba9df1822a12e95fe016\" successfully"
Feb 13 19:35:09.079219 systemd-networkd[1440]: cali5ec59c6bf6e: Gained IPv6LL
Feb 13 19:35:09.101607 containerd[1496]: time="2025-02-13T19:35:09.101539763Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"6487c38d3d737762e509c2c676bed16add681d6c50a9ba9df1822a12e95fe016\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
Feb 13 19:35:09.101724 containerd[1496]: time="2025-02-13T19:35:09.101617149Z" level=info msg="RemovePodSandbox \"6487c38d3d737762e509c2c676bed16add681d6c50a9ba9df1822a12e95fe016\" returns successfully"
Feb 13 19:35:09.102159 containerd[1496]: time="2025-02-13T19:35:09.102125885Z" level=info msg="StopPodSandbox for \"9d124b1bdd0bfef2b16af274e9fb7d2140f25bc2eecd4f24e9f4718f75274b30\""
Feb 13 19:35:09.102266 containerd[1496]: time="2025-02-13T19:35:09.102244708Z" level=info msg="TearDown network for sandbox \"9d124b1bdd0bfef2b16af274e9fb7d2140f25bc2eecd4f24e9f4718f75274b30\" successfully"
Feb 13 19:35:09.102266 containerd[1496]: time="2025-02-13T19:35:09.102260287Z" level=info msg="StopPodSandbox for \"9d124b1bdd0bfef2b16af274e9fb7d2140f25bc2eecd4f24e9f4718f75274b30\" returns successfully"
Feb 13 19:35:09.102596 containerd[1496]: time="2025-02-13T19:35:09.102563427Z" level=info msg="RemovePodSandbox for \"9d124b1bdd0bfef2b16af274e9fb7d2140f25bc2eecd4f24e9f4718f75274b30\""
Feb 13 19:35:09.102655 containerd[1496]: time="2025-02-13T19:35:09.102595066Z" level=info msg="Forcibly stopping sandbox \"9d124b1bdd0bfef2b16af274e9fb7d2140f25bc2eecd4f24e9f4718f75274b30\""
Feb 13 19:35:09.102725 containerd[1496]: time="2025-02-13T19:35:09.102675117Z" level=info msg="TearDown network for sandbox \"9d124b1bdd0bfef2b16af274e9fb7d2140f25bc2eecd4f24e9f4718f75274b30\" successfully"
Feb 13 19:35:09.117310 containerd[1496]: time="2025-02-13T19:35:09.117250013Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"9d124b1bdd0bfef2b16af274e9fb7d2140f25bc2eecd4f24e9f4718f75274b30\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
Feb 13 19:35:09.117310 containerd[1496]: time="2025-02-13T19:35:09.117310537Z" level=info msg="RemovePodSandbox \"9d124b1bdd0bfef2b16af274e9fb7d2140f25bc2eecd4f24e9f4718f75274b30\" returns successfully"
Feb 13 19:35:09.117804 containerd[1496]: time="2025-02-13T19:35:09.117776753Z" level=info msg="StopPodSandbox for \"d04b37691224bce02a3175bdc4b6a52073600a1dfd7a6cd28f010a1638dd3fd2\""
Feb 13 19:35:09.117915 containerd[1496]: time="2025-02-13T19:35:09.117895265Z" level=info msg="TearDown network for sandbox \"d04b37691224bce02a3175bdc4b6a52073600a1dfd7a6cd28f010a1638dd3fd2\" successfully"
Feb 13 19:35:09.117943 containerd[1496]: time="2025-02-13T19:35:09.117911486Z" level=info msg="StopPodSandbox for \"d04b37691224bce02a3175bdc4b6a52073600a1dfd7a6cd28f010a1638dd3fd2\" returns successfully"
Feb 13 19:35:09.118198 containerd[1496]: time="2025-02-13T19:35:09.118167597Z" level=info msg="RemovePodSandbox for \"d04b37691224bce02a3175bdc4b6a52073600a1dfd7a6cd28f010a1638dd3fd2\""
Feb 13 19:35:09.118248 containerd[1496]: time="2025-02-13T19:35:09.118198465Z" level=info msg="Forcibly stopping sandbox \"d04b37691224bce02a3175bdc4b6a52073600a1dfd7a6cd28f010a1638dd3fd2\""
Feb 13 19:35:09.118348 containerd[1496]: time="2025-02-13T19:35:09.118278184Z" level=info msg="TearDown network for sandbox \"d04b37691224bce02a3175bdc4b6a52073600a1dfd7a6cd28f010a1638dd3fd2\" successfully"
Feb 13 19:35:09.125864 containerd[1496]: time="2025-02-13T19:35:09.125783728Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"d04b37691224bce02a3175bdc4b6a52073600a1dfd7a6cd28f010a1638dd3fd2\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
Feb 13 19:35:09.125864 containerd[1496]: time="2025-02-13T19:35:09.125861373Z" level=info msg="RemovePodSandbox \"d04b37691224bce02a3175bdc4b6a52073600a1dfd7a6cd28f010a1638dd3fd2\" returns successfully"
Feb 13 19:35:09.232433 containerd[1496]: time="2025-02-13T19:35:09.232361155Z" level=info msg="ImageUpdate event name:\"ghcr.io/flatcar/nginx:latest\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Feb 13 19:35:09.235446 containerd[1496]: time="2025-02-13T19:35:09.235365159Z" level=info msg="stop pulling image ghcr.io/flatcar/nginx:latest: active requests=0, bytes read=61"
Feb 13 19:35:09.238409 containerd[1496]: time="2025-02-13T19:35:09.238370557Z" level=info msg="Pulled image \"ghcr.io/flatcar/nginx:latest\" with image id \"sha256:fe94eb5f0c9c8d0ca277aa8cd5940f1faf5970175bf373932babc578545deda8\", repo tag \"ghcr.io/flatcar/nginx:latest\", repo digest \"ghcr.io/flatcar/nginx@sha256:d9bc3da999da9f147f1277c7b18292486847e8f39f95fcf81d914d0c22815faf\", size \"73054371\" in 1.486853066s"
Feb 13 19:35:09.238409 containerd[1496]: time="2025-02-13T19:35:09.238402516Z" level=info msg="PullImage \"ghcr.io/flatcar/nginx:latest\" returns image reference \"sha256:fe94eb5f0c9c8d0ca277aa8cd5940f1faf5970175bf373932babc578545deda8\""
Feb 13 19:35:09.240574 containerd[1496]: time="2025-02-13T19:35:09.240533650Z" level=info msg="CreateContainer within sandbox \"d8809e02da55ee5d421d24bc94be86f9d11226a7c5cf53024d3458cff3a9d216\" for container &ContainerMetadata{Name:test,Attempt:0,}"
Feb 13 19:35:09.307429 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount667651041.mount: Deactivated successfully.
Feb 13 19:35:09.316001 containerd[1496]: time="2025-02-13T19:35:09.315924709Z" level=info msg="CreateContainer within sandbox \"d8809e02da55ee5d421d24bc94be86f9d11226a7c5cf53024d3458cff3a9d216\" for &ContainerMetadata{Name:test,Attempt:0,} returns container id \"58cdcc3cce41e2802f91b5ef3933c60b5cb41da81bc9f42902309d0ced98857f\""
Feb 13 19:35:09.316698 containerd[1496]: time="2025-02-13T19:35:09.316614034Z" level=info msg="StartContainer for \"58cdcc3cce41e2802f91b5ef3933c60b5cb41da81bc9f42902309d0ced98857f\""
Feb 13 19:35:09.352137 systemd[1]: Started cri-containerd-58cdcc3cce41e2802f91b5ef3933c60b5cb41da81bc9f42902309d0ced98857f.scope - libcontainer container 58cdcc3cce41e2802f91b5ef3933c60b5cb41da81bc9f42902309d0ced98857f.
Feb 13 19:35:09.398213 containerd[1496]: time="2025-02-13T19:35:09.398124792Z" level=info msg="StartContainer for \"58cdcc3cce41e2802f91b5ef3933c60b5cb41da81bc9f42902309d0ced98857f\" returns successfully"
Feb 13 19:35:09.803343 kubelet[1803]: E0213 19:35:09.803246 1803 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 19:35:10.080854 kubelet[1803]: I0213 19:35:10.080575 1803 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="default/test-pod-1" podStartSLOduration=19.592485506 podStartE2EDuration="21.080557322s" podCreationTimestamp="2025-02-13 19:34:49 +0000 UTC" firstStartedPulling="2025-02-13 19:35:07.751037989 +0000 UTC m=+60.384527940" lastFinishedPulling="2025-02-13 19:35:09.239109805 +0000 UTC m=+61.872599756" observedRunningTime="2025-02-13 19:35:10.080424962 +0000 UTC m=+62.713914913" watchObservedRunningTime="2025-02-13 19:35:10.080557322 +0000 UTC m=+62.714047273"
Feb 13 19:35:10.804537 kubelet[1803]: E0213 19:35:10.804465 1803 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 19:35:11.804964 kubelet[1803]: E0213 19:35:11.804883 1803 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"