Feb 13 19:51:02.939829 kernel: Linux version 6.6.74-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 13.3.1_p20240614 p1) 13.3.1 20240614, GNU ld (Gentoo 2.42 p6) 2.42.0) #1 SMP PREEMPT_DYNAMIC Thu Feb 13 17:44:05 -00 2025
Feb 13 19:51:02.939852 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=ed9b5d8ea73d2e47b8decea8124089e04dd398ef43013c1b1a5809314044b1c3
Feb 13 19:51:02.939863 kernel: BIOS-provided physical RAM map:
Feb 13 19:51:02.939870 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009ffff] usable
Feb 13 19:51:02.939884 kernel: BIOS-e820: [mem 0x0000000000100000-0x00000000007fffff] usable
Feb 13 19:51:02.939890 kernel: BIOS-e820: [mem 0x0000000000800000-0x0000000000807fff] ACPI NVS
Feb 13 19:51:02.939898 kernel: BIOS-e820: [mem 0x0000000000808000-0x000000000080afff] usable
Feb 13 19:51:02.939905 kernel: BIOS-e820: [mem 0x000000000080b000-0x000000000080bfff] ACPI NVS
Feb 13 19:51:02.939911 kernel: BIOS-e820: [mem 0x000000000080c000-0x0000000000810fff] usable
Feb 13 19:51:02.939917 kernel: BIOS-e820: [mem 0x0000000000811000-0x00000000008fffff] ACPI NVS
Feb 13 19:51:02.939926 kernel: BIOS-e820: [mem 0x0000000000900000-0x000000009bd3efff] usable
Feb 13 19:51:02.939932 kernel: BIOS-e820: [mem 0x000000009bd3f000-0x000000009bdfffff] reserved
Feb 13 19:51:02.939938 kernel: BIOS-e820: [mem 0x000000009be00000-0x000000009c8ecfff] usable
Feb 13 19:51:02.939944 kernel: BIOS-e820: [mem 0x000000009c8ed000-0x000000009cb6cfff] reserved
Feb 13 19:51:02.939952 kernel: BIOS-e820: [mem 0x000000009cb6d000-0x000000009cb7efff] ACPI data
Feb 13 19:51:02.939959 kernel: BIOS-e820: [mem 0x000000009cb7f000-0x000000009cbfefff] ACPI NVS
Feb 13 19:51:02.939968 kernel: BIOS-e820: [mem 0x000000009cbff000-0x000000009ce91fff] usable
Feb 13 19:51:02.939975 kernel: BIOS-e820: [mem 0x000000009ce92000-0x000000009ce95fff] reserved
Feb 13 19:51:02.939981 kernel: BIOS-e820: [mem 0x000000009ce96000-0x000000009ce97fff] ACPI NVS
Feb 13 19:51:02.939988 kernel: BIOS-e820: [mem 0x000000009ce98000-0x000000009cedbfff] usable
Feb 13 19:51:02.939994 kernel: BIOS-e820: [mem 0x000000009cedc000-0x000000009cf5ffff] reserved
Feb 13 19:51:02.940001 kernel: BIOS-e820: [mem 0x000000009cf60000-0x000000009cffffff] ACPI NVS
Feb 13 19:51:02.940007 kernel: BIOS-e820: [mem 0x00000000e0000000-0x00000000efffffff] reserved
Feb 13 19:51:02.940014 kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved
Feb 13 19:51:02.940020 kernel: BIOS-e820: [mem 0x00000000ffc00000-0x00000000ffffffff] reserved
Feb 13 19:51:02.940027 kernel: BIOS-e820: [mem 0x000000fd00000000-0x000000ffffffffff] reserved
Feb 13 19:51:02.940033 kernel: NX (Execute Disable) protection: active
Feb 13 19:51:02.940042 kernel: APIC: Static calls initialized
Feb 13 19:51:02.940049 kernel: e820: update [mem 0x9b351018-0x9b35ac57] usable ==> usable
Feb 13 19:51:02.940069 kernel: e820: update [mem 0x9b351018-0x9b35ac57] usable ==> usable
Feb 13 19:51:02.940077 kernel: e820: update [mem 0x9b314018-0x9b350e57] usable ==> usable
Feb 13 19:51:02.940091 kernel: e820: update [mem 0x9b314018-0x9b350e57] usable ==> usable
Feb 13 19:51:02.940100 kernel: extended physical RAM map:
Feb 13 19:51:02.940110 kernel: reserve setup_data: [mem 0x0000000000000000-0x000000000009ffff] usable
Feb 13 19:51:02.940119 kernel: reserve setup_data: [mem 0x0000000000100000-0x00000000007fffff] usable
Feb 13 19:51:02.940127 kernel: reserve setup_data: [mem 0x0000000000800000-0x0000000000807fff] ACPI NVS
Feb 13 19:51:02.940136 kernel: reserve setup_data: [mem 0x0000000000808000-0x000000000080afff] usable
Feb 13 19:51:02.940145 kernel: reserve setup_data: [mem 0x000000000080b000-0x000000000080bfff] ACPI NVS
Feb 13 19:51:02.940157 kernel: reserve setup_data: [mem 0x000000000080c000-0x0000000000810fff] usable
Feb 13 19:51:02.940164 kernel: reserve setup_data: [mem 0x0000000000811000-0x00000000008fffff] ACPI NVS
Feb 13 19:51:02.940174 kernel: reserve setup_data: [mem 0x0000000000900000-0x000000009b314017] usable
Feb 13 19:51:02.940181 kernel: reserve setup_data: [mem 0x000000009b314018-0x000000009b350e57] usable
Feb 13 19:51:02.940188 kernel: reserve setup_data: [mem 0x000000009b350e58-0x000000009b351017] usable
Feb 13 19:51:02.940195 kernel: reserve setup_data: [mem 0x000000009b351018-0x000000009b35ac57] usable
Feb 13 19:51:02.940202 kernel: reserve setup_data: [mem 0x000000009b35ac58-0x000000009bd3efff] usable
Feb 13 19:51:02.940211 kernel: reserve setup_data: [mem 0x000000009bd3f000-0x000000009bdfffff] reserved
Feb 13 19:51:02.940218 kernel: reserve setup_data: [mem 0x000000009be00000-0x000000009c8ecfff] usable
Feb 13 19:51:02.940225 kernel: reserve setup_data: [mem 0x000000009c8ed000-0x000000009cb6cfff] reserved
Feb 13 19:51:02.940232 kernel: reserve setup_data: [mem 0x000000009cb6d000-0x000000009cb7efff] ACPI data
Feb 13 19:51:02.940239 kernel: reserve setup_data: [mem 0x000000009cb7f000-0x000000009cbfefff] ACPI NVS
Feb 13 19:51:02.940246 kernel: reserve setup_data: [mem 0x000000009cbff000-0x000000009ce91fff] usable
Feb 13 19:51:02.940253 kernel: reserve setup_data: [mem 0x000000009ce92000-0x000000009ce95fff] reserved
Feb 13 19:51:02.940260 kernel: reserve setup_data: [mem 0x000000009ce96000-0x000000009ce97fff] ACPI NVS
Feb 13 19:51:02.940267 kernel: reserve setup_data: [mem 0x000000009ce98000-0x000000009cedbfff] usable
Feb 13 19:51:02.940276 kernel: reserve setup_data: [mem 0x000000009cedc000-0x000000009cf5ffff] reserved
Feb 13 19:51:02.940283 kernel: reserve setup_data: [mem 0x000000009cf60000-0x000000009cffffff] ACPI NVS
Feb 13 19:51:02.940290 kernel: reserve setup_data: [mem 0x00000000e0000000-0x00000000efffffff] reserved
Feb 13 19:51:02.940302 kernel: reserve setup_data: [mem 0x00000000feffc000-0x00000000feffffff] reserved
Feb 13 19:51:02.940316 kernel: reserve setup_data: [mem 0x00000000ffc00000-0x00000000ffffffff] reserved
Feb 13 19:51:02.940326 kernel: reserve setup_data: [mem 0x000000fd00000000-0x000000ffffffffff] reserved
Feb 13 19:51:02.940335 kernel: efi: EFI v2.7 by EDK II
Feb 13 19:51:02.940344 kernel: efi: SMBIOS=0x9c988000 ACPI=0x9cb7e000 ACPI 2.0=0x9cb7e014 MEMATTR=0x9ba0d198 RNG=0x9cb73018
Feb 13 19:51:02.940353 kernel: random: crng init done
Feb 13 19:51:02.940363 kernel: efi: Remove mem142: MMIO range=[0xffc00000-0xffffffff] (4MB) from e820 map
Feb 13 19:51:02.940372 kernel: e820: remove [mem 0xffc00000-0xffffffff] reserved
Feb 13 19:51:02.940384 kernel: secureboot: Secure boot disabled
Feb 13 19:51:02.940391 kernel: SMBIOS 2.8 present.
Feb 13 19:51:02.940398 kernel: DMI: QEMU Standard PC (Q35 + ICH9, 2009), BIOS unknown 02/02/2022
Feb 13 19:51:02.940405 kernel: Hypervisor detected: KVM
Feb 13 19:51:02.940412 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00
Feb 13 19:51:02.940419 kernel: kvm-clock: using sched offset of 2798982248 cycles
Feb 13 19:51:02.940426 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
Feb 13 19:51:02.940434 kernel: tsc: Detected 2794.750 MHz processor
Feb 13 19:51:02.940441 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
Feb 13 19:51:02.940449 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
Feb 13 19:51:02.940456 kernel: last_pfn = 0x9cedc max_arch_pfn = 0x400000000
Feb 13 19:51:02.940465 kernel: MTRR map: 4 entries (2 fixed + 2 variable; max 18), built from 8 variable MTRRs
Feb 13 19:51:02.940473 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
Feb 13 19:51:02.940480 kernel: Using GB pages for direct mapping
Feb 13 19:51:02.940487 kernel: ACPI: Early table checksum verification disabled
Feb 13 19:51:02.940494 kernel: ACPI: RSDP 0x000000009CB7E014 000024 (v02 BOCHS )
Feb 13 19:51:02.940501 kernel: ACPI: XSDT 0x000000009CB7D0E8 000054 (v01 BOCHS BXPC 00000001 01000013)
Feb 13 19:51:02.940509 kernel: ACPI: FACP 0x000000009CB79000 0000F4 (v03 BOCHS BXPC 00000001 BXPC 00000001)
Feb 13 19:51:02.940516 kernel: ACPI: DSDT 0x000000009CB7A000 0021A8 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Feb 13 19:51:02.940523 kernel: ACPI: FACS 0x000000009CBDD000 000040
Feb 13 19:51:02.940532 kernel: ACPI: APIC 0x000000009CB78000 000090 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Feb 13 19:51:02.940540 kernel: ACPI: HPET 0x000000009CB77000 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Feb 13 19:51:02.940547 kernel: ACPI: MCFG 0x000000009CB76000 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001)
Feb 13 19:51:02.940555 kernel: ACPI: WAET 0x000000009CB75000 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Feb 13 19:51:02.940564 kernel: ACPI: BGRT 0x000000009CB74000 000038 (v01 INTEL EDK2 00000002 01000013)
Feb 13 19:51:02.940573 kernel: ACPI: Reserving FACP table memory at [mem 0x9cb79000-0x9cb790f3]
Feb 13 19:51:02.940582 kernel: ACPI: Reserving DSDT table memory at [mem 0x9cb7a000-0x9cb7c1a7]
Feb 13 19:51:02.940591 kernel: ACPI: Reserving FACS table memory at [mem 0x9cbdd000-0x9cbdd03f]
Feb 13 19:51:02.940603 kernel: ACPI: Reserving APIC table memory at [mem 0x9cb78000-0x9cb7808f]
Feb 13 19:51:02.940611 kernel: ACPI: Reserving HPET table memory at [mem 0x9cb77000-0x9cb77037]
Feb 13 19:51:02.940620 kernel: ACPI: Reserving MCFG table memory at [mem 0x9cb76000-0x9cb7603b]
Feb 13 19:51:02.940629 kernel: ACPI: Reserving WAET table memory at [mem 0x9cb75000-0x9cb75027]
Feb 13 19:51:02.940638 kernel: ACPI: Reserving BGRT table memory at [mem 0x9cb74000-0x9cb74037]
Feb 13 19:51:02.940647 kernel: No NUMA configuration found
Feb 13 19:51:02.940655 kernel: Faking a node at [mem 0x0000000000000000-0x000000009cedbfff]
Feb 13 19:51:02.940664 kernel: NODE_DATA(0) allocated [mem 0x9ce3a000-0x9ce3ffff]
Feb 13 19:51:02.940673 kernel: Zone ranges:
Feb 13 19:51:02.940682 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff]
Feb 13 19:51:02.940691 kernel: DMA32 [mem 0x0000000001000000-0x000000009cedbfff]
Feb 13 19:51:02.940698 kernel: Normal empty
Feb 13 19:51:02.940705 kernel: Movable zone start for each node
Feb 13 19:51:02.940712 kernel: Early memory node ranges
Feb 13 19:51:02.940719 kernel: node 0: [mem 0x0000000000001000-0x000000000009ffff]
Feb 13 19:51:02.940726 kernel: node 0: [mem 0x0000000000100000-0x00000000007fffff]
Feb 13 19:51:02.940733 kernel: node 0: [mem 0x0000000000808000-0x000000000080afff]
Feb 13 19:51:02.940740 kernel: node 0: [mem 0x000000000080c000-0x0000000000810fff]
Feb 13 19:51:02.940747 kernel: node 0: [mem 0x0000000000900000-0x000000009bd3efff]
Feb 13 19:51:02.940757 kernel: node 0: [mem 0x000000009be00000-0x000000009c8ecfff]
Feb 13 19:51:02.940764 kernel: node 0: [mem 0x000000009cbff000-0x000000009ce91fff]
Feb 13 19:51:02.940771 kernel: node 0: [mem 0x000000009ce98000-0x000000009cedbfff]
Feb 13 19:51:02.940778 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000009cedbfff]
Feb 13 19:51:02.940785 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Feb 13 19:51:02.940792 kernel: On node 0, zone DMA: 96 pages in unavailable ranges
Feb 13 19:51:02.940806 kernel: On node 0, zone DMA: 8 pages in unavailable ranges
Feb 13 19:51:02.940816 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Feb 13 19:51:02.940823 kernel: On node 0, zone DMA: 239 pages in unavailable ranges
Feb 13 19:51:02.940831 kernel: On node 0, zone DMA32: 193 pages in unavailable ranges
Feb 13 19:51:02.940838 kernel: On node 0, zone DMA32: 786 pages in unavailable ranges
Feb 13 19:51:02.940845 kernel: On node 0, zone DMA32: 6 pages in unavailable ranges
Feb 13 19:51:02.940855 kernel: On node 0, zone DMA32: 12580 pages in unavailable ranges
Feb 13 19:51:02.940862 kernel: ACPI: PM-Timer IO Port: 0x608
Feb 13 19:51:02.940869 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1])
Feb 13 19:51:02.940885 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23
Feb 13 19:51:02.940893 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl)
Feb 13 19:51:02.940902 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level)
Feb 13 19:51:02.940910 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Feb 13 19:51:02.940917 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level)
Feb 13 19:51:02.940925 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level)
Feb 13 19:51:02.940932 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Feb 13 19:51:02.940939 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000
Feb 13 19:51:02.940947 kernel: TSC deadline timer available
Feb 13 19:51:02.940954 kernel: smpboot: Allowing 4 CPUs, 0 hotplug CPUs
Feb 13 19:51:02.940961 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write()
Feb 13 19:51:02.940971 kernel: kvm-guest: KVM setup pv remote TLB flush
Feb 13 19:51:02.940978 kernel: kvm-guest: setup PV sched yield
Feb 13 19:51:02.940985 kernel: [mem 0x9d000000-0xdfffffff] available for PCI devices
Feb 13 19:51:02.940993 kernel: Booting paravirtualized kernel on KVM
Feb 13 19:51:02.941000 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Feb 13 19:51:02.941008 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:4 nr_cpu_ids:4 nr_node_ids:1
Feb 13 19:51:02.941015 kernel: percpu: Embedded 58 pages/cpu s197032 r8192 d32344 u524288
Feb 13 19:51:02.941023 kernel: pcpu-alloc: s197032 r8192 d32344 u524288 alloc=1*2097152
Feb 13 19:51:02.941030 kernel: pcpu-alloc: [0] 0 1 2 3
Feb 13 19:51:02.941039 kernel: kvm-guest: PV spinlocks enabled
Feb 13 19:51:02.941047 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear)
Feb 13 19:51:02.941069 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=ed9b5d8ea73d2e47b8decea8124089e04dd398ef43013c1b1a5809314044b1c3
Feb 13 19:51:02.941077 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Feb 13 19:51:02.941084 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Feb 13 19:51:02.941092 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Feb 13 19:51:02.941099 kernel: Fallback order for Node 0: 0
Feb 13 19:51:02.941107 kernel: Built 1 zonelists, mobility grouping on. Total pages: 629460
Feb 13 19:51:02.941114 kernel: Policy zone: DMA32
Feb 13 19:51:02.941124 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Feb 13 19:51:02.941132 kernel: Memory: 2389768K/2565800K available (12288K kernel code, 2301K rwdata, 22736K rodata, 42976K init, 2216K bss, 175776K reserved, 0K cma-reserved)
Feb 13 19:51:02.941140 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1
Feb 13 19:51:02.941147 kernel: ftrace: allocating 37923 entries in 149 pages
Feb 13 19:51:02.941154 kernel: ftrace: allocated 149 pages with 4 groups
Feb 13 19:51:02.941162 kernel: Dynamic Preempt: voluntary
Feb 13 19:51:02.941169 kernel: rcu: Preemptible hierarchical RCU implementation.
Feb 13 19:51:02.941178 kernel: rcu: RCU event tracing is enabled.
Feb 13 19:51:02.941193 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=4.
Feb 13 19:51:02.941212 kernel: Trampoline variant of Tasks RCU enabled.
Feb 13 19:51:02.941222 kernel: Rude variant of Tasks RCU enabled.
Feb 13 19:51:02.941231 kernel: Tracing variant of Tasks RCU enabled.
Feb 13 19:51:02.941242 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Feb 13 19:51:02.941252 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4
Feb 13 19:51:02.941260 kernel: NR_IRQS: 33024, nr_irqs: 456, preallocated irqs: 16
Feb 13 19:51:02.941271 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Feb 13 19:51:02.941281 kernel: Console: colour dummy device 80x25
Feb 13 19:51:02.941291 kernel: printk: console [ttyS0] enabled
Feb 13 19:51:02.941305 kernel: ACPI: Core revision 20230628
Feb 13 19:51:02.941314 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 19112604467 ns
Feb 13 19:51:02.941321 kernel: APIC: Switch to symmetric I/O mode setup
Feb 13 19:51:02.941329 kernel: x2apic enabled
Feb 13 19:51:02.941336 kernel: APIC: Switched APIC routing to: physical x2apic
Feb 13 19:51:02.941344 kernel: kvm-guest: APIC: send_IPI_mask() replaced with kvm_send_ipi_mask()
Feb 13 19:51:02.941351 kernel: kvm-guest: APIC: send_IPI_mask_allbutself() replaced with kvm_send_ipi_mask_allbutself()
Feb 13 19:51:02.941359 kernel: kvm-guest: setup PV IPIs
Feb 13 19:51:02.941366 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1
Feb 13 19:51:02.941376 kernel: tsc: Marking TSC unstable due to TSCs unsynchronized
Feb 13 19:51:02.941383 kernel: Calibrating delay loop (skipped) preset value.. 5589.50 BogoMIPS (lpj=2794750)
Feb 13 19:51:02.941391 kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated
Feb 13 19:51:02.941398 kernel: Last level iTLB entries: 4KB 512, 2MB 255, 4MB 127
Feb 13 19:51:02.941406 kernel: Last level dTLB entries: 4KB 512, 2MB 255, 4MB 127, 1GB 0
Feb 13 19:51:02.941413 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Feb 13 19:51:02.941420 kernel: Spectre V2 : Mitigation: Retpolines
Feb 13 19:51:02.941428 kernel: Spectre V2 : Spectre v2 / SpectreRSB mitigation: Filling RSB on context switch
Feb 13 19:51:02.941435 kernel: Spectre V2 : Spectre v2 / SpectreRSB : Filling RSB on VMEXIT
Feb 13 19:51:02.941445 kernel: Spectre V2 : Enabling Speculation Barrier for firmware calls
Feb 13 19:51:02.941452 kernel: RETBleed: Mitigation: untrained return thunk
Feb 13 19:51:02.941460 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier
Feb 13 19:51:02.941468 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl
Feb 13 19:51:02.941475 kernel: Speculative Return Stack Overflow: IBPB-extending microcode not applied!
Feb 13 19:51:02.941483 kernel: Speculative Return Stack Overflow: WARNING: See https://kernel.org/doc/html/latest/admin-guide/hw-vuln/srso.html for mitigation options.
Feb 13 19:51:02.941491 kernel: Speculative Return Stack Overflow: Vulnerable: Safe RET, no microcode
Feb 13 19:51:02.941498 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Feb 13 19:51:02.941508 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Feb 13 19:51:02.941515 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Feb 13 19:51:02.941523 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256
Feb 13 19:51:02.941530 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'compacted' format.
Feb 13 19:51:02.941538 kernel: Freeing SMP alternatives memory: 32K
Feb 13 19:51:02.941545 kernel: pid_max: default: 32768 minimum: 301
Feb 13 19:51:02.941552 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity
Feb 13 19:51:02.941560 kernel: landlock: Up and running.
Feb 13 19:51:02.941567 kernel: SELinux: Initializing.
Feb 13 19:51:02.941577 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Feb 13 19:51:02.941584 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Feb 13 19:51:02.941592 kernel: smpboot: CPU0: AMD EPYC 7402P 24-Core Processor (family: 0x17, model: 0x31, stepping: 0x0)
Feb 13 19:51:02.941599 kernel: RCU Tasks: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Feb 13 19:51:02.941607 kernel: RCU Tasks Rude: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Feb 13 19:51:02.941614 kernel: RCU Tasks Trace: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Feb 13 19:51:02.941622 kernel: Performance Events: Fam17h+ core perfctr, AMD PMU driver.
Feb 13 19:51:02.941629 kernel: ... version: 0
Feb 13 19:51:02.941637 kernel: ... bit width: 48
Feb 13 19:51:02.941646 kernel: ... generic registers: 6
Feb 13 19:51:02.941654 kernel: ... value mask: 0000ffffffffffff
Feb 13 19:51:02.941663 kernel: ... max period: 00007fffffffffff
Feb 13 19:51:02.941671 kernel: ... fixed-purpose events: 0
Feb 13 19:51:02.941678 kernel: ... event mask: 000000000000003f
Feb 13 19:51:02.941686 kernel: signal: max sigframe size: 1776
Feb 13 19:51:02.941696 kernel: rcu: Hierarchical SRCU implementation.
Feb 13 19:51:02.941704 kernel: rcu: Max phase no-delay instances is 400.
Feb 13 19:51:02.941713 kernel: smp: Bringing up secondary CPUs ...
Feb 13 19:51:02.941723 kernel: smpboot: x86: Booting SMP configuration:
Feb 13 19:51:02.941731 kernel: .... node #0, CPUs: #1 #2 #3
Feb 13 19:51:02.941738 kernel: smp: Brought up 1 node, 4 CPUs
Feb 13 19:51:02.941745 kernel: smpboot: Max logical packages: 1
Feb 13 19:51:02.941753 kernel: smpboot: Total of 4 processors activated (22358.00 BogoMIPS)
Feb 13 19:51:02.941760 kernel: devtmpfs: initialized
Feb 13 19:51:02.941767 kernel: x86/mm: Memory block size: 128MB
Feb 13 19:51:02.941775 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x00800000-0x00807fff] (32768 bytes)
Feb 13 19:51:02.941783 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x0080b000-0x0080bfff] (4096 bytes)
Feb 13 19:51:02.941792 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x00811000-0x008fffff] (978944 bytes)
Feb 13 19:51:02.941800 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x9cb7f000-0x9cbfefff] (524288 bytes)
Feb 13 19:51:02.941808 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x9ce96000-0x9ce97fff] (8192 bytes)
Feb 13 19:51:02.941815 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x9cf60000-0x9cffffff] (655360 bytes)
Feb 13 19:51:02.941823 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Feb 13 19:51:02.941830 kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear)
Feb 13 19:51:02.941838 kernel: pinctrl core: initialized pinctrl subsystem
Feb 13 19:51:02.941845 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Feb 13 19:51:02.941853 kernel: audit: initializing netlink subsys (disabled)
Feb 13 19:51:02.941863 kernel: audit: type=2000 audit(1739476262.918:1): state=initialized audit_enabled=0 res=1
Feb 13 19:51:02.941870 kernel: thermal_sys: Registered thermal governor 'step_wise'
Feb 13 19:51:02.941886 kernel: thermal_sys: Registered thermal governor 'user_space'
Feb 13 19:51:02.941894 kernel: cpuidle: using governor menu
Feb 13 19:51:02.941902 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Feb 13 19:51:02.941909 kernel: dca service started, version 1.12.1
Feb 13 19:51:02.941917 kernel: PCI: MMCONFIG for domain 0000 [bus 00-ff] at [mem 0xe0000000-0xefffffff] (base 0xe0000000)
Feb 13 19:51:02.941924 kernel: PCI: Using configuration type 1 for base access
Feb 13 19:51:02.941932 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
Feb 13 19:51:02.941942 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Feb 13 19:51:02.941949 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page
Feb 13 19:51:02.941957 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Feb 13 19:51:02.941964 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page
Feb 13 19:51:02.941972 kernel: ACPI: Added _OSI(Module Device)
Feb 13 19:51:02.941979 kernel: ACPI: Added _OSI(Processor Device)
Feb 13 19:51:02.941987 kernel: ACPI: Added _OSI(3.0 _SCP Extensions)
Feb 13 19:51:02.941994 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Feb 13 19:51:02.942001 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Feb 13 19:51:02.942011 kernel: ACPI: _OSC evaluation for CPUs failed, trying _PDC
Feb 13 19:51:02.942019 kernel: ACPI: Interpreter enabled
Feb 13 19:51:02.942026 kernel: ACPI: PM: (supports S0 S3 S5)
Feb 13 19:51:02.942033 kernel: ACPI: Using IOAPIC for interrupt routing
Feb 13 19:51:02.942041 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Feb 13 19:51:02.942048 kernel: PCI: Using E820 reservations for host bridge windows
Feb 13 19:51:02.942069 kernel: ACPI: Enabled 2 GPEs in block 00 to 3F
Feb 13 19:51:02.942076 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Feb 13 19:51:02.942285 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
Feb 13 19:51:02.942453 kernel: acpi PNP0A08:00: _OSC: platform does not support [PCIeHotplug LTR]
Feb 13 19:51:02.942583 kernel: acpi PNP0A08:00: _OSC: OS now controls [PME AER PCIeCapability]
Feb 13 19:51:02.942594 kernel: PCI host bridge to bus 0000:00
Feb 13 19:51:02.942721 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window]
Feb 13 19:51:02.942835 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window]
Feb 13 19:51:02.942958 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
Feb 13 19:51:02.943090 kernel: pci_bus 0000:00: root bus resource [mem 0x9d000000-0xdfffffff window]
Feb 13 19:51:02.943204 kernel: pci_bus 0000:00: root bus resource [mem 0xf0000000-0xfebfffff window]
Feb 13 19:51:02.943314 kernel: pci_bus 0000:00: root bus resource [mem 0x380000000000-0x3807ffffffff window]
Feb 13 19:51:02.943424 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Feb 13 19:51:02.943565 kernel: pci 0000:00:00.0: [8086:29c0] type 00 class 0x060000
Feb 13 19:51:02.943705 kernel: pci 0000:00:01.0: [1234:1111] type 00 class 0x030000
Feb 13 19:51:02.943834 kernel: pci 0000:00:01.0: reg 0x10: [mem 0xc0000000-0xc0ffffff pref]
Feb 13 19:51:02.943966 kernel: pci 0000:00:01.0: reg 0x18: [mem 0xc1044000-0xc1044fff]
Feb 13 19:51:02.944102 kernel: pci 0000:00:01.0: reg 0x30: [mem 0xffff0000-0xffffffff pref]
Feb 13 19:51:02.944225 kernel: pci 0000:00:01.0: BAR 0: assigned to efifb
Feb 13 19:51:02.944348 kernel: pci 0000:00:01.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff]
Feb 13 19:51:02.944515 kernel: pci 0000:00:02.0: [1af4:1005] type 00 class 0x00ff00
Feb 13 19:51:02.944659 kernel: pci 0000:00:02.0: reg 0x10: [io 0x6100-0x611f]
Feb 13 19:51:02.944808 kernel: pci 0000:00:02.0: reg 0x14: [mem 0xc1043000-0xc1043fff]
Feb 13 19:51:02.944943 kernel: pci 0000:00:02.0: reg 0x20: [mem 0x380000000000-0x380000003fff 64bit pref]
Feb 13 19:51:02.945093 kernel: pci 0000:00:03.0: [1af4:1001] type 00 class 0x010000
Feb 13 19:51:02.945233 kernel: pci 0000:00:03.0: reg 0x10: [io 0x6000-0x607f]
Feb 13 19:51:02.945370 kernel: pci 0000:00:03.0: reg 0x14: [mem 0xc1042000-0xc1042fff]
Feb 13 19:51:02.945493 kernel: pci 0000:00:03.0: reg 0x20: [mem 0x380000004000-0x380000007fff 64bit pref]
Feb 13 19:51:02.945624 kernel: pci 0000:00:04.0: [1af4:1000] type 00 class 0x020000
Feb 13 19:51:02.945798 kernel: pci 0000:00:04.0: reg 0x10: [io 0x60e0-0x60ff]
Feb 13 19:51:02.945963 kernel: pci 0000:00:04.0: reg 0x14: [mem 0xc1041000-0xc1041fff]
Feb 13 19:51:02.946102 kernel: pci 0000:00:04.0: reg 0x20: [mem 0x380000008000-0x38000000bfff 64bit pref]
Feb 13 19:51:02.946225 kernel: pci 0000:00:04.0: reg 0x30: [mem 0xfffc0000-0xffffffff pref]
Feb 13 19:51:02.946355 kernel: pci 0000:00:1f.0: [8086:2918] type 00 class 0x060100
Feb 13 19:51:02.946478 kernel: pci 0000:00:1f.0: quirk: [io 0x0600-0x067f] claimed by ICH6 ACPI/GPIO/TCO
Feb 13 19:51:02.946618 kernel: pci 0000:00:1f.2: [8086:2922] type 00 class 0x010601
Feb 13 19:51:02.946761 kernel: pci 0000:00:1f.2: reg 0x20: [io 0x60c0-0x60df]
Feb 13 19:51:02.946889 kernel: pci 0000:00:1f.2: reg 0x24: [mem 0xc1040000-0xc1040fff]
Feb 13 19:51:02.947123 kernel: pci 0000:00:1f.3: [8086:2930] type 00 class 0x0c0500
Feb 13 19:51:02.947314 kernel: pci 0000:00:1f.3: reg 0x20: [io 0x6080-0x60bf]
Feb 13 19:51:02.947326 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10
Feb 13 19:51:02.947334 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10
Feb 13 19:51:02.947342 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11
Feb 13 19:51:02.947354 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11
Feb 13 19:51:02.947362 kernel: ACPI: PCI: Interrupt link LNKE configured for IRQ 10
Feb 13 19:51:02.947370 kernel: ACPI: PCI: Interrupt link LNKF configured for IRQ 10
Feb 13 19:51:02.947377 kernel: ACPI: PCI: Interrupt link LNKG configured for IRQ 11
Feb 13 19:51:02.947385 kernel: ACPI: PCI: Interrupt link LNKH configured for IRQ 11
Feb 13 19:51:02.947392 kernel: ACPI: PCI: Interrupt link GSIA configured for IRQ 16
Feb 13 19:51:02.947400 kernel: ACPI: PCI: Interrupt link GSIB configured for IRQ 17
Feb 13 19:51:02.947408 kernel: ACPI: PCI: Interrupt link GSIC configured for IRQ 18
Feb 13 19:51:02.947415 kernel: ACPI: PCI: Interrupt link GSID configured for IRQ 19
Feb 13 19:51:02.947425 kernel: ACPI: PCI: Interrupt link GSIE configured for IRQ 20
Feb 13 19:51:02.947433 kernel: ACPI: PCI: Interrupt link GSIF configured for IRQ 21
Feb 13 19:51:02.947440 kernel: ACPI: PCI: Interrupt link GSIG configured for IRQ 22
Feb 13 19:51:02.947447 kernel: ACPI: PCI: Interrupt link GSIH configured for IRQ 23
Feb 13 19:51:02.947455 kernel: iommu: Default domain type: Translated
Feb 13 19:51:02.947462 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
Feb 13 19:51:02.947470 kernel: efivars: Registered efivars operations
Feb 13 19:51:02.947477 kernel: PCI: Using ACPI for IRQ routing
Feb 13 19:51:02.947485 kernel: PCI: pci_cache_line_size set to 64 bytes
Feb 13 19:51:02.947496 kernel: e820: reserve RAM buffer [mem 0x0080b000-0x008fffff]
Feb 13 19:51:02.947503 kernel: e820: reserve RAM buffer [mem 0x00811000-0x008fffff]
Feb 13 19:51:02.947511 kernel: e820: reserve RAM buffer [mem 0x9b314018-0x9bffffff]
Feb 13 19:51:02.947519 kernel: e820: reserve RAM buffer [mem 0x9b351018-0x9bffffff]
Feb 13 19:51:02.947526 kernel: e820: reserve RAM buffer [mem 0x9bd3f000-0x9bffffff]
Feb 13 19:51:02.947534 kernel: e820: reserve RAM buffer [mem 0x9c8ed000-0x9fffffff]
Feb 13 19:51:02.947541 kernel: e820: reserve RAM buffer [mem 0x9ce92000-0x9fffffff]
Feb 13 19:51:02.947549 kernel: e820: reserve RAM buffer [mem 0x9cedc000-0x9fffffff]
Feb 13 19:51:02.947672 kernel: pci 0000:00:01.0: vgaarb: setting as boot VGA device
Feb 13 19:51:02.947804 kernel: pci 0000:00:01.0: vgaarb: bridge control possible
Feb 13 19:51:02.947936 kernel: pci 0000:00:01.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none
Feb 13 19:51:02.947947 kernel: vgaarb: loaded
Feb 13 19:51:02.947955 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0
Feb 13 19:51:02.947963 kernel: hpet0: 3 comparators, 64-bit 100.000000 MHz counter
Feb 13 19:51:02.947971 kernel: clocksource: Switched to clocksource kvm-clock
Feb 13 19:51:02.947979 kernel: VFS: Disk quotas dquot_6.6.0
Feb 13 19:51:02.947987 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Feb 13 19:51:02.947998 kernel: pnp: PnP ACPI init
Feb 13 19:51:02.948160 kernel: system 00:05: [mem 0xe0000000-0xefffffff window] has been reserved
Feb 13 19:51:02.948172 kernel: pnp: PnP ACPI: found 6 devices
Feb 13 19:51:02.948181 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Feb 13 19:51:02.948189 kernel: NET: Registered PF_INET protocol family
Feb 13 19:51:02.948216 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
Feb 13 19:51:02.948227 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
Feb 13 19:51:02.948235 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Feb 13 19:51:02.948246 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
Feb 13 19:51:02.948254 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear)
Feb 13 19:51:02.948262 kernel: TCP: Hash tables configured (established 32768 bind 32768)
Feb 13 19:51:02.948270 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
Feb 13 19:51:02.948279 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
Feb 13 19:51:02.948287 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Feb 13 19:51:02.948295 kernel: NET: Registered PF_XDP protocol family
Feb 13 19:51:02.948420 kernel: pci 0000:00:04.0: can't claim BAR 6 [mem 0xfffc0000-0xffffffff pref]: no compatible bridge window
Feb 13 19:51:02.948544 kernel: pci 0000:00:04.0: BAR 6: assigned [mem 0x9d000000-0x9d03ffff pref]
Feb 13 19:51:02.948663 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window]
Feb 13 19:51:02.948775 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window]
Feb 13 19:51:02.948902 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window]
Feb 13 19:51:02.949110 kernel: pci_bus 0000:00: resource 7 [mem 0x9d000000-0xdfffffff window]
Feb 13 19:51:02.949224 kernel: pci_bus 0000:00: resource 8 [mem 0xf0000000-0xfebfffff window]
Feb 13 19:51:02.949342 kernel: pci_bus 0000:00: resource 9 [mem 0x380000000000-0x3807ffffffff window]
Feb 13 19:51:02.949353 kernel: PCI: CLS 0 bytes, default 64
Feb 13 19:51:02.949362 kernel: Initialise system trusted keyrings
Feb 13 19:51:02.949374 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
Feb 13 19:51:02.949382 kernel: Key type asymmetric registered
Feb 13 19:51:02.949391 kernel: Asymmetric key parser 'x509' registered
Feb 13 19:51:02.949399 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 251)
Feb 13 19:51:02.949417 kernel: io scheduler mq-deadline registered
Feb 13 19:51:02.949425 kernel: io scheduler kyber registered
Feb 13 19:51:02.949433 kernel: io scheduler bfq registered
Feb 13 19:51:02.949449 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00
Feb 13 19:51:02.949458 kernel: ACPI: \_SB_.GSIG: Enabled at IRQ 22
Feb 13 19:51:02.949470 kernel: ACPI: \_SB_.GSIH: Enabled at IRQ 23
Feb 13 19:51:02.949480 kernel: ACPI: \_SB_.GSIE: Enabled at IRQ 20
Feb 13 19:51:02.949489 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Feb 13 19:51:02.949497 kernel: 00:03: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
Feb 13 19:51:02.949505 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12
Feb 13 19:51:02.949514 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1
Feb 13 19:51:02.949524 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12
Feb 13 19:51:02.949657 kernel: rtc_cmos 00:04: RTC can wake from S4
Feb 13 19:51:02.949669 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0
Feb 13 19:51:02.949783 kernel: rtc_cmos 00:04: registered as rtc0
Feb 13 19:51:02.949909 kernel: rtc_cmos 00:04: setting system clock to 2025-02-13T19:51:02 UTC (1739476262)
Feb 13 19:51:02.950025 kernel: rtc_cmos 00:04: alarms up to one day, y3k, 242 bytes nvram
Feb 13 19:51:02.950036 kernel: amd_pstate: the _CPC object is not present in SBIOS or ACPI disabled
Feb 13 19:51:02.950048 kernel: efifb: probing for efifb
Feb 13 19:51:02.950072 kernel: efifb: framebuffer at 0xc0000000, using 4000k, total 4000k
Feb 13 19:51:02.950080 kernel: efifb: mode is 1280x800x32, linelength=5120, pages=1
Feb 13 19:51:02.950089 kernel: efifb: scrolling: redraw
Feb 13 19:51:02.950097 kernel: efifb: Truecolor: size=8:8:8:8, shift=24:16:8:0
Feb 13 19:51:02.950105 kernel: Console: switching to colour frame buffer device 160x50
Feb 13 19:51:02.950113 kernel: fb0: EFI VGA frame buffer device
Feb 13 19:51:02.950121 kernel: pstore: Using crash dump compression: deflate
Feb 13 19:51:02.950130 kernel: pstore: Registered efi_pstore as persistent store backend
Feb 13 19:51:02.950138 kernel: NET: Registered PF_INET6 protocol family
Feb 13 19:51:02.950149 kernel: Segment Routing with IPv6
Feb 13 19:51:02.950157 kernel: In-situ OAM (IOAM) with IPv6
Feb 13 19:51:02.950165 kernel: NET: Registered PF_PACKET protocol family
Feb 13 19:51:02.950173 kernel: Key type dns_resolver registered
Feb 13 19:51:02.950181 kernel: IPI shorthand broadcast: enabled
Feb 13 19:51:02.950189 kernel: sched_clock: Marking stable (663060768, 164057834)->(848300504, -21181902)
Feb 13 19:51:02.950198 kernel: registered taskstats version 1
Feb 13 19:51:02.950206 kernel: Loading compiled-in X.509 certificates
Feb 13 19:51:02.950215 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.74-flatcar: 0cc219a306b9e46e583adebba1820decbdc4307b'
Feb 13 19:51:02.950225 kernel: Key type .fscrypt registered
Feb 13 19:51:02.950233 kernel: Key type fscrypt-provisioning registered
Feb 13 19:51:02.950241 kernel: ima: No TPM chip found, activating TPM-bypass!
Feb 13 19:51:02.950249 kernel: ima: Allocated hash algorithm: sha1
Feb 13 19:51:02.950257 kernel: ima: No architecture policies found
Feb 13 19:51:02.950265 kernel: clk: Disabling unused clocks
Feb 13 19:51:02.950274 kernel: Freeing unused kernel image (initmem) memory: 42976K
Feb 13 19:51:02.950282 kernel: Write protecting the kernel read-only data: 36864k
Feb 13 19:51:02.950293 kernel: Freeing unused kernel image (rodata/data gap) memory: 1840K
Feb 13 19:51:02.950301 kernel: Run /init as init process
Feb 13 19:51:02.950309 kernel: with arguments:
Feb 13 19:51:02.950317 kernel: /init
Feb 13 19:51:02.950325 kernel: with environment:
Feb 13 19:51:02.950333 kernel: HOME=/
Feb 13 19:51:02.950341 kernel: TERM=linux
Feb 13 19:51:02.950349 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a
Feb 13 19:51:02.950359 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Feb 13 19:51:02.950373 systemd[1]: Detected virtualization kvm.
Feb 13 19:51:02.950382 systemd[1]: Detected architecture x86-64.
Feb 13 19:51:02.950390 systemd[1]: Running in initrd.
Feb 13 19:51:02.950400 systemd[1]: No hostname configured, using default hostname.
Feb 13 19:51:02.950419 systemd[1]: Hostname set to .
Feb 13 19:51:02.950432 systemd[1]: Initializing machine ID from VM UUID.
Feb 13 19:51:02.950442 systemd[1]: Queued start job for default target initrd.target.
Feb 13 19:51:02.950451 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Feb 13 19:51:02.950463 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Feb 13 19:51:02.950473 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Feb 13 19:51:02.950482 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Feb 13 19:51:02.950490 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Feb 13 19:51:02.950499 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Feb 13 19:51:02.950509 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Feb 13 19:51:02.950520 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Feb 13 19:51:02.950529 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Feb 13 19:51:02.950537 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Feb 13 19:51:02.950546 systemd[1]: Reached target paths.target - Path Units.
Feb 13 19:51:02.950554 systemd[1]: Reached target slices.target - Slice Units.
Feb 13 19:51:02.950562 systemd[1]: Reached target swap.target - Swaps.
Feb 13 19:51:02.950571 systemd[1]: Reached target timers.target - Timer Units.
Feb 13 19:51:02.950579 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Feb 13 19:51:02.950588 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Feb 13 19:51:02.950598 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Feb 13 19:51:02.950607 systemd[1]: Listening on systemd-journald.socket - Journal Socket.
Feb 13 19:51:02.950615 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Feb 13 19:51:02.950624 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Feb 13 19:51:02.950634 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Feb 13 19:51:02.950643 systemd[1]: Reached target sockets.target - Socket Units.
Feb 13 19:51:02.950651 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Feb 13 19:51:02.950660 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Feb 13 19:51:02.950670 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Feb 13 19:51:02.950679 systemd[1]: Starting systemd-fsck-usr.service...
Feb 13 19:51:02.950687 systemd[1]: Starting systemd-journald.service - Journal Service...
Feb 13 19:51:02.950695 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Feb 13 19:51:02.950703 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Feb 13 19:51:02.950712 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Feb 13 19:51:02.950720 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Feb 13 19:51:02.950728 systemd[1]: Finished systemd-fsck-usr.service.
Feb 13 19:51:02.950764 systemd-journald[194]: Collecting audit messages is disabled.
Feb 13 19:51:02.950788 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Feb 13 19:51:02.950797 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Feb 13 19:51:02.950805 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Feb 13 19:51:02.950814 systemd-journald[194]: Journal started
Feb 13 19:51:02.950833 systemd-journald[194]: Runtime Journal (/run/log/journal/3592ed6681844ec29799d2a2fac47f41) is 6.0M, max 48.3M, 42.2M free.
Feb 13 19:51:02.949495 systemd-modules-load[195]: Inserted module 'overlay'
Feb 13 19:51:02.955115 systemd[1]: Started systemd-journald.service - Journal Service.
Feb 13 19:51:02.955567 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Feb 13 19:51:02.961493 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Feb 13 19:51:02.965256 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Feb 13 19:51:02.973999 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Feb 13 19:51:02.977197 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Feb 13 19:51:02.980130 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Feb 13 19:51:02.986102 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Feb 13 19:51:02.988975 systemd-modules-load[195]: Inserted module 'br_netfilter'
Feb 13 19:51:02.990259 kernel: Bridge firewalling registered
Feb 13 19:51:02.992280 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Feb 13 19:51:02.994495 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Feb 13 19:51:02.998197 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Feb 13 19:51:03.008275 dracut-cmdline[223]: dracut-dracut-053
Feb 13 19:51:03.012088 dracut-cmdline[223]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=ed9b5d8ea73d2e47b8decea8124089e04dd398ef43013c1b1a5809314044b1c3
Feb 13 19:51:03.012338 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Feb 13 19:51:03.027221 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Feb 13 19:51:03.060701 systemd-resolved[250]: Positive Trust Anchors:
Feb 13 19:51:03.060717 systemd-resolved[250]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Feb 13 19:51:03.060749 systemd-resolved[250]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Feb 13 19:51:03.072097 systemd-resolved[250]: Defaulting to hostname 'linux'.
Feb 13 19:51:03.074197 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Feb 13 19:51:03.074801 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Feb 13 19:51:03.109104 kernel: SCSI subsystem initialized
Feb 13 19:51:03.120091 kernel: Loading iSCSI transport class v2.0-870.
Feb 13 19:51:03.131101 kernel: iscsi: registered transport (tcp)
Feb 13 19:51:03.158238 kernel: iscsi: registered transport (qla4xxx)
Feb 13 19:51:03.158309 kernel: QLogic iSCSI HBA Driver
Feb 13 19:51:03.212754 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
Feb 13 19:51:03.222257 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
Feb 13 19:51:03.250115 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Feb 13 19:51:03.250194 kernel: device-mapper: uevent: version 1.0.3
Feb 13 19:51:03.251259 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com
Feb 13 19:51:03.294101 kernel: raid6: avx2x4 gen() 27663 MB/s
Feb 13 19:51:03.311093 kernel: raid6: avx2x2 gen() 26755 MB/s
Feb 13 19:51:03.328222 kernel: raid6: avx2x1 gen() 23345 MB/s
Feb 13 19:51:03.328290 kernel: raid6: using algorithm avx2x4 gen() 27663 MB/s
Feb 13 19:51:03.346211 kernel: raid6: .... xor() 7114 MB/s, rmw enabled
Feb 13 19:51:03.346280 kernel: raid6: using avx2x2 recovery algorithm
Feb 13 19:51:03.367096 kernel: xor: automatically using best checksumming function avx
Feb 13 19:51:03.531088 kernel: Btrfs loaded, zoned=no, fsverity=no
Feb 13 19:51:03.547774 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
Feb 13 19:51:03.561464 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Feb 13 19:51:03.577069 systemd-udevd[415]: Using default interface naming scheme 'v255'.
Feb 13 19:51:03.582219 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Feb 13 19:51:03.597272 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
Feb 13 19:51:03.614889 dracut-pre-trigger[424]: rd.md=0: removing MD RAID activation
Feb 13 19:51:03.655209 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
Feb 13 19:51:03.666212 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Feb 13 19:51:03.736292 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Feb 13 19:51:03.749330 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
Feb 13 19:51:03.762581 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
Feb 13 19:51:03.772864 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
Feb 13 19:51:03.773399 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Feb 13 19:51:03.773823 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Feb 13 19:51:03.782461 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
Feb 13 19:51:03.819097 kernel: virtio_blk virtio1: 4/0/0 default/read/poll queues
Feb 13 19:51:03.849866 kernel: virtio_blk virtio1: [vda] 19775488 512-byte logical blocks (10.1 GB/9.43 GiB)
Feb 13 19:51:03.850016 kernel: cryptd: max_cpu_qlen set to 1000
Feb 13 19:51:03.850034 kernel: libata version 3.00 loaded.
Feb 13 19:51:03.850463 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
Feb 13 19:51:03.850485 kernel: GPT:9289727 != 19775487
Feb 13 19:51:03.850500 kernel: GPT:Alternate GPT header not at the end of the disk.
Feb 13 19:51:03.850523 kernel: GPT:9289727 != 19775487
Feb 13 19:51:03.850537 kernel: GPT: Use GNU Parted to correct GPT errors.
Feb 13 19:51:03.850551 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Feb 13 19:51:03.796899 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
Feb 13 19:51:03.843489 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Feb 13 19:51:03.843605 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Feb 13 19:51:03.880111 kernel: ahci 0000:00:1f.2: version 3.0
Feb 13 19:51:03.897900 kernel: ACPI: \_SB_.GSIA: Enabled at IRQ 16
Feb 13 19:51:03.897916 kernel: ahci 0000:00:1f.2: AHCI 0001.0000 32 slots 6 ports 1.5 Gbps 0x3f impl SATA mode
Feb 13 19:51:03.898079 kernel: ahci 0000:00:1f.2: flags: 64bit ncq only
Feb 13 19:51:03.898222 kernel: scsi host0: ahci
Feb 13 19:51:03.898401 kernel: AVX2 version of gcm_enc/dec engaged.
Feb 13 19:51:03.898423 kernel: scsi host1: ahci
Feb 13 19:51:03.898604 kernel: AES CTR mode by8 optimization enabled
Feb 13 19:51:03.898620 kernel: scsi host2: ahci
Feb 13 19:51:03.898809 kernel: scsi host3: ahci
Feb 13 19:51:03.898968 kernel: scsi host4: ahci
Feb 13 19:51:03.899133 kernel: scsi host5: ahci
Feb 13 19:51:03.899309 kernel: ata1: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040100 irq 34
Feb 13 19:51:03.899322 kernel: ata2: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040180 irq 34
Feb 13 19:51:03.899333 kernel: ata3: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040200 irq 34
Feb 13 19:51:03.899344 kernel: ata4: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040280 irq 34
Feb 13 19:51:03.899354 kernel: ata5: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040300 irq 34
Feb 13 19:51:03.899364 kernel: ata6: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040380 irq 34
Feb 13 19:51:03.848571 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Feb 13 19:51:03.849707 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Feb 13 19:51:03.912560 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/vda6 scanned by (udev-worker) (473)
Feb 13 19:51:03.912584 kernel: BTRFS: device fsid e9c87d9f-3864-4b45-9be4-80a5397f1fc6 devid 1 transid 38 /dev/vda3 scanned by (udev-worker) (460)
Feb 13 19:51:03.849825 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Feb 13 19:51:03.873168 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Feb 13 19:51:03.886319 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Feb 13 19:51:03.911442 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Feb 13 19:51:03.911667 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Feb 13 19:51:03.927129 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT.
Feb 13 19:51:03.936208 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM.
Feb 13 19:51:03.942083 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A.
Feb 13 19:51:03.942562 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132.
Feb 13 19:51:03.947456 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
Feb 13 19:51:03.983344 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
Feb 13 19:51:03.984657 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Feb 13 19:51:03.999514 disk-uuid[556]: Primary Header is updated.
Feb 13 19:51:03.999514 disk-uuid[556]: Secondary Entries is updated.
Feb 13 19:51:03.999514 disk-uuid[556]: Secondary Header is updated.
Feb 13 19:51:04.003604 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Feb 13 19:51:04.011643 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Feb 13 19:51:04.027279 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Feb 13 19:51:04.052119 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Feb 13 19:51:04.204118 kernel: ata1: SATA link down (SStatus 0 SControl 300)
Feb 13 19:51:04.212100 kernel: ata2: SATA link down (SStatus 0 SControl 300)
Feb 13 19:51:04.212190 kernel: ata5: SATA link down (SStatus 0 SControl 300)
Feb 13 19:51:04.213089 kernel: ata4: SATA link down (SStatus 0 SControl 300)
Feb 13 19:51:04.214088 kernel: ata6: SATA link down (SStatus 0 SControl 300)
Feb 13 19:51:04.215104 kernel: ata3: SATA link up 1.5 Gbps (SStatus 113 SControl 300)
Feb 13 19:51:04.216230 kernel: ata3.00: ATAPI: QEMU DVD-ROM, 2.5+, max UDMA/100
Feb 13 19:51:04.216254 kernel: ata3.00: applying bridge limits
Feb 13 19:51:04.217378 kernel: ata3.00: configured for UDMA/100
Feb 13 19:51:04.218093 kernel: scsi 2:0:0:0: CD-ROM QEMU QEMU DVD-ROM 2.5+ PQ: 0 ANSI: 5
Feb 13 19:51:04.272110 kernel: sr 2:0:0:0: [sr0] scsi3-mmc drive: 4x/4x cd/rw xa/form2 tray
Feb 13 19:51:04.285277 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20
Feb 13 19:51:04.285300 kernel: sr 2:0:0:0: Attached scsi CD-ROM sr0
Feb 13 19:51:05.010088 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Feb 13 19:51:05.010622 disk-uuid[558]: The operation has completed successfully.
Feb 13 19:51:05.044231 systemd[1]: disk-uuid.service: Deactivated successfully.
Feb 13 19:51:05.044379 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
Feb 13 19:51:05.070319 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr...
Feb 13 19:51:05.074244 sh[595]: Success
Feb 13 19:51:05.088167 kernel: device-mapper: verity: sha256 using implementation "sha256-ni"
Feb 13 19:51:05.125770 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr.
Feb 13 19:51:05.140702 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr...
Feb 13 19:51:05.143618 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr.
Feb 13 19:51:05.155534 kernel: BTRFS info (device dm-0): first mount of filesystem e9c87d9f-3864-4b45-9be4-80a5397f1fc6
Feb 13 19:51:05.155572 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm
Feb 13 19:51:05.155583 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead
Feb 13 19:51:05.156597 kernel: BTRFS info (device dm-0): disabling log replay at mount time
Feb 13 19:51:05.157407 kernel: BTRFS info (device dm-0): using free space tree
Feb 13 19:51:05.162882 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr.
Feb 13 19:51:05.165045 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
Feb 13 19:51:05.175290 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
Feb 13 19:51:05.176825 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
Feb 13 19:51:05.192093 kernel: BTRFS info (device vda6): first mount of filesystem 84d576e4-038f-4c76-aa8e-6cfd81e812ea
Feb 13 19:51:05.192147 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Feb 13 19:51:05.192165 kernel: BTRFS info (device vda6): using free space tree
Feb 13 19:51:05.196088 kernel: BTRFS info (device vda6): auto enabling async discard
Feb 13 19:51:05.205804 systemd[1]: mnt-oem.mount: Deactivated successfully.
Feb 13 19:51:05.207928 kernel: BTRFS info (device vda6): last unmount of filesystem 84d576e4-038f-4c76-aa8e-6cfd81e812ea
Feb 13 19:51:05.219867 systemd[1]: Finished ignition-setup.service - Ignition (setup).
Feb 13 19:51:05.228209 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
Feb 13 19:51:05.288523 ignition[699]: Ignition 2.20.0
Feb 13 19:51:05.288540 ignition[699]: Stage: fetch-offline
Feb 13 19:51:05.288594 ignition[699]: no configs at "/usr/lib/ignition/base.d"
Feb 13 19:51:05.288604 ignition[699]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Feb 13 19:51:05.288720 ignition[699]: parsed url from cmdline: ""
Feb 13 19:51:05.288727 ignition[699]: no config URL provided
Feb 13 19:51:05.288734 ignition[699]: reading system config file "/usr/lib/ignition/user.ign"
Feb 13 19:51:05.288744 ignition[699]: no config at "/usr/lib/ignition/user.ign"
Feb 13 19:51:05.288777 ignition[699]: op(1): [started] loading QEMU firmware config module
Feb 13 19:51:05.288784 ignition[699]: op(1): executing: "modprobe" "qemu_fw_cfg"
Feb 13 19:51:05.297743 ignition[699]: op(1): [finished] loading QEMU firmware config module
Feb 13 19:51:05.299334 ignition[699]: parsing config with SHA512: 9f025396c948df3e35d688f06fd52ec19daab90a9413142d4296931641aaf41830cfba2cc8605c2464a5ad579c5a4217e5939c1ef24aad0e3d6d7e388fb4e33b
Feb 13 19:51:05.302010 unknown[699]: fetched base config from "system"
Feb 13 19:51:05.302023 unknown[699]: fetched user config from "qemu"
Feb 13 19:51:05.302285 ignition[699]: fetch-offline: fetch-offline passed
Feb 13 19:51:05.304715 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
Feb 13 19:51:05.302352 ignition[699]: Ignition finished successfully
Feb 13 19:51:05.310358 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Feb 13 19:51:05.322270 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Feb 13 19:51:05.345531 systemd-networkd[785]: lo: Link UP
Feb 13 19:51:05.345545 systemd-networkd[785]: lo: Gained carrier
Feb 13 19:51:05.347488 systemd-networkd[785]: Enumeration completed
Feb 13 19:51:05.347591 systemd[1]: Started systemd-networkd.service - Network Configuration.
Feb 13 19:51:05.347962 systemd-networkd[785]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Feb 13 19:51:05.347967 systemd-networkd[785]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Feb 13 19:51:05.348963 systemd-networkd[785]: eth0: Link UP
Feb 13 19:51:05.348967 systemd-networkd[785]: eth0: Gained carrier
Feb 13 19:51:05.348975 systemd-networkd[785]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Feb 13 19:51:05.349824 systemd[1]: Reached target network.target - Network.
Feb 13 19:51:05.350372 systemd[1]: ignition-fetch.service - Ignition (fetch) was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json).
Feb 13 19:51:05.362432 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
Feb 13 19:51:05.366117 systemd-networkd[785]: eth0: DHCPv4 address 10.0.0.110/16, gateway 10.0.0.1 acquired from 10.0.0.1
Feb 13 19:51:05.379639 ignition[787]: Ignition 2.20.0
Feb 13 19:51:05.379654 ignition[787]: Stage: kargs
Feb 13 19:51:05.379865 ignition[787]: no configs at "/usr/lib/ignition/base.d"
Feb 13 19:51:05.379880 ignition[787]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Feb 13 19:51:05.380681 ignition[787]: kargs: kargs passed
Feb 13 19:51:05.380733 ignition[787]: Ignition finished successfully
Feb 13 19:51:05.388922 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
Feb 13 19:51:05.397473 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
Feb 13 19:51:05.412142 ignition[795]: Ignition 2.20.0
Feb 13 19:51:05.412154 ignition[795]: Stage: disks
Feb 13 19:51:05.412349 ignition[795]: no configs at "/usr/lib/ignition/base.d"
Feb 13 19:51:05.412361 ignition[795]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Feb 13 19:51:05.413026 ignition[795]: disks: disks passed
Feb 13 19:51:05.413085 ignition[795]: Ignition finished successfully
Feb 13 19:51:05.419715 systemd[1]: Finished ignition-disks.service - Ignition (disks).
Feb 13 19:51:05.420684 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
Feb 13 19:51:05.422411 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Feb 13 19:51:05.424401 systemd[1]: Reached target local-fs.target - Local File Systems.
Feb 13 19:51:05.426891 systemd[1]: Reached target sysinit.target - System Initialization.
Feb 13 19:51:05.429039 systemd[1]: Reached target basic.target - Basic System.
Feb 13 19:51:05.442385 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
Feb 13 19:51:05.459351 systemd-fsck[805]: ROOT: clean, 14/553520 files, 52654/553472 blocks
Feb 13 19:51:05.515859 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
Feb 13 19:51:05.527249 systemd[1]: Mounting sysroot.mount - /sysroot...
Feb 13 19:51:05.620227 kernel: EXT4-fs (vda9): mounted filesystem c5993b0e-9201-4b44-aa01-79dc9d6c9fc9 r/w with ordered data mode. Quota mode: none.
Feb 13 19:51:05.620792 systemd[1]: Mounted sysroot.mount - /sysroot.
Feb 13 19:51:05.621885 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
Feb 13 19:51:05.635208 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Feb 13 19:51:05.636482 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
Feb 13 19:51:05.637586 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met.
Feb 13 19:51:05.637630 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Feb 13 19:51:05.637653 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
Feb 13 19:51:05.644904 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
Feb 13 19:51:05.648039 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
Feb 13 19:51:05.655092 kernel: BTRFS: device label OEM devid 1 transid 15 /dev/vda6 scanned by mount (813)
Feb 13 19:51:05.655132 kernel: BTRFS info (device vda6): first mount of filesystem 84d576e4-038f-4c76-aa8e-6cfd81e812ea
Feb 13 19:51:05.655147 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Feb 13 19:51:05.656614 kernel: BTRFS info (device vda6): using free space tree
Feb 13 19:51:05.659085 kernel: BTRFS info (device vda6): auto enabling async discard
Feb 13 19:51:05.661265 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Feb 13 19:51:05.689409 initrd-setup-root[837]: cut: /sysroot/etc/passwd: No such file or directory
Feb 13 19:51:05.694819 initrd-setup-root[844]: cut: /sysroot/etc/group: No such file or directory
Feb 13 19:51:05.699763 initrd-setup-root[851]: cut: /sysroot/etc/shadow: No such file or directory
Feb 13 19:51:05.703973 initrd-setup-root[858]: cut: /sysroot/etc/gshadow: No such file or directory
Feb 13 19:51:05.796201 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
Feb 13 19:51:05.804242 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
Feb 13 19:51:05.807823 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
Feb 13 19:51:05.814097 kernel: BTRFS info (device vda6): last unmount of filesystem 84d576e4-038f-4c76-aa8e-6cfd81e812ea
Feb 13 19:51:05.837548 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
Feb 13 19:51:05.855505 ignition[926]: INFO : Ignition 2.20.0
Feb 13 19:51:05.855505 ignition[926]: INFO : Stage: mount
Feb 13 19:51:05.857510 ignition[926]: INFO : no configs at "/usr/lib/ignition/base.d"
Feb 13 19:51:05.857510 ignition[926]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Feb 13 19:51:05.857510 ignition[926]: INFO : mount: mount passed
Feb 13 19:51:05.857510 ignition[926]: INFO : Ignition finished successfully
Feb 13 19:51:05.863599 systemd[1]: Finished ignition-mount.service - Ignition (mount).
Feb 13 19:51:05.878190 systemd[1]: Starting ignition-files.service - Ignition (files)...
Feb 13 19:51:06.155673 systemd[1]: sysroot-oem.mount: Deactivated successfully.
Feb 13 19:51:06.169255 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Feb 13 19:51:06.177118 kernel: BTRFS: device label OEM devid 1 transid 16 /dev/vda6 scanned by mount (940)
Feb 13 19:51:06.177161 kernel: BTRFS info (device vda6): first mount of filesystem 84d576e4-038f-4c76-aa8e-6cfd81e812ea
Feb 13 19:51:06.177179 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Feb 13 19:51:06.178630 kernel: BTRFS info (device vda6): using free space tree
Feb 13 19:51:06.182093 kernel: BTRFS info (device vda6): auto enabling async discard
Feb 13 19:51:06.183428 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Feb 13 19:51:06.207761 ignition[957]: INFO : Ignition 2.20.0
Feb 13 19:51:06.207761 ignition[957]: INFO : Stage: files
Feb 13 19:51:06.209568 ignition[957]: INFO : no configs at "/usr/lib/ignition/base.d"
Feb 13 19:51:06.209568 ignition[957]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Feb 13 19:51:06.211903 ignition[957]: DEBUG : files: compiled without relabeling support, skipping
Feb 13 19:51:06.213192 ignition[957]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Feb 13 19:51:06.213192 ignition[957]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Feb 13 19:51:06.217892 ignition[957]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Feb 13 19:51:06.219392 ignition[957]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Feb 13 19:51:06.221155 unknown[957]: wrote ssh authorized keys file for user: core
Feb 13 19:51:06.222236 ignition[957]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Feb 13 19:51:06.224601 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/home/core/install.sh"
Feb 13 19:51:06.226537 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/home/core/install.sh"
Feb 13 19:51:06.228686 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/etc/flatcar/update.conf"
Feb 13 19:51:06.228686 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Feb 13 19:51:06.228686 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.0-x86-64.raw"
Feb 13 19:51:06.228686 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.0-x86-64.raw"
Feb 13 19:51:06.228686 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.0-x86-64.raw"
Feb 13 19:51:06.228686 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(6): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.32.0-x86-64.raw: attempt #1
Feb 13 19:51:06.669432 systemd-networkd[785]: eth0: Gained IPv6LL
Feb 13 19:51:06.782549 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(6): GET result: OK
Feb 13 19:51:07.354309 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.0-x86-64.raw"
Feb 13 19:51:07.354309 ignition[957]: INFO : files: op(7): [started] processing unit "coreos-metadata.service"
Feb 13 19:51:07.358343 ignition[957]: INFO : files: op(7): op(8): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
Feb 13 19:51:07.360708 ignition[957]: INFO : files: op(7): op(8): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
Feb 13 19:51:07.360708 ignition[957]: INFO : files: op(7): [finished] processing unit "coreos-metadata.service"
Feb 13 19:51:07.360708 ignition[957]: INFO : files: op(9): [started] setting preset to disabled for "coreos-metadata.service"
Feb 13 19:51:07.396182 ignition[957]: INFO : files: op(9): op(a): [started] removing enablement symlink(s) for "coreos-metadata.service"
Feb 13 19:51:07.401602 ignition[957]: INFO : files: op(9): op(a): [finished] removing enablement symlink(s) for "coreos-metadata.service"
Feb 13 19:51:07.403489 ignition[957]: INFO : files: op(9): [finished] setting preset to disabled for "coreos-metadata.service"
Feb 13 19:51:07.405194 ignition[957]: INFO : files: createResultFile: createFiles: op(b): [started] writing file "/sysroot/etc/.ignition-result.json"
Feb 13 19:51:07.407295 ignition[957]: INFO : files: createResultFile: createFiles: op(b): [finished] writing file "/sysroot/etc/.ignition-result.json"
Feb 13 19:51:07.419426 ignition[957]: INFO : files: files passed
Feb 13 19:51:07.420271 ignition[957]: INFO : Ignition finished successfully
Feb 13 19:51:07.423907 systemd[1]: Finished ignition-files.service - Ignition (files).
Feb 13 19:51:07.437330 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
Feb 13 19:51:07.441530 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
Feb 13 19:51:07.453110 systemd[1]: ignition-quench.service: Deactivated successfully.
Feb 13 19:51:07.453253 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
Feb 13 19:51:07.474326 initrd-setup-root-after-ignition[985]: grep: /sysroot/oem/oem-release: No such file or directory
Feb 13 19:51:07.480026 initrd-setup-root-after-ignition[987]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Feb 13 19:51:07.480026 initrd-setup-root-after-ignition[987]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
Feb 13 19:51:07.483584 initrd-setup-root-after-ignition[991]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Feb 13 19:51:07.487396 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
Feb 13 19:51:07.488183 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
Feb 13 19:51:07.497693 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
Feb 13 19:51:07.548487 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Feb 13 19:51:07.548653 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
Feb 13 19:51:07.552774 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
Feb 13 19:51:07.554870 systemd[1]: Reached target initrd.target - Initrd Default Target.
Feb 13 19:51:07.557266 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
Feb 13 19:51:07.568280 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
Feb 13 19:51:07.583135 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Feb 13 19:51:07.593277 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
Feb 13 19:51:07.603337 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
Feb 13 19:51:07.603779 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
Feb 13 19:51:07.604144 systemd[1]: Stopped target timers.target - Timer Units.
Feb 13 19:51:07.604611 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Feb 13 19:51:07.604755 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Feb 13 19:51:07.610125 systemd[1]: Stopped target initrd.target - Initrd Default Target.
Feb 13 19:51:07.610622 systemd[1]: Stopped target basic.target - Basic System.
Feb 13 19:51:07.610964 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
Feb 13 19:51:07.611483 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup.
Feb 13 19:51:07.611822 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
Feb 13 19:51:07.612335 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
Feb 13 19:51:07.612660 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems.
Feb 13 19:51:07.613015 systemd[1]: Stopped target sysinit.target - System Initialization.
Feb 13 19:51:07.613558 systemd[1]: Stopped target local-fs.target - Local File Systems.
Feb 13 19:51:07.613837 systemd[1]: Stopped target swap.target - Swaps.
Feb 13 19:51:07.614324 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Feb 13 19:51:07.614439 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook.
Feb 13 19:51:07.632626 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes.
Feb 13 19:51:07.632956 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Feb 13 19:51:07.633424 systemd[1]: clevis-luks-askpass.path: Deactivated successfully.
Feb 13 19:51:07.638756 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Feb 13 19:51:07.639513 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Feb 13 19:51:07.639627 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook.
Feb 13 19:51:07.645073 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
Feb 13 19:51:07.645232 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline).
Feb 13 19:51:07.645656 systemd[1]: Stopped target paths.target - Path Units.
Feb 13 19:51:07.648630 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Feb 13 19:51:07.653119 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Feb 13 19:51:07.653478 systemd[1]: Stopped target slices.target - Slice Units.
Feb 13 19:51:07.656094 systemd[1]: Stopped target sockets.target - Socket Units.
Feb 13 19:51:07.659343 systemd[1]: iscsid.socket: Deactivated successfully.
Feb 13 19:51:07.659447 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket.
Feb 13 19:51:07.660132 systemd[1]: iscsiuio.socket: Deactivated successfully.
Feb 13 19:51:07.660254 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Feb 13 19:51:07.660657 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
Feb 13 19:51:07.660814 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion.
Feb 13 19:51:07.664453 systemd[1]: ignition-files.service: Deactivated successfully.
Feb 13 19:51:07.664613 systemd[1]: Stopped ignition-files.service - Ignition (files).
Feb 13 19:51:07.676205 systemd[1]: Stopping ignition-mount.service - Ignition (mount)...
Feb 13 19:51:07.676643 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Feb 13 19:51:07.676764 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes.
Feb 13 19:51:07.679230 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot...
Feb 13 19:51:07.680602 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Feb 13 19:51:07.680720 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices.
Feb 13 19:51:07.681022 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Feb 13 19:51:07.681133 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook.
Feb 13 19:51:07.687273 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Feb 13 19:51:07.687428 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons.
Feb 13 19:51:07.710493 systemd[1]: sysroot-boot.mount: Deactivated successfully.
Feb 13 19:51:07.784437 ignition[1011]: INFO : Ignition 2.20.0
Feb 13 19:51:07.784437 ignition[1011]: INFO : Stage: umount
Feb 13 19:51:07.786627 ignition[1011]: INFO : no configs at "/usr/lib/ignition/base.d"
Feb 13 19:51:07.786627 ignition[1011]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Feb 13 19:51:07.789550 ignition[1011]: INFO : umount: umount passed
Feb 13 19:51:07.790534 ignition[1011]: INFO : Ignition finished successfully
Feb 13 19:51:07.793809 systemd[1]: ignition-mount.service: Deactivated successfully.
Feb 13 19:51:07.793941 systemd[1]: Stopped ignition-mount.service - Ignition (mount).
Feb 13 19:51:07.794653 systemd[1]: Stopped target network.target - Network.
Feb 13 19:51:07.798442 systemd[1]: ignition-disks.service: Deactivated successfully.
Feb 13 19:51:07.799469 systemd[1]: Stopped ignition-disks.service - Ignition (disks).
Feb 13 19:51:07.801426 systemd[1]: ignition-kargs.service: Deactivated successfully.
Feb 13 19:51:07.801482 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs).
Feb 13 19:51:07.804368 systemd[1]: ignition-setup.service: Deactivated successfully.
Feb 13 19:51:07.804415 systemd[1]: Stopped ignition-setup.service - Ignition (setup).
Feb 13 19:51:07.806462 systemd[1]: ignition-setup-pre.service: Deactivated successfully.
Feb 13 19:51:07.807392 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup.
Feb 13 19:51:07.810608 systemd[1]: Stopping systemd-networkd.service - Network Configuration...
Feb 13 19:51:07.812919 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution...
Feb 13 19:51:07.817113 systemd-networkd[785]: eth0: DHCPv6 lease lost
Feb 13 19:51:07.819192 systemd[1]: systemd-networkd.service: Deactivated successfully.
Feb 13 19:51:07.820323 systemd[1]: Stopped systemd-networkd.service - Network Configuration.
Feb 13 19:51:07.822913 systemd[1]: systemd-resolved.service: Deactivated successfully.
Feb 13 19:51:07.823975 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution.
Feb 13 19:51:07.827568 systemd[1]: systemd-networkd.socket: Deactivated successfully.
Feb 13 19:51:07.828261 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket.
Feb 13 19:51:07.843141 systemd[1]: Stopping network-cleanup.service - Network Cleanup...
Feb 13 19:51:07.844143 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
Feb 13 19:51:07.844200 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Feb 13 19:51:07.846729 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Feb 13 19:51:07.846790 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Feb 13 19:51:07.850974 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Feb 13 19:51:07.851022 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules.
Feb 13 19:51:07.853085 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
Feb 13 19:51:07.853134 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories.
Feb 13 19:51:07.855664 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files...
Feb 13 19:51:07.865777 systemd[1]: systemd-udevd.service: Deactivated successfully.
Feb 13 19:51:07.866021 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files.
Feb 13 19:51:07.869152 systemd[1]: network-cleanup.service: Deactivated successfully.
Feb 13 19:51:07.869282 systemd[1]: Stopped network-cleanup.service - Network Cleanup.
Feb 13 19:51:07.871852 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Feb 13 19:51:07.871928 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket.
Feb 13 19:51:07.873532 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Feb 13 19:51:07.873573 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket.
Feb 13 19:51:07.875777 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Feb 13 19:51:07.875848 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook.
Feb 13 19:51:07.878343 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Feb 13 19:51:07.878412 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook.
Feb 13 19:51:07.880397 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Feb 13 19:51:07.880461 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Feb 13 19:51:07.883466 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database...
Feb 13 19:51:07.885700 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Feb 13 19:51:07.885774 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Feb 13 19:51:07.888202 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Feb 13 19:51:07.888282 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Feb 13 19:51:07.894846 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Feb 13 19:51:07.894989 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database.
Feb 13 19:51:07.896605 systemd[1]: sysroot-boot.service: Deactivated successfully.
Feb 13 19:51:07.896717 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot.
Feb 13 19:51:07.899410 systemd[1]: Reached target initrd-switch-root.target - Switch Root.
Feb 13 19:51:07.900508 systemd[1]: initrd-setup-root.service: Deactivated successfully.
Feb 13 19:51:07.900563 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup.
Feb 13 19:51:07.913256 systemd[1]: Starting initrd-switch-root.service - Switch Root...
Feb 13 19:51:07.920066 systemd[1]: Switching root.
Feb 13 19:51:07.951271 systemd-journald[194]: Journal stopped
Feb 13 19:51:09.119050 systemd-journald[194]: Received SIGTERM from PID 1 (systemd).
Feb 13 19:51:09.119162 kernel: SELinux: policy capability network_peer_controls=1
Feb 13 19:51:09.119184 kernel: SELinux: policy capability open_perms=1
Feb 13 19:51:09.119211 kernel: SELinux: policy capability extended_socket_class=1
Feb 13 19:51:09.119229 kernel: SELinux: policy capability always_check_network=0
Feb 13 19:51:09.119258 kernel: SELinux: policy capability cgroup_seclabel=1
Feb 13 19:51:09.119278 kernel: SELinux: policy capability nnp_nosuid_transition=1
Feb 13 19:51:09.119297 kernel: SELinux: policy capability genfs_seclabel_symlinks=0
Feb 13 19:51:09.119315 kernel: SELinux: policy capability ioctl_skip_cloexec=0
Feb 13 19:51:09.119334 kernel: audit: type=1403 audit(1739476268.318:2): auid=4294967295 ses=4294967295 lsm=selinux res=1
Feb 13 19:51:09.119350 systemd[1]: Successfully loaded SELinux policy in 46.505ms.
Feb 13 19:51:09.119378 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 16.055ms.
Feb 13 19:51:09.119394 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Feb 13 19:51:09.119411 systemd[1]: Detected virtualization kvm.
Feb 13 19:51:09.119427 systemd[1]: Detected architecture x86-64.
Feb 13 19:51:09.119444 systemd[1]: Detected first boot.
Feb 13 19:51:09.119460 systemd[1]: Initializing machine ID from VM UUID.
Feb 13 19:51:09.119479 zram_generator::config[1056]: No configuration found.
Feb 13 19:51:09.119497 systemd[1]: Populated /etc with preset unit settings.
Feb 13 19:51:09.119513 systemd[1]: initrd-switch-root.service: Deactivated successfully.
Feb 13 19:51:09.119529 systemd[1]: Stopped initrd-switch-root.service - Switch Root.
Feb 13 19:51:09.119545 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1.
Feb 13 19:51:09.119564 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config.
Feb 13 19:51:09.119581 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run.
Feb 13 19:51:09.119596 systemd[1]: Created slice system-getty.slice - Slice /system/getty.
Feb 13 19:51:09.119616 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe.
Feb 13 19:51:09.119632 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty.
Feb 13 19:51:09.119648 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit.
Feb 13 19:51:09.119664 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck.
Feb 13 19:51:09.119679 systemd[1]: Created slice user.slice - User and Session Slice.
Feb 13 19:51:09.119694 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Feb 13 19:51:09.119722 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Feb 13 19:51:09.119739 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch.
Feb 13 19:51:09.119759 systemd[1]: Set up automount boot.automount - Boot partition Automount Point.
Feb 13 19:51:09.119776 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point.
Feb 13 19:51:09.119793 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Feb 13 19:51:09.119809 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0...
Feb 13 19:51:09.119825 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Feb 13 19:51:09.119841 systemd[1]: Stopped target initrd-switch-root.target - Switch Root.
Feb 13 19:51:09.119857 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems.
Feb 13 19:51:09.119873 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System.
Feb 13 19:51:09.119894 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes.
Feb 13 19:51:09.119911 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Feb 13 19:51:09.119937 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Feb 13 19:51:09.119954 systemd[1]: Reached target slices.target - Slice Units.
Feb 13 19:51:09.119971 systemd[1]: Reached target swap.target - Swaps.
Feb 13 19:51:09.119987 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes.
Feb 13 19:51:09.120004 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket.
Feb 13 19:51:09.120021 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Feb 13 19:51:09.120037 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Feb 13 19:51:09.120111 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Feb 13 19:51:09.120131 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket.
Feb 13 19:51:09.120147 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System...
Feb 13 19:51:09.120164 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System...
Feb 13 19:51:09.120181 systemd[1]: Mounting media.mount - External Media Directory...
Feb 13 19:51:09.120197 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Feb 13 19:51:09.120214 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System...
Feb 13 19:51:09.120231 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System...
Feb 13 19:51:09.120247 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp...
Feb 13 19:51:09.120270 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
Feb 13 19:51:09.120291 systemd[1]: Reached target machines.target - Containers.
Feb 13 19:51:09.120312 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files...
Feb 13 19:51:09.120333 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Feb 13 19:51:09.120354 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Feb 13 19:51:09.120373 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs...
Feb 13 19:51:09.120394 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Feb 13 19:51:09.120414 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Feb 13 19:51:09.120438 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Feb 13 19:51:09.120459 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse...
Feb 13 19:51:09.120479 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Feb 13 19:51:09.120502 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
Feb 13 19:51:09.120521 systemd[1]: systemd-fsck-root.service: Deactivated successfully.
Feb 13 19:51:09.120537 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device.
Feb 13 19:51:09.120554 systemd[1]: systemd-fsck-usr.service: Deactivated successfully.
Feb 13 19:51:09.120570 systemd[1]: Stopped systemd-fsck-usr.service.
Feb 13 19:51:09.120585 kernel: fuse: init (API version 7.39)
Feb 13 19:51:09.120605 kernel: loop: module loaded
Feb 13 19:51:09.120620 systemd[1]: Starting systemd-journald.service - Journal Service...
Feb 13 19:51:09.120637 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Feb 13 19:51:09.120654 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Feb 13 19:51:09.120671 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems...
Feb 13 19:51:09.120687 kernel: ACPI: bus type drm_connector registered
Feb 13 19:51:09.120712 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Feb 13 19:51:09.120730 systemd[1]: verity-setup.service: Deactivated successfully.
Feb 13 19:51:09.120747 systemd[1]: Stopped verity-setup.service.
Feb 13 19:51:09.120790 systemd-journald[1137]: Collecting audit messages is disabled.
Feb 13 19:51:09.120820 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Feb 13 19:51:09.120837 systemd-journald[1137]: Journal started
Feb 13 19:51:09.120883 systemd-journald[1137]: Runtime Journal (/run/log/journal/3592ed6681844ec29799d2a2fac47f41) is 6.0M, max 48.3M, 42.2M free.
Feb 13 19:51:08.857867 systemd[1]: Queued start job for default target multi-user.target.
Feb 13 19:51:08.874118 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6.
Feb 13 19:51:08.874576 systemd[1]: systemd-journald.service: Deactivated successfully.
Feb 13 19:51:09.123633 systemd[1]: Started systemd-journald.service - Journal Service.
Feb 13 19:51:09.124430 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System.
Feb 13 19:51:09.125646 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System.
Feb 13 19:51:09.126912 systemd[1]: Mounted media.mount - External Media Directory.
Feb 13 19:51:09.128086 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System.
Feb 13 19:51:09.129384 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System.
Feb 13 19:51:09.130986 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp.
Feb 13 19:51:09.132419 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files.
Feb 13 19:51:09.133963 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Feb 13 19:51:09.135651 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Feb 13 19:51:09.135936 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs.
Feb 13 19:51:09.137599 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Feb 13 19:51:09.137828 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Feb 13 19:51:09.139340 systemd[1]: modprobe@drm.service: Deactivated successfully.
Feb 13 19:51:09.139544 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Feb 13 19:51:09.141269 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Feb 13 19:51:09.141463 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Feb 13 19:51:09.143070 systemd[1]: modprobe@fuse.service: Deactivated successfully.
Feb 13 19:51:09.143265 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse.
Feb 13 19:51:09.144746 systemd[1]: modprobe@loop.service: Deactivated successfully.
Feb 13 19:51:09.144953 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Feb 13 19:51:09.146408 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Feb 13 19:51:09.147879 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Feb 13 19:51:09.149601 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems.
Feb 13 19:51:09.164525 systemd[1]: Reached target network-pre.target - Preparation for Network.
Feb 13 19:51:09.178228 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System...
Feb 13 19:51:09.181186 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System...
Feb 13 19:51:09.182577 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
Feb 13 19:51:09.182616 systemd[1]: Reached target local-fs.target - Local File Systems.
Feb 13 19:51:09.185186 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink).
Feb 13 19:51:09.187941 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown...
Feb 13 19:51:09.192199 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache...
Feb 13 19:51:09.194007 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Feb 13 19:51:09.196616 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database...
Feb 13 19:51:09.200407 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage...
Feb 13 19:51:09.201896 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Feb 13 19:51:09.204237 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed...
Feb 13 19:51:09.205756 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Feb 13 19:51:09.208814 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Feb 13 19:51:09.222329 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/...
Feb 13 19:51:09.227469 systemd[1]: Starting systemd-sysusers.service - Create System Users...
Feb 13 19:51:09.233615 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Feb 13 19:51:09.235237 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System.
Feb 13 19:51:09.236609 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System.
Feb 13 19:51:09.239109 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown.
Feb 13 19:51:09.248107 systemd-journald[1137]: Time spent on flushing to /var/log/journal/3592ed6681844ec29799d2a2fac47f41 is 41.451ms for 1028 entries.
Feb 13 19:51:09.248107 systemd-journald[1137]: System Journal (/var/log/journal/3592ed6681844ec29799d2a2fac47f41) is 8.0M, max 195.6M, 187.6M free.
Feb 13 19:51:09.304658 systemd-journald[1137]: Received client request to flush runtime journal.
Feb 13 19:51:09.304715 kernel: loop0: detected capacity change from 0 to 138184
Feb 13 19:51:09.304736 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher
Feb 13 19:51:09.257896 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization...
Feb 13 19:51:09.265883 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed.
Feb 13 19:51:09.268365 systemd[1]: Reached target first-boot-complete.target - First Boot Complete.
Feb 13 19:51:09.296304 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk...
Feb 13 19:51:09.299005 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Feb 13 19:51:09.307290 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage.
Feb 13 19:51:09.310234 udevadm[1178]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation-early.service, lvm2-activation.service not to pull it in.
Feb 13 19:51:09.324208 systemd[1]: Finished systemd-sysusers.service - Create System Users.
Feb 13 19:51:09.331143 kernel: loop1: detected capacity change from 0 to 140992
Feb 13 19:51:09.333375 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Feb 13 19:51:09.336114 systemd[1]: etc-machine\x2did.mount: Deactivated successfully.
Feb 13 19:51:09.336765 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk.
Feb 13 19:51:09.364714 systemd-tmpfiles[1188]: ACLs are not supported, ignoring.
Feb 13 19:51:09.365226 systemd-tmpfiles[1188]: ACLs are not supported, ignoring.
Feb 13 19:51:09.373938 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Feb 13 19:51:09.448226 kernel: loop2: detected capacity change from 0 to 218376
Feb 13 19:51:09.503100 kernel: loop3: detected capacity change from 0 to 138184
Feb 13 19:51:09.520117 kernel: loop4: detected capacity change from 0 to 140992
Feb 13 19:51:09.533083 kernel: loop5: detected capacity change from 0 to 218376
Feb 13 19:51:09.539297 (sd-merge)[1194]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes'.
Feb 13 19:51:09.540176 (sd-merge)[1194]: Merged extensions into '/usr'.
Feb 13 19:51:09.556807 systemd[1]: Reloading requested from client PID 1170 ('systemd-sysext') (unit systemd-sysext.service)...
Feb 13 19:51:09.556828 systemd[1]: Reloading...
Feb 13 19:51:09.632236 zram_generator::config[1219]: No configuration found.
Feb 13 19:51:09.778831 ldconfig[1165]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start.
Feb 13 19:51:09.964734 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Feb 13 19:51:10.027079 systemd[1]: Reloading finished in 469 ms.
Feb 13 19:51:10.059569 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache.
Feb 13 19:51:10.061253 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/.
Feb 13 19:51:10.077359 systemd[1]: Starting ensure-sysext.service...
Feb 13 19:51:10.080297 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Feb 13 19:51:10.087385 systemd[1]: Reloading requested from client PID 1257 ('systemctl') (unit ensure-sysext.service)...
Feb 13 19:51:10.087405 systemd[1]: Reloading...
Feb 13 19:51:10.150095 systemd-tmpfiles[1258]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring.
Feb 13 19:51:10.150482 systemd-tmpfiles[1258]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring.
Feb 13 19:51:10.153271 systemd-tmpfiles[1258]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring.
Feb 13 19:51:10.153574 systemd-tmpfiles[1258]: ACLs are not supported, ignoring.
Feb 13 19:51:10.153648 systemd-tmpfiles[1258]: ACLs are not supported, ignoring.
Feb 13 19:51:10.160237 systemd-tmpfiles[1258]: Detected autofs mount point /boot during canonicalization of boot.
Feb 13 19:51:10.160390 systemd-tmpfiles[1258]: Skipping /boot
Feb 13 19:51:10.161078 zram_generator::config[1287]: No configuration found.
Feb 13 19:51:10.182191 systemd-tmpfiles[1258]: Detected autofs mount point /boot during canonicalization of boot.
Feb 13 19:51:10.182322 systemd-tmpfiles[1258]: Skipping /boot
Feb 13 19:51:10.297389 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Feb 13 19:51:10.347369 systemd[1]: Reloading finished in 259 ms.
Feb 13 19:51:10.366941 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database.
Feb 13 19:51:10.379875 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Feb 13 19:51:10.390583 systemd[1]: Starting audit-rules.service - Load Audit Rules...
Feb 13 19:51:10.393400 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs...
Feb 13 19:51:10.396173 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog...
Feb 13 19:51:10.400937 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Feb 13 19:51:10.406012 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Feb 13 19:51:10.410378 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP...
Feb 13 19:51:10.417233 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Feb 13 19:51:10.417397 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Feb 13 19:51:10.422287 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Feb 13 19:51:10.428152 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Feb 13 19:51:10.432183 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Feb 13 19:51:10.433540 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Feb 13 19:51:10.435478 systemd[1]: Starting systemd-userdbd.service - User Database Manager...
Feb 13 19:51:10.436628 systemd-udevd[1328]: Using default interface naming scheme 'v255'.
Feb 13 19:51:10.437188 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Feb 13 19:51:10.438261 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog.
Feb 13 19:51:10.441553 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Feb 13 19:51:10.441804 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Feb 13 19:51:10.443893 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Feb 13 19:51:10.444187 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Feb 13 19:51:10.450395 systemd[1]: modprobe@loop.service: Deactivated successfully.
Feb 13 19:51:10.451829 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Feb 13 19:51:10.457968 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Feb 13 19:51:10.459142 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Feb 13 19:51:10.467888 augenrules[1357]: No rules
Feb 13 19:51:10.471501 systemd[1]: Starting systemd-update-done.service - Update is Completed...
Feb 13 19:51:10.473808 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Feb 13 19:51:10.476550 systemd[1]: audit-rules.service: Deactivated successfully.
Feb 13 19:51:10.476793 systemd[1]: Finished audit-rules.service - Load Audit Rules.
Feb 13 19:51:10.485918 systemd[1]: Finished systemd-update-done.service - Update is Completed.
Feb 13 19:51:10.492767 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP.
Feb 13 19:51:10.499097 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Feb 13 19:51:10.508548 systemd[1]: Starting audit-rules.service - Load Audit Rules...
Feb 13 19:51:10.509846 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Feb 13 19:51:10.512308 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Feb 13 19:51:10.517796 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Feb 13 19:51:10.529488 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Feb 13 19:51:10.532452 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Feb 13 19:51:10.534250 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Feb 13 19:51:10.537405 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Feb 13 19:51:10.538769 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Feb 13 19:51:10.539781 systemd[1]: Started systemd-userdbd.service - User Database Manager.
Feb 13 19:51:10.541868 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs.
Feb 13 19:51:10.543940 augenrules[1380]: /sbin/augenrules: No change
Feb 13 19:51:10.544744 systemd[1]: modprobe@drm.service: Deactivated successfully.
Feb 13 19:51:10.545188 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Feb 13 19:51:10.546853 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Feb 13 19:51:10.547020 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Feb 13 19:51:10.548770 systemd[1]: modprobe@loop.service: Deactivated successfully.
Feb 13 19:51:10.548951 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Feb 13 19:51:10.557221 systemd[1]: Finished ensure-sysext.service.
Feb 13 19:51:10.558982 augenrules[1413]: No rules
Feb 13 19:51:10.558647 systemd[1]: audit-rules.service: Deactivated successfully.
Feb 13 19:51:10.559990 systemd[1]: Finished audit-rules.service - Load Audit Rules.
Feb 13 19:51:10.582875 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Feb 13 19:51:10.583317 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Feb 13 19:51:10.585257 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 38 scanned by (udev-worker) (1367)
Feb 13 19:51:10.605694 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped.
Feb 13 19:51:10.605971 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Feb 13 19:51:10.606030 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Feb 13 19:51:10.615098 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization...
Feb 13 19:51:10.616523 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
Feb 13 19:51:10.627780 systemd-resolved[1326]: Positive Trust Anchors:
Feb 13 19:51:10.628131 systemd-resolved[1326]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Feb 13 19:51:10.628228 systemd-resolved[1326]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Feb 13 19:51:10.632916 systemd-resolved[1326]: Defaulting to hostname 'linux'.
Feb 13 19:51:10.634854 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Feb 13 19:51:10.637033 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Feb 13 19:51:10.666080 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input2
Feb 13 19:51:10.681078 kernel: ACPI: button: Power Button [PWRF]
Feb 13 19:51:10.691930 systemd-networkd[1401]: lo: Link UP
Feb 13 19:51:10.692245 systemd-networkd[1401]: lo: Gained carrier
Feb 13 19:51:10.699189 kernel: i801_smbus 0000:00:1f.3: Enabling SMBus device
Feb 13 19:51:10.714228 kernel: i801_smbus 0000:00:1f.3: SMBus using PCI interrupt
Feb 13 19:51:10.717864 kernel: i2c i2c-0: 1/1 memory slots populated (from DMI)
Feb 13 19:51:10.729887 kernel: i2c i2c-0: Memory type 0x07 not supported yet, not instantiating SPD
Feb 13 19:51:10.747966 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input3
Feb 13 19:51:10.718936 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization.
Feb 13 19:51:10.720630 systemd[1]: Reached target time-set.target - System Time Set.
Feb 13 19:51:10.722405 systemd-networkd[1401]: Enumeration completed
Feb 13 19:51:10.722486 systemd[1]: Started systemd-networkd.service - Network Configuration.
Feb 13 19:51:10.723841 systemd[1]: Reached target network.target - Network.
Feb 13 19:51:10.725954 systemd-networkd[1401]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Feb 13 19:51:10.725959 systemd-networkd[1401]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Feb 13 19:51:10.727704 systemd-networkd[1401]: eth0: Link UP
Feb 13 19:51:10.727708 systemd-networkd[1401]: eth0: Gained carrier
Feb 13 19:51:10.727721 systemd-networkd[1401]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Feb 13 19:51:10.743125 systemd-networkd[1401]: eth0: DHCPv4 address 10.0.0.110/16, gateway 10.0.0.1 acquired from 10.0.0.1
Feb 13 19:51:10.747249 systemd-timesyncd[1428]: Network configuration changed, trying to establish connection.
Feb 13 19:51:11.867566 systemd-timesyncd[1428]: Contacted time server 10.0.0.1:123 (10.0.0.1).
Feb 13 19:51:11.867623 systemd-timesyncd[1428]: Initial clock synchronization to Thu 2025-02-13 19:51:11.867460 UTC.
Feb 13 19:51:11.870683 systemd-resolved[1326]: Clock change detected. Flushing caches.
Feb 13 19:51:11.873777 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured...
Feb 13 19:51:11.932793 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Feb 13 19:51:11.939235 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
Feb 13 19:51:11.947169 kernel: mousedev: PS/2 mouse device common for all mice
Feb 13 19:51:11.947642 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM...
Feb 13 19:51:11.958670 kernel: kvm_amd: TSC scaling supported
Feb 13 19:51:11.958706 kernel: kvm_amd: Nested Virtualization enabled
Feb 13 19:51:11.958719 kernel: kvm_amd: Nested Paging enabled
Feb 13 19:51:11.958732 kernel: kvm_amd: LBR virtualization supported
Feb 13 19:51:11.959751 kernel: kvm_amd: Virtual VMLOAD VMSAVE supported
Feb 13 19:51:11.959778 kernel: kvm_amd: Virtual GIF supported
Feb 13 19:51:11.974112 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Feb 13 19:51:11.974459 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Feb 13 19:51:11.984536 kernel: EDAC MC: Ver: 3.0.0
Feb 13 19:51:11.990692 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Feb 13 19:51:11.992513 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM.
Feb 13 19:51:12.032850 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization.
Feb 13 19:51:12.044776 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes...
Feb 13 19:51:12.057406 lvm[1451]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Feb 13 19:51:12.057404 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Feb 13 19:51:12.097232 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes.
Feb 13 19:51:12.098915 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Feb 13 19:51:12.100470 systemd[1]: Reached target sysinit.target - System Initialization.
Feb 13 19:51:12.101960 systemd[1]: Started motdgen.path - Watch for update engine configuration changes.
Feb 13 19:51:12.103434 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data.
Feb 13 19:51:12.105076 systemd[1]: Started logrotate.timer - Daily rotation of log files.
Feb 13 19:51:12.106353 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information..
Feb 13 19:51:12.107868 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories.
Feb 13 19:51:12.109370 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate).
Feb 13 19:51:12.109415 systemd[1]: Reached target paths.target - Path Units.
Feb 13 19:51:12.110442 systemd[1]: Reached target timers.target - Timer Units.
Feb 13 19:51:12.112763 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket.
Feb 13 19:51:12.116625 systemd[1]: Starting docker.socket - Docker Socket for the API...
Feb 13 19:51:12.127736 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket.
Feb 13 19:51:12.130733 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes...
Feb 13 19:51:12.132407 systemd[1]: Listening on docker.socket - Docker Socket for the API.
Feb 13 19:51:12.133651 systemd[1]: Reached target sockets.target - Socket Units.
Feb 13 19:51:12.134698 systemd[1]: Reached target basic.target - Basic System.
Feb 13 19:51:12.135742 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met.
Feb 13 19:51:12.135769 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met.
Feb 13 19:51:12.149592 systemd[1]: Starting containerd.service - containerd container runtime...
Feb 13 19:51:12.156244 lvm[1457]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Feb 13 19:51:12.166853 systemd[1]: Starting dbus.service - D-Bus System Message Bus...
Feb 13 19:51:12.169278 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit...
Feb 13 19:51:12.174190 systemd[1]: Starting extend-filesystems.service - Extend Filesystems...
Feb 13 19:51:12.174802 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment).
Feb 13 19:51:12.177222 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd...
Feb 13 19:51:12.180095 jq[1460]: false
Feb 13 19:51:12.181571 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline...
Feb 13 19:51:12.185653 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys...
Feb 13 19:51:12.190584 systemd[1]: Starting systemd-logind.service - User Login Management...
Feb 13 19:51:12.193030 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0).
Feb 13 19:51:12.193778 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details.
Feb 13 19:51:12.194898 systemd[1]: Starting update-engine.service - Update Engine...
Feb 13 19:51:12.198611 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition...
Feb 13 19:51:12.202582 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes.
Feb 13 19:51:12.207057 jq[1468]: true
Feb 13 19:51:12.208089 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'.
Feb 13 19:51:12.208332 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped.
Feb 13 19:51:12.208753 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully.
Feb 13 19:51:12.208963 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline.
Feb 13 19:51:12.212240 extend-filesystems[1461]: Found loop3
Feb 13 19:51:12.212240 extend-filesystems[1461]: Found loop4
Feb 13 19:51:12.212240 extend-filesystems[1461]: Found loop5
Feb 13 19:51:12.212240 extend-filesystems[1461]: Found sr0
Feb 13 19:51:12.212240 extend-filesystems[1461]: Found vda
Feb 13 19:51:12.225174 extend-filesystems[1461]: Found vda1
Feb 13 19:51:12.225174 extend-filesystems[1461]: Found vda2
Feb 13 19:51:12.225174 extend-filesystems[1461]: Found vda3
Feb 13 19:51:12.225174 extend-filesystems[1461]: Found usr
Feb 13 19:51:12.225174 extend-filesystems[1461]: Found vda4
Feb 13 19:51:12.225174 extend-filesystems[1461]: Found vda6
Feb 13 19:51:12.225174 extend-filesystems[1461]: Found vda7
Feb 13 19:51:12.225174 extend-filesystems[1461]: Found vda9
Feb 13 19:51:12.225174 extend-filesystems[1461]: Checking size of /dev/vda9
Feb 13 19:51:12.217080 systemd[1]: Started dbus.service - D-Bus System Message Bus.
Feb 13 19:51:12.216436 dbus-daemon[1459]: [system] SELinux support is enabled
Feb 13 19:51:12.247743 update_engine[1467]: I20250213 19:51:12.238757  1467 main.cc:92] Flatcar Update Engine starting
Feb 13 19:51:12.247743 update_engine[1467]: I20250213 19:51:12.240306  1467 update_check_scheduler.cc:74] Next update check in 9m57s
Feb 13 19:51:12.222902 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml).
Feb 13 19:51:12.222964 systemd[1]: Reached target system-config.target - Load system-provided cloud configs.
Feb 13 19:51:12.225044 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url).
Feb 13 19:51:12.225080 systemd[1]: Reached target user-config.target - Load user-provided cloud configs.
Feb 13 19:51:12.235857 systemd[1]: motdgen.service: Deactivated successfully.
Feb 13 19:51:12.236125 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd.
Feb 13 19:51:12.240226 systemd[1]: Started update-engine.service - Update Engine.
Feb 13 19:51:12.250525 (ntainerd)[1485]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR
Feb 13 19:51:12.250890 systemd[1]: Started locksmithd.service - Cluster reboot manager.
Feb 13 19:51:12.256162 jq[1473]: true
Feb 13 19:51:12.257739 extend-filesystems[1461]: Resized partition /dev/vda9
Feb 13 19:51:12.272593 extend-filesystems[1494]: resize2fs 1.47.1 (20-May-2024)
Feb 13 19:51:12.279742 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 38 scanned by (udev-worker) (1367)
Feb 13 19:51:12.289836 kernel: EXT4-fs (vda9): resizing filesystem from 553472 to 1864699 blocks
Feb 13 19:51:12.324226 kernel: EXT4-fs (vda9): resized filesystem to 1864699
Feb 13 19:51:12.340215 locksmithd[1490]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot"
Feb 13 19:51:12.350224 extend-filesystems[1494]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required
Feb 13 19:51:12.350224 extend-filesystems[1494]: old_desc_blocks = 1, new_desc_blocks = 1
Feb 13 19:51:12.350224 extend-filesystems[1494]: The filesystem on /dev/vda9 is now 1864699 (4k) blocks long.
Feb 13 19:51:12.358274 extend-filesystems[1461]: Resized filesystem in /dev/vda9
Feb 13 19:51:12.351548 systemd-logind[1466]: Watching system buttons on /dev/input/event1 (Power Button)
Feb 13 19:51:12.351573 systemd-logind[1466]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard)
Feb 13 19:51:12.352148 systemd[1]: extend-filesystems.service: Deactivated successfully.
Feb 13 19:51:12.352413 systemd[1]: Finished extend-filesystems.service - Extend Filesystems.
Feb 13 19:51:12.352629 systemd-logind[1466]: New seat seat0.
Feb 13 19:51:12.359654 systemd[1]: Started systemd-logind.service - User Login Management.
Feb 13 19:51:12.370569 sshd_keygen[1487]: ssh-keygen: generating new host keys: RSA ECDSA ED25519
Feb 13 19:51:12.373951 bash[1509]: Updated "/home/core/.ssh/authorized_keys"
Feb 13 19:51:12.376733 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition.
Feb 13 19:51:12.380206 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met.
Feb 13 19:51:12.432608 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys.
Feb 13 19:51:12.463974 systemd[1]: Starting issuegen.service - Generate /run/issue...
Feb 13 19:51:12.512986 systemd[1]: issuegen.service: Deactivated successfully.
Feb 13 19:51:12.513262 systemd[1]: Finished issuegen.service - Generate /run/issue.
Feb 13 19:51:12.516721 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions...
Feb 13 19:51:12.536137 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions.
Feb 13 19:51:12.554652 systemd[1]: Started getty@tty1.service - Getty on tty1.
Feb 13 19:51:12.557115 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0.
Feb 13 19:51:12.558779 systemd[1]: Reached target getty.target - Login Prompts.
Feb 13 19:51:12.634405 containerd[1485]: time="2025-02-13T19:51:12.634293219Z" level=info msg="starting containerd" revision=9b2ad7760328148397346d10c7b2004271249db4 version=v1.7.23
Feb 13 19:51:12.657702 containerd[1485]: time="2025-02-13T19:51:12.657625558Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
Feb 13 19:51:12.659774 containerd[1485]: time="2025-02-13T19:51:12.659732338Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.74-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1
Feb 13 19:51:12.659774 containerd[1485]: time="2025-02-13T19:51:12.659767273Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
Feb 13 19:51:12.659828 containerd[1485]: time="2025-02-13T19:51:12.659784446Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
Feb 13 19:51:12.660025 containerd[1485]: time="2025-02-13T19:51:12.659997525Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
Feb 13 19:51:12.660025 containerd[1485]: time="2025-02-13T19:51:12.660019025Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
Feb 13 19:51:12.660119 containerd[1485]: time="2025-02-13T19:51:12.660096030Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
Feb 13 19:51:12.660141 containerd[1485]: time="2025-02-13T19:51:12.660117430Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
Feb 13 19:51:12.660419 containerd[1485]: time="2025-02-13T19:51:12.660368430Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
Feb 13 19:51:12.660443 containerd[1485]: time="2025-02-13T19:51:12.660411742Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
Feb 13 19:51:12.660463 containerd[1485]: time="2025-02-13T19:51:12.660436739Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1
Feb 13 19:51:12.660492 containerd[1485]: time="2025-02-13T19:51:12.660461735Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
Feb 13 19:51:12.660614 containerd[1485]: time="2025-02-13T19:51:12.660585588Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
Feb 13 19:51:12.660902 containerd[1485]: time="2025-02-13T19:51:12.660873017Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
Feb 13 19:51:12.661071 containerd[1485]: time="2025-02-13T19:51:12.661040691Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
Feb 13 19:51:12.661071 containerd[1485]: time="2025-02-13T19:51:12.661063945Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
Feb 13 19:51:12.661215 containerd[1485]: time="2025-02-13T19:51:12.661188588Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
Feb 13 19:51:12.661294 containerd[1485]: time="2025-02-13T19:51:12.661268628Z" level=info msg="metadata content store policy set" policy=shared
Feb 13 19:51:12.667698 containerd[1485]: time="2025-02-13T19:51:12.667652999Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
Feb 13 19:51:12.667739 containerd[1485]: time="2025-02-13T19:51:12.667716368Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
Feb 13 19:51:12.667739 containerd[1485]: time="2025-02-13T19:51:12.667735474Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
Feb 13 19:51:12.667793 containerd[1485]: time="2025-02-13T19:51:12.667754309Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
Feb 13 19:51:12.667793 containerd[1485]: time="2025-02-13T19:51:12.667768275Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
Feb 13 19:51:12.667968 containerd[1485]: time="2025-02-13T19:51:12.667931782Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
Feb 13 19:51:12.668314 containerd[1485]: time="2025-02-13T19:51:12.668264135Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
Feb 13 19:51:12.668457 containerd[1485]: time="2025-02-13T19:51:12.668437821Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
Feb 13 19:51:12.668481 containerd[1485]: time="2025-02-13T19:51:12.668460333Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
Feb 13 19:51:12.668481 containerd[1485]: time="2025-02-13T19:51:12.668476173Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
Feb 13 19:51:12.668519 containerd[1485]: time="2025-02-13T19:51:12.668495188Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
Feb 13 19:51:12.668519 containerd[1485]: time="2025-02-13T19:51:12.668510958Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
Feb 13 19:51:12.668555 containerd[1485]: time="2025-02-13T19:51:12.668523331Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
Feb 13 19:51:12.668555 containerd[1485]: time="2025-02-13T19:51:12.668537718Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
Feb 13 19:51:12.668555 containerd[1485]: time="2025-02-13T19:51:12.668551894Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
Feb 13 19:51:12.668626 containerd[1485]: time="2025-02-13T19:51:12.668565209Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
Feb 13 19:51:12.668626 containerd[1485]: time="2025-02-13T19:51:12.668578194Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
Feb 13 19:51:12.668626 containerd[1485]: time="2025-02-13T19:51:12.668590006Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
Feb 13 19:51:12.668626 containerd[1485]: time="2025-02-13T19:51:12.668612117Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
Feb 13 19:51:12.668626 containerd[1485]: time="2025-02-13T19:51:12.668626084Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
Feb 13 19:51:12.668717 containerd[1485]: time="2025-02-13T19:51:12.668639128Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
Feb 13 19:51:12.668717 containerd[1485]: time="2025-02-13T19:51:12.668652463Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
Feb 13 19:51:12.668717 containerd[1485]: time="2025-02-13T19:51:12.668664446Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
Feb 13 19:51:12.668717 containerd[1485]: time="2025-02-13T19:51:12.668677270Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
Feb 13 19:51:12.668717 containerd[1485]: time="2025-02-13T19:51:12.668689322Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
Feb 13 19:51:12.668717 containerd[1485]: time="2025-02-13T19:51:12.668701455Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
Feb 13 19:51:12.668717 containerd[1485]: time="2025-02-13T19:51:12.668714369Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
Feb 13 19:51:12.668844 containerd[1485]: time="2025-02-13T19:51:12.668729317Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
Feb 13 19:51:12.668844 containerd[1485]: time="2025-02-13T19:51:12.668741109Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
Feb 13 19:51:12.668844 containerd[1485]: time="2025-02-13T19:51:12.668752230Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
Feb 13 19:51:12.668844 containerd[1485]: time="2025-02-13T19:51:12.668764373Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
Feb 13 19:51:12.668844 containerd[1485]: time="2025-02-13T19:51:12.668786575Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
Feb 13 19:51:12.668844 containerd[1485]: time="2025-02-13T19:51:12.668806322Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
Feb 13 19:51:12.668844 containerd[1485]: time="2025-02-13T19:51:12.668819496Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
Feb 13 19:51:12.668844 containerd[1485]: time="2025-02-13T19:51:12.668829936Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
Feb 13 19:51:12.668991 containerd[1485]: time="2025-02-13T19:51:12.668880400Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
Feb 13 19:51:12.668991 containerd[1485]: time="2025-02-13T19:51:12.668897543Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..."
error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1 Feb 13 19:51:12.668991 containerd[1485]: time="2025-02-13T19:51:12.668908633Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Feb 13 19:51:12.668991 containerd[1485]: time="2025-02-13T19:51:12.668921027Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1 Feb 13 19:51:12.668991 containerd[1485]: time="2025-02-13T19:51:12.668929613Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 Feb 13 19:51:12.668991 containerd[1485]: time="2025-02-13T19:51:12.668941174Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1 Feb 13 19:51:12.668991 containerd[1485]: time="2025-02-13T19:51:12.668951303Z" level=info msg="NRI interface is disabled by configuration." Feb 13 19:51:12.668991 containerd[1485]: time="2025-02-13T19:51:12.668969177Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." 
type=io.containerd.grpc.v1 Feb 13 19:51:12.669305 containerd[1485]: time="2025-02-13T19:51:12.669255213Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false 
UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Feb 13 19:51:12.669305 containerd[1485]: time="2025-02-13T19:51:12.669301179Z" level=info msg="Connect containerd service" Feb 13 19:51:12.669470 containerd[1485]: time="2025-02-13T19:51:12.669334231Z" level=info msg="using legacy CRI server" Feb 13 19:51:12.669470 containerd[1485]: time="2025-02-13T19:51:12.669341836Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Feb 13 19:51:12.669470 containerd[1485]: time="2025-02-13T19:51:12.669460889Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Feb 13 19:51:12.670707 containerd[1485]: time="2025-02-13T19:51:12.670667211Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Feb 13 19:51:12.670785 containerd[1485]: time="2025-02-13T19:51:12.670756288Z" level=info msg="Start subscribing containerd event" Feb 13 19:51:12.671043 containerd[1485]: time="2025-02-13T19:51:12.670799980Z" level=info msg="Start recovering state" Feb 13 19:51:12.671164 containerd[1485]: time="2025-02-13T19:51:12.671140328Z" level=info msg="Start event monitor" Feb 13 19:51:12.671201 containerd[1485]: time="2025-02-13T19:51:12.671173530Z" level=info msg="Start 
snapshots syncer" Feb 13 19:51:12.671201 containerd[1485]: time="2025-02-13T19:51:12.671189059Z" level=info msg="Start cni network conf syncer for default" Feb 13 19:51:12.671370 containerd[1485]: time="2025-02-13T19:51:12.671200631Z" level=info msg="Start streaming server" Feb 13 19:51:12.671370 containerd[1485]: time="2025-02-13T19:51:12.671354740Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Feb 13 19:51:12.671459 containerd[1485]: time="2025-02-13T19:51:12.671437445Z" level=info msg=serving... address=/run/containerd/containerd.sock Feb 13 19:51:12.671610 systemd[1]: Started containerd.service - containerd container runtime. Feb 13 19:51:12.672885 containerd[1485]: time="2025-02-13T19:51:12.672849362Z" level=info msg="containerd successfully booted in 0.041798s" Feb 13 19:51:13.290653 systemd-networkd[1401]: eth0: Gained IPv6LL Feb 13 19:51:13.293950 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Feb 13 19:51:13.295936 systemd[1]: Reached target network-online.target - Network is Online. Feb 13 19:51:13.309635 systemd[1]: Starting coreos-metadata.service - QEMU metadata agent... Feb 13 19:51:13.312342 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Feb 13 19:51:13.314744 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Feb 13 19:51:13.336126 systemd[1]: coreos-metadata.service: Deactivated successfully. Feb 13 19:51:13.336378 systemd[1]: Finished coreos-metadata.service - QEMU metadata agent. Feb 13 19:51:13.338245 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Feb 13 19:51:13.340764 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Feb 13 19:51:14.471692 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Feb 13 19:51:14.473675 systemd[1]: Reached target multi-user.target - Multi-User System. 
Feb 13 19:51:14.475159 systemd[1]: Startup finished in 806ms (kernel) + 5.594s (initrd) + 5.084s (userspace) = 11.485s. Feb 13 19:51:14.477285 (kubelet)[1566]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Feb 13 19:51:15.334454 kubelet[1566]: E0213 19:51:15.334354 1566 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Feb 13 19:51:15.339616 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Feb 13 19:51:15.339902 systemd[1]: kubelet.service: Failed with result 'exit-code'. Feb 13 19:51:15.340437 systemd[1]: kubelet.service: Consumed 1.871s CPU time. Feb 13 19:51:22.010700 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Feb 13 19:51:22.012163 systemd[1]: Started sshd@0-10.0.0.110:22-10.0.0.1:52834.service - OpenSSH per-connection server daemon (10.0.0.1:52834). Feb 13 19:51:22.077088 sshd[1579]: Accepted publickey for core from 10.0.0.1 port 52834 ssh2: RSA SHA256:8WP2kqV5KzwZsuVRMXRFZkAHZWbkdD5kizbT2H+wOcw Feb 13 19:51:22.079133 sshd-session[1579]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 19:51:22.091857 systemd-logind[1466]: New session 1 of user core. Feb 13 19:51:22.093409 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Feb 13 19:51:22.103804 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Feb 13 19:51:22.118956 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Feb 13 19:51:22.132808 systemd[1]: Starting user@500.service - User Manager for UID 500... 
Feb 13 19:51:22.136132 (systemd)[1583]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Feb 13 19:51:22.242132 systemd[1583]: Queued start job for default target default.target. Feb 13 19:51:22.259051 systemd[1583]: Created slice app.slice - User Application Slice. Feb 13 19:51:22.259084 systemd[1583]: Reached target paths.target - Paths. Feb 13 19:51:22.259103 systemd[1583]: Reached target timers.target - Timers. Feb 13 19:51:22.261166 systemd[1583]: Starting dbus.socket - D-Bus User Message Bus Socket... Feb 13 19:51:22.273515 systemd[1583]: Listening on dbus.socket - D-Bus User Message Bus Socket. Feb 13 19:51:22.273699 systemd[1583]: Reached target sockets.target - Sockets. Feb 13 19:51:22.273727 systemd[1583]: Reached target basic.target - Basic System. Feb 13 19:51:22.273777 systemd[1583]: Reached target default.target - Main User Target. Feb 13 19:51:22.273819 systemd[1583]: Startup finished in 130ms. Feb 13 19:51:22.274221 systemd[1]: Started user@500.service - User Manager for UID 500. Feb 13 19:51:22.276031 systemd[1]: Started session-1.scope - Session 1 of User core. Feb 13 19:51:22.343457 systemd[1]: Started sshd@1-10.0.0.110:22-10.0.0.1:52836.service - OpenSSH per-connection server daemon (10.0.0.1:52836). Feb 13 19:51:22.390311 sshd[1594]: Accepted publickey for core from 10.0.0.1 port 52836 ssh2: RSA SHA256:8WP2kqV5KzwZsuVRMXRFZkAHZWbkdD5kizbT2H+wOcw Feb 13 19:51:22.391891 sshd-session[1594]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 19:51:22.396360 systemd-logind[1466]: New session 2 of user core. Feb 13 19:51:22.411520 systemd[1]: Started session-2.scope - Session 2 of User core. Feb 13 19:51:22.467013 sshd[1596]: Connection closed by 10.0.0.1 port 52836 Feb 13 19:51:22.467367 sshd-session[1594]: pam_unix(sshd:session): session closed for user core Feb 13 19:51:22.480148 systemd[1]: sshd@1-10.0.0.110:22-10.0.0.1:52836.service: Deactivated successfully. 
Feb 13 19:51:22.481765 systemd[1]: session-2.scope: Deactivated successfully. Feb 13 19:51:22.483094 systemd-logind[1466]: Session 2 logged out. Waiting for processes to exit. Feb 13 19:51:22.489619 systemd[1]: Started sshd@2-10.0.0.110:22-10.0.0.1:52844.service - OpenSSH per-connection server daemon (10.0.0.1:52844). Feb 13 19:51:22.490567 systemd-logind[1466]: Removed session 2. Feb 13 19:51:22.526261 sshd[1601]: Accepted publickey for core from 10.0.0.1 port 52844 ssh2: RSA SHA256:8WP2kqV5KzwZsuVRMXRFZkAHZWbkdD5kizbT2H+wOcw Feb 13 19:51:22.527785 sshd-session[1601]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 19:51:22.532528 systemd-logind[1466]: New session 3 of user core. Feb 13 19:51:22.546558 systemd[1]: Started session-3.scope - Session 3 of User core. Feb 13 19:51:22.598505 sshd[1603]: Connection closed by 10.0.0.1 port 52844 Feb 13 19:51:22.598887 sshd-session[1601]: pam_unix(sshd:session): session closed for user core Feb 13 19:51:22.614540 systemd[1]: sshd@2-10.0.0.110:22-10.0.0.1:52844.service: Deactivated successfully. Feb 13 19:51:22.616694 systemd[1]: session-3.scope: Deactivated successfully. Feb 13 19:51:22.618637 systemd-logind[1466]: Session 3 logged out. Waiting for processes to exit. Feb 13 19:51:22.629746 systemd[1]: Started sshd@3-10.0.0.110:22-10.0.0.1:52846.service - OpenSSH per-connection server daemon (10.0.0.1:52846). Feb 13 19:51:22.630824 systemd-logind[1466]: Removed session 3. Feb 13 19:51:22.669185 sshd[1608]: Accepted publickey for core from 10.0.0.1 port 52846 ssh2: RSA SHA256:8WP2kqV5KzwZsuVRMXRFZkAHZWbkdD5kizbT2H+wOcw Feb 13 19:51:22.671257 sshd-session[1608]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 19:51:22.675836 systemd-logind[1466]: New session 4 of user core. Feb 13 19:51:22.690810 systemd[1]: Started session-4.scope - Session 4 of User core. 
Feb 13 19:51:22.748089 sshd[1610]: Connection closed by 10.0.0.1 port 52846 Feb 13 19:51:22.748615 sshd-session[1608]: pam_unix(sshd:session): session closed for user core Feb 13 19:51:22.760033 systemd[1]: sshd@3-10.0.0.110:22-10.0.0.1:52846.service: Deactivated successfully. Feb 13 19:51:22.761930 systemd[1]: session-4.scope: Deactivated successfully. Feb 13 19:51:22.763565 systemd-logind[1466]: Session 4 logged out. Waiting for processes to exit. Feb 13 19:51:22.774770 systemd[1]: Started sshd@4-10.0.0.110:22-10.0.0.1:52862.service - OpenSSH per-connection server daemon (10.0.0.1:52862). Feb 13 19:51:22.775970 systemd-logind[1466]: Removed session 4. Feb 13 19:51:22.814300 sshd[1615]: Accepted publickey for core from 10.0.0.1 port 52862 ssh2: RSA SHA256:8WP2kqV5KzwZsuVRMXRFZkAHZWbkdD5kizbT2H+wOcw Feb 13 19:51:22.816179 sshd-session[1615]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 19:51:22.821133 systemd-logind[1466]: New session 5 of user core. Feb 13 19:51:22.835783 systemd[1]: Started session-5.scope - Session 5 of User core. Feb 13 19:51:22.896956 sudo[1618]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Feb 13 19:51:22.897296 sudo[1618]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Feb 13 19:51:22.917696 sudo[1618]: pam_unix(sudo:session): session closed for user root Feb 13 19:51:22.919745 sshd[1617]: Connection closed by 10.0.0.1 port 52862 Feb 13 19:51:22.920247 sshd-session[1615]: pam_unix(sshd:session): session closed for user core Feb 13 19:51:22.936660 systemd[1]: sshd@4-10.0.0.110:22-10.0.0.1:52862.service: Deactivated successfully. Feb 13 19:51:22.938416 systemd[1]: session-5.scope: Deactivated successfully. Feb 13 19:51:22.939869 systemd-logind[1466]: Session 5 logged out. Waiting for processes to exit. Feb 13 19:51:22.941340 systemd[1]: Started sshd@5-10.0.0.110:22-10.0.0.1:52876.service - OpenSSH per-connection server daemon (10.0.0.1:52876). 
Feb 13 19:51:22.942130 systemd-logind[1466]: Removed session 5. Feb 13 19:51:22.985063 sshd[1623]: Accepted publickey for core from 10.0.0.1 port 52876 ssh2: RSA SHA256:8WP2kqV5KzwZsuVRMXRFZkAHZWbkdD5kizbT2H+wOcw Feb 13 19:51:22.986838 sshd-session[1623]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 19:51:22.991360 systemd-logind[1466]: New session 6 of user core. Feb 13 19:51:23.009643 systemd[1]: Started session-6.scope - Session 6 of User core. Feb 13 19:51:23.065482 sudo[1627]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Feb 13 19:51:23.065898 sudo[1627]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Feb 13 19:51:23.069989 sudo[1627]: pam_unix(sudo:session): session closed for user root Feb 13 19:51:23.076077 sudo[1626]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules Feb 13 19:51:23.076415 sudo[1626]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Feb 13 19:51:23.096949 systemd[1]: Starting audit-rules.service - Load Audit Rules... Feb 13 19:51:23.130489 augenrules[1649]: No rules Feb 13 19:51:23.132526 systemd[1]: audit-rules.service: Deactivated successfully. Feb 13 19:51:23.132783 systemd[1]: Finished audit-rules.service - Load Audit Rules. Feb 13 19:51:23.134190 sudo[1626]: pam_unix(sudo:session): session closed for user root Feb 13 19:51:23.135766 sshd[1625]: Connection closed by 10.0.0.1 port 52876 Feb 13 19:51:23.136194 sshd-session[1623]: pam_unix(sshd:session): session closed for user core Feb 13 19:51:23.143412 systemd[1]: sshd@5-10.0.0.110:22-10.0.0.1:52876.service: Deactivated successfully. Feb 13 19:51:23.145201 systemd[1]: session-6.scope: Deactivated successfully. Feb 13 19:51:23.145905 systemd-logind[1466]: Session 6 logged out. Waiting for processes to exit. 
Feb 13 19:51:23.158697 systemd[1]: Started sshd@6-10.0.0.110:22-10.0.0.1:52884.service - OpenSSH per-connection server daemon (10.0.0.1:52884). Feb 13 19:51:23.159827 systemd-logind[1466]: Removed session 6. Feb 13 19:51:23.198918 sshd[1657]: Accepted publickey for core from 10.0.0.1 port 52884 ssh2: RSA SHA256:8WP2kqV5KzwZsuVRMXRFZkAHZWbkdD5kizbT2H+wOcw Feb 13 19:51:23.200511 sshd-session[1657]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 19:51:23.204920 systemd-logind[1466]: New session 7 of user core. Feb 13 19:51:23.214513 systemd[1]: Started session-7.scope - Session 7 of User core. Feb 13 19:51:23.267880 sudo[1660]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Feb 13 19:51:23.268214 sudo[1660]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Feb 13 19:51:23.295796 systemd[1]: Starting coreos-metadata.service - QEMU metadata agent... Feb 13 19:51:23.315683 systemd[1]: coreos-metadata.service: Deactivated successfully. Feb 13 19:51:23.316000 systemd[1]: Finished coreos-metadata.service - QEMU metadata agent. Feb 13 19:51:25.108458 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Feb 13 19:51:25.108676 systemd[1]: kubelet.service: Consumed 1.871s CPU time. Feb 13 19:51:25.119824 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Feb 13 19:51:25.151852 systemd[1]: Reloading requested from client PID 1701 ('systemctl') (unit session-7.scope)... Feb 13 19:51:25.151878 systemd[1]: Reloading... Feb 13 19:51:25.249419 zram_generator::config[1739]: No configuration found. Feb 13 19:51:25.528483 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Feb 13 19:51:25.611144 systemd[1]: Reloading finished in 458 ms. 
Feb 13 19:51:25.667132 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Feb 13 19:51:25.667251 systemd[1]: kubelet.service: Failed with result 'signal'. Feb 13 19:51:25.667631 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Feb 13 19:51:25.671113 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Feb 13 19:51:25.860727 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Feb 13 19:51:25.867335 (kubelet)[1788]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Feb 13 19:51:25.925580 kubelet[1788]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Feb 13 19:51:25.925580 kubelet[1788]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Feb 13 19:51:25.925580 kubelet[1788]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Feb 13 19:51:25.926048 kubelet[1788]: I0213 19:51:25.925663 1788 server.go:215] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Feb 13 19:51:26.138279 kubelet[1788]: I0213 19:51:26.138139 1788 server.go:520] "Kubelet version" kubeletVersion="v1.32.0" Feb 13 19:51:26.138279 kubelet[1788]: I0213 19:51:26.138179 1788 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Feb 13 19:51:26.138528 kubelet[1788]: I0213 19:51:26.138499 1788 server.go:954] "Client rotation is on, will bootstrap in background" Feb 13 19:51:26.165080 kubelet[1788]: I0213 19:51:26.164831 1788 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Feb 13 19:51:26.184139 kubelet[1788]: E0213 19:51:26.184059 1788 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Feb 13 19:51:26.184139 kubelet[1788]: I0213 19:51:26.184107 1788 server.go:1421] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." Feb 13 19:51:26.191106 kubelet[1788]: I0213 19:51:26.191041 1788 server.go:772] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Feb 13 19:51:26.193737 kubelet[1788]: I0213 19:51:26.193616 1788 container_manager_linux.go:268] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Feb 13 19:51:26.193999 kubelet[1788]: I0213 19:51:26.193723 1788 container_manager_linux.go:273] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"10.0.0.110","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Feb 13 19:51:26.194138 kubelet[1788]: I0213 19:51:26.194010 1788 topology_manager.go:138] "Creating topology manager with none policy" 
Feb 13 19:51:26.194138 kubelet[1788]: I0213 19:51:26.194025 1788 container_manager_linux.go:304] "Creating device plugin manager" Feb 13 19:51:26.194284 kubelet[1788]: I0213 19:51:26.194258 1788 state_mem.go:36] "Initialized new in-memory state store" Feb 13 19:51:26.199479 kubelet[1788]: I0213 19:51:26.199352 1788 kubelet.go:446] "Attempting to sync node with API server" Feb 13 19:51:26.199479 kubelet[1788]: I0213 19:51:26.199409 1788 kubelet.go:341] "Adding static pod path" path="/etc/kubernetes/manifests" Feb 13 19:51:26.199479 kubelet[1788]: I0213 19:51:26.199439 1788 kubelet.go:352] "Adding apiserver pod source" Feb 13 19:51:26.199479 kubelet[1788]: I0213 19:51:26.199461 1788 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Feb 13 19:51:26.202282 kubelet[1788]: E0213 19:51:26.202125 1788 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 19:51:26.202282 kubelet[1788]: E0213 19:51:26.202184 1788 file.go:98] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 19:51:26.203876 kubelet[1788]: I0213 19:51:26.203822 1788 kuberuntime_manager.go:269] "Container runtime initialized" containerRuntime="containerd" version="v1.7.23" apiVersion="v1" Feb 13 19:51:26.204399 kubelet[1788]: I0213 19:51:26.204350 1788 kubelet.go:890] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Feb 13 19:51:26.205074 kubelet[1788]: W0213 19:51:26.205026 1788 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. 
Feb 13 19:51:26.209552 kubelet[1788]: I0213 19:51:26.209042 1788 watchdog_linux.go:99] "Systemd watchdog is not enabled"
Feb 13 19:51:26.209552 kubelet[1788]: I0213 19:51:26.209150 1788 server.go:1287] "Started kubelet"
Feb 13 19:51:26.209552 kubelet[1788]: W0213 19:51:26.209202 1788 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes "10.0.0.110" is forbidden: User "system:anonymous" cannot list resource "nodes" in API group "" at the cluster scope
Feb 13 19:51:26.210499 kubelet[1788]: I0213 19:51:26.209829 1788 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
Feb 13 19:51:26.210499 kubelet[1788]: I0213 19:51:26.210455 1788 server.go:243] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Feb 13 19:51:26.210610 kubelet[1788]: I0213 19:51:26.210593 1788 server.go:169] "Starting to listen" address="0.0.0.0" port=10250
Feb 13 19:51:26.211985 kubelet[1788]: I0213 19:51:26.211485 1788 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Feb 13 19:51:26.213167 kubelet[1788]: E0213 19:51:26.209272 1788 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes \"10.0.0.110\" is forbidden: User \"system:anonymous\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError"
Feb 13 19:51:26.213167 kubelet[1788]: W0213 19:51:26.209201 1788 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:anonymous" cannot list resource "services" in API group "" at the cluster scope
Feb 13 19:51:26.213307 kubelet[1788]: E0213 19:51:26.213194 1788 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User \"system:anonymous\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError"
Feb 13 19:51:26.215140 kubelet[1788]: I0213 19:51:26.214691 1788 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key"
Feb 13 19:51:26.216820 kubelet[1788]: I0213 19:51:26.215980 1788 server.go:490] "Adding debug handlers to kubelet server"
Feb 13 19:51:26.217465 kubelet[1788]: E0213 19:51:26.217438 1788 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"10.0.0.110\" not found"
Feb 13 19:51:26.217513 kubelet[1788]: I0213 19:51:26.217481 1788 volume_manager.go:297] "Starting Kubelet Volume Manager"
Feb 13 19:51:26.220364 kubelet[1788]: I0213 19:51:26.220326 1788 desired_state_of_world_populator.go:149] "Desired state populator starts to run"
Feb 13 19:51:26.220476 kubelet[1788]: I0213 19:51:26.220453 1788 reconciler.go:26] "Reconciler: start to sync state"
Feb 13 19:51:26.222278 kubelet[1788]: E0213 19:51:26.222220 1788 controller.go:145] "Failed to ensure lease exists, will retry" err="leases.coordination.k8s.io \"10.0.0.110\" is forbidden: User \"system:anonymous\" cannot get resource \"leases\" in API group \"coordination.k8s.io\" in the namespace \"kube-node-lease\"" interval="200ms"
Feb 13 19:51:26.223289 kubelet[1788]: I0213 19:51:26.223254 1788 factory.go:221] Registration of the systemd container factory successfully
Feb 13 19:51:26.223468 kubelet[1788]: I0213 19:51:26.223420 1788 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory
Feb 13 19:51:26.224184 kubelet[1788]: E0213 19:51:26.224149 1788 kubelet.go:1561] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem"
Feb 13 19:51:26.225530 kubelet[1788]: I0213 19:51:26.225509 1788 factory.go:221] Registration of the containerd container factory successfully
Feb 13 19:51:26.304768 kubelet[1788]: W0213 19:51:26.304709 1788 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:anonymous" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
Feb 13 19:51:26.304768 kubelet[1788]: E0213 19:51:26.304764 1788 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:anonymous\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
Feb 13 19:51:26.306491 kubelet[1788]: E0213 19:51:26.302322 1788 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{10.0.0.110.1823dc7756b5a6d9 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:10.0.0.110,UID:10.0.0.110,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:10.0.0.110,},FirstTimestamp:2025-02-13 19:51:26.209107673 +0000 UTC m=+0.336768562,LastTimestamp:2025-02-13 19:51:26.209107673 +0000 UTC m=+0.336768562,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:10.0.0.110,}"
Feb 13 19:51:26.309315 kubelet[1788]: I0213 19:51:26.309295 1788 cpu_manager.go:221] "Starting CPU manager" policy="none"
Feb 13 19:51:26.309822 kubelet[1788]: I0213 19:51:26.309448 1788 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s"
Feb 13 19:51:26.309822 kubelet[1788]: I0213 19:51:26.309489 1788 state_mem.go:36] "Initialized new in-memory state store"
Feb 13 19:51:26.314400 kubelet[1788]: E0213 19:51:26.314242 1788 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{10.0.0.110.1823dc77579abe28 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:10.0.0.110,UID:10.0.0.110,APIVersion:,ResourceVersion:,FieldPath:,},Reason:InvalidDiskCapacity,Message:invalid capacity 0 on image filesystem,Source:EventSource{Component:kubelet,Host:10.0.0.110,},FirstTimestamp:2025-02-13 19:51:26.224121384 +0000 UTC m=+0.351782263,LastTimestamp:2025-02-13 19:51:26.224121384 +0000 UTC m=+0.351782263,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:10.0.0.110,}"
Feb 13 19:51:26.317691 kubelet[1788]: E0213 19:51:26.317635 1788 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"10.0.0.110\" not found"
Feb 13 19:51:26.321650 kubelet[1788]: E0213 19:51:26.321518 1788 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{10.0.0.110.1823dc775ca3c5c8 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:10.0.0.110,UID:10.0.0.110,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientMemory,Message:Node 10.0.0.110 status is now: NodeHasSufficientMemory,Source:EventSource{Component:kubelet,Host:10.0.0.110,},FirstTimestamp:2025-02-13 19:51:26.30859924 +0000 UTC m=+0.436260119,LastTimestamp:2025-02-13 19:51:26.30859924 +0000 UTC m=+0.436260119,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:10.0.0.110,}"
Feb 13 19:51:26.327094 kubelet[1788]: E0213 19:51:26.326954 1788 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{10.0.0.110.1823dc775ca3f8ee default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:10.0.0.110,UID:10.0.0.110,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasNoDiskPressure,Message:Node 10.0.0.110 status is now: NodeHasNoDiskPressure,Source:EventSource{Component:kubelet,Host:10.0.0.110,},FirstTimestamp:2025-02-13 19:51:26.308612334 +0000 UTC m=+0.436273213,LastTimestamp:2025-02-13 19:51:26.308612334 +0000 UTC m=+0.436273213,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:10.0.0.110,}"
Feb 13 19:51:26.331847 kubelet[1788]: E0213 19:51:26.331736 1788 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{10.0.0.110.1823dc775ca402d5 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:10.0.0.110,UID:10.0.0.110,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientPID,Message:Node 10.0.0.110 status is now: NodeHasSufficientPID,Source:EventSource{Component:kubelet,Host:10.0.0.110,},FirstTimestamp:2025-02-13 19:51:26.308614869 +0000 UTC m=+0.436275748,LastTimestamp:2025-02-13 19:51:26.308614869 +0000 UTC m=+0.436275748,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:10.0.0.110,}"
Feb 13 19:51:26.418028 kubelet[1788]: E0213 19:51:26.417781 1788 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"10.0.0.110\" not found"
Feb 13 19:51:26.429215 kubelet[1788]: E0213 19:51:26.429109 1788 controller.go:145] "Failed to ensure lease exists, will retry" err="leases.coordination.k8s.io \"10.0.0.110\" is forbidden: User \"system:anonymous\" cannot get resource \"leases\" in API group \"coordination.k8s.io\" in the namespace \"kube-node-lease\"" interval="400ms"
Feb 13 19:51:26.518651 kubelet[1788]: E0213 19:51:26.518548 1788 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"10.0.0.110\" not found"
Feb 13 19:51:26.619535 kubelet[1788]: E0213 19:51:26.619443 1788 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"10.0.0.110\" not found"
Feb 13 19:51:26.720160 kubelet[1788]: E0213 19:51:26.719994 1788 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"10.0.0.110\" not found"
Feb 13 19:51:26.759265 kubelet[1788]: I0213 19:51:26.759193 1788 policy_none.go:49] "None policy: Start"
Feb 13 19:51:26.759265 kubelet[1788]: I0213 19:51:26.759268 1788 memory_manager.go:186] "Starting memorymanager" policy="None"
Feb 13 19:51:26.759490 kubelet[1788]: I0213 19:51:26.759291 1788 state_mem.go:35] "Initializing new in-memory state store"
Feb 13 19:51:26.772708 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice.
Feb 13 19:51:26.788471 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice.
Feb 13 19:51:26.793012 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice.
Feb 13 19:51:26.802308 kubelet[1788]: I0213 19:51:26.802218 1788 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4"
Feb 13 19:51:26.802855 kubelet[1788]: I0213 19:51:26.802828 1788 manager.go:519] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found"
Feb 13 19:51:26.803474 kubelet[1788]: I0213 19:51:26.803112 1788 eviction_manager.go:189] "Eviction manager: starting control loop"
Feb 13 19:51:26.803474 kubelet[1788]: I0213 19:51:26.803140 1788 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s"
Feb 13 19:51:26.803626 kubelet[1788]: I0213 19:51:26.803606 1788 plugin_manager.go:118] "Starting Kubelet Plugin Manager"
Feb 13 19:51:26.804992 kubelet[1788]: I0213 19:51:26.804960 1788 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6"
Feb 13 19:51:26.805129 kubelet[1788]: I0213 19:51:26.805116 1788 status_manager.go:227] "Starting to sync pod status with apiserver"
Feb 13 19:51:26.805234 kubelet[1788]: I0213 19:51:26.805220 1788 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started."
Feb 13 19:51:26.805327 kubelet[1788]: I0213 19:51:26.805314 1788 kubelet.go:2388] "Starting kubelet main sync loop"
Feb 13 19:51:26.807321 kubelet[1788]: E0213 19:51:26.805656 1788 kubelet.go:2412] "Skipping pod synchronization" err="PLEG is not healthy: pleg has yet to be successful"
Feb 13 19:51:26.807321 kubelet[1788]: E0213 19:51:26.805246 1788 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime"
Feb 13 19:51:26.807321 kubelet[1788]: E0213 19:51:26.805724 1788 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"10.0.0.110\" not found"
Feb 13 19:51:26.836023 kubelet[1788]: E0213 19:51:26.835953 1788 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"10.0.0.110\" not found" node="10.0.0.110"
Feb 13 19:51:26.905182 kubelet[1788]: I0213 19:51:26.905127 1788 kubelet_node_status.go:76] "Attempting to register node" node="10.0.0.110"
Feb 13 19:51:26.916827 kubelet[1788]: I0213 19:51:26.916772 1788 kubelet_node_status.go:79] "Successfully registered node" node="10.0.0.110"
Feb 13 19:51:26.916827 kubelet[1788]: E0213 19:51:26.916813 1788 kubelet_node_status.go:549] "Error updating node status, will retry" err="error getting node \"10.0.0.110\": node \"10.0.0.110\" not found"
Feb 13 19:51:26.923571 kubelet[1788]: E0213 19:51:26.923521 1788 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"10.0.0.110\" not found"
Feb 13 19:51:27.024441 kubelet[1788]: E0213 19:51:27.024224 1788 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"10.0.0.110\" not found"
Feb 13 19:51:27.125419 kubelet[1788]: E0213 19:51:27.125329 1788 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"10.0.0.110\" not found"
Feb 13 19:51:27.141205 kubelet[1788]: I0213 19:51:27.140848 1788 transport.go:147] "Certificate rotation detected, shutting down client connections to start using new credentials"
Feb 13 19:51:27.141205 kubelet[1788]: W0213 19:51:27.141126 1788 reflector.go:492] k8s.io/client-go/informers/factory.go:160: watch of *v1.RuntimeClass ended with: very short watch: k8s.io/client-go/informers/factory.go:160: Unexpected watch close - watch lasted less than a second and no items received
Feb 13 19:51:27.202717 kubelet[1788]: E0213 19:51:27.202631 1788 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 19:51:27.225904 kubelet[1788]: E0213 19:51:27.225805 1788 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"10.0.0.110\" not found"
Feb 13 19:51:27.326973 kubelet[1788]: E0213 19:51:27.326814 1788 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"10.0.0.110\" not found"
Feb 13 19:51:27.428690 kubelet[1788]: I0213 19:51:27.428641 1788 kuberuntime_manager.go:1702] "Updating runtime config through cri with podcidr" CIDR="192.168.1.0/24"
Feb 13 19:51:27.429202 containerd[1485]: time="2025-02-13T19:51:27.429126899Z" level=info msg="No cni config template is specified, wait for other system components to drop the config."
Feb 13 19:51:27.429634 kubelet[1788]: I0213 19:51:27.429446 1788 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.1.0/24"
Feb 13 19:51:27.489026 sudo[1660]: pam_unix(sudo:session): session closed for user root
Feb 13 19:51:27.490472 sshd[1659]: Connection closed by 10.0.0.1 port 52884
Feb 13 19:51:27.490868 sshd-session[1657]: pam_unix(sshd:session): session closed for user core
Feb 13 19:51:27.495356 systemd[1]: sshd@6-10.0.0.110:22-10.0.0.1:52884.service: Deactivated successfully.
Feb 13 19:51:27.497436 systemd[1]: session-7.scope: Deactivated successfully.
Feb 13 19:51:27.498077 systemd-logind[1466]: Session 7 logged out. Waiting for processes to exit.
Feb 13 19:51:27.499153 systemd-logind[1466]: Removed session 7.
Feb 13 19:51:28.203012 kubelet[1788]: E0213 19:51:28.202947 1788 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 19:51:28.203012 kubelet[1788]: I0213 19:51:28.202979 1788 apiserver.go:52] "Watching apiserver"
Feb 13 19:51:28.210414 kubelet[1788]: E0213 19:51:28.208222 1788 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-h8s4b" podUID="c3950c8a-700c-4c8b-8e8b-c3137c3cc22f"
Feb 13 19:51:28.219897 systemd[1]: Created slice kubepods-besteffort-pod6abf4b9e_9e1a_402b_bde8_3c9dbf9ed66d.slice - libcontainer container kubepods-besteffort-pod6abf4b9e_9e1a_402b_bde8_3c9dbf9ed66d.slice.
Feb 13 19:51:28.222536 kubelet[1788]: I0213 19:51:28.222492 1788 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world"
Feb 13 19:51:28.230715 systemd[1]: Created slice kubepods-besteffort-pod12d5d2dc_b6a2_4336_acfb_6c3fb069a505.slice - libcontainer container kubepods-besteffort-pod12d5d2dc_b6a2_4336_acfb_6c3fb069a505.slice.
Feb 13 19:51:28.234164 kubelet[1788]: I0213 19:51:28.233889 1788 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jc2k4\" (UniqueName: \"kubernetes.io/projected/12d5d2dc-b6a2-4336-acfb-6c3fb069a505-kube-api-access-jc2k4\") pod \"kube-proxy-l5bhd\" (UID: \"12d5d2dc-b6a2-4336-acfb-6c3fb069a505\") " pod="kube-system/kube-proxy-l5bhd"
Feb 13 19:51:28.234164 kubelet[1788]: I0213 19:51:28.233948 1788 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6abf4b9e-9e1a-402b-bde8-3c9dbf9ed66d-tigera-ca-bundle\") pod \"calico-node-vcj9q\" (UID: \"6abf4b9e-9e1a-402b-bde8-3c9dbf9ed66d\") " pod="calico-system/calico-node-vcj9q"
Feb 13 19:51:28.234164 kubelet[1788]: I0213 19:51:28.233980 1788 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/6abf4b9e-9e1a-402b-bde8-3c9dbf9ed66d-var-run-calico\") pod \"calico-node-vcj9q\" (UID: \"6abf4b9e-9e1a-402b-bde8-3c9dbf9ed66d\") " pod="calico-system/calico-node-vcj9q"
Feb 13 19:51:28.234164 kubelet[1788]: I0213 19:51:28.234005 1788 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/6abf4b9e-9e1a-402b-bde8-3c9dbf9ed66d-flexvol-driver-host\") pod \"calico-node-vcj9q\" (UID: \"6abf4b9e-9e1a-402b-bde8-3c9dbf9ed66d\") " pod="calico-system/calico-node-vcj9q"
Feb 13 19:51:28.234164 kubelet[1788]: I0213 19:51:28.234033 1788 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/c3950c8a-700c-4c8b-8e8b-c3137c3cc22f-kubelet-dir\") pod \"csi-node-driver-h8s4b\" (UID: \"c3950c8a-700c-4c8b-8e8b-c3137c3cc22f\") " pod="calico-system/csi-node-driver-h8s4b"
Feb 13 19:51:28.234513 kubelet[1788]: I0213 19:51:28.234057 1788 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/12d5d2dc-b6a2-4336-acfb-6c3fb069a505-xtables-lock\") pod \"kube-proxy-l5bhd\" (UID: \"12d5d2dc-b6a2-4336-acfb-6c3fb069a505\") " pod="kube-system/kube-proxy-l5bhd"
Feb 13 19:51:28.234513 kubelet[1788]: I0213 19:51:28.234082 1788 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/6abf4b9e-9e1a-402b-bde8-3c9dbf9ed66d-cni-bin-dir\") pod \"calico-node-vcj9q\" (UID: \"6abf4b9e-9e1a-402b-bde8-3c9dbf9ed66d\") " pod="calico-system/calico-node-vcj9q"
Feb 13 19:51:28.234513 kubelet[1788]: I0213 19:51:28.234107 1788 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/6abf4b9e-9e1a-402b-bde8-3c9dbf9ed66d-cni-net-dir\") pod \"calico-node-vcj9q\" (UID: \"6abf4b9e-9e1a-402b-bde8-3c9dbf9ed66d\") " pod="calico-system/calico-node-vcj9q"
Feb 13 19:51:28.234513 kubelet[1788]: I0213 19:51:28.234165 1788 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/6abf4b9e-9e1a-402b-bde8-3c9dbf9ed66d-cni-log-dir\") pod \"calico-node-vcj9q\" (UID: \"6abf4b9e-9e1a-402b-bde8-3c9dbf9ed66d\") " pod="calico-system/calico-node-vcj9q"
Feb 13 19:51:28.234513 kubelet[1788]: I0213 19:51:28.234231 1788 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/c3950c8a-700c-4c8b-8e8b-c3137c3cc22f-varrun\") pod \"csi-node-driver-h8s4b\" (UID: \"c3950c8a-700c-4c8b-8e8b-c3137c3cc22f\") " pod="calico-system/csi-node-driver-h8s4b"
Feb 13 19:51:28.234761 kubelet[1788]: I0213 19:51:28.234261 1788 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/c3950c8a-700c-4c8b-8e8b-c3137c3cc22f-registration-dir\") pod \"csi-node-driver-h8s4b\" (UID: \"c3950c8a-700c-4c8b-8e8b-c3137c3cc22f\") " pod="calico-system/csi-node-driver-h8s4b"
Feb 13 19:51:28.234761 kubelet[1788]: I0213 19:51:28.234287 1788 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/6abf4b9e-9e1a-402b-bde8-3c9dbf9ed66d-lib-modules\") pod \"calico-node-vcj9q\" (UID: \"6abf4b9e-9e1a-402b-bde8-3c9dbf9ed66d\") " pod="calico-system/calico-node-vcj9q"
Feb 13 19:51:28.234761 kubelet[1788]: I0213 19:51:28.234311 1788 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/6abf4b9e-9e1a-402b-bde8-3c9dbf9ed66d-policysync\") pod \"calico-node-vcj9q\" (UID: \"6abf4b9e-9e1a-402b-bde8-3c9dbf9ed66d\") " pod="calico-system/calico-node-vcj9q"
Feb 13 19:51:28.234761 kubelet[1788]: I0213 19:51:28.234346 1788 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/6abf4b9e-9e1a-402b-bde8-3c9dbf9ed66d-var-lib-calico\") pod \"calico-node-vcj9q\" (UID: \"6abf4b9e-9e1a-402b-bde8-3c9dbf9ed66d\") " pod="calico-system/calico-node-vcj9q"
Feb 13 19:51:28.234761 kubelet[1788]: I0213 19:51:28.234379 1788 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/c3950c8a-700c-4c8b-8e8b-c3137c3cc22f-socket-dir\") pod \"csi-node-driver-h8s4b\" (UID: \"c3950c8a-700c-4c8b-8e8b-c3137c3cc22f\") " pod="calico-system/csi-node-driver-h8s4b"
Feb 13 19:51:28.234916 kubelet[1788]: I0213 19:51:28.234416 1788 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/12d5d2dc-b6a2-4336-acfb-6c3fb069a505-kube-proxy\") pod \"kube-proxy-l5bhd\" (UID: \"12d5d2dc-b6a2-4336-acfb-6c3fb069a505\") " pod="kube-system/kube-proxy-l5bhd"
Feb 13 19:51:28.234916 kubelet[1788]: I0213 19:51:28.234438 1788 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/6abf4b9e-9e1a-402b-bde8-3c9dbf9ed66d-xtables-lock\") pod \"calico-node-vcj9q\" (UID: \"6abf4b9e-9e1a-402b-bde8-3c9dbf9ed66d\") " pod="calico-system/calico-node-vcj9q"
Feb 13 19:51:28.234916 kubelet[1788]: I0213 19:51:28.234533 1788 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/6abf4b9e-9e1a-402b-bde8-3c9dbf9ed66d-node-certs\") pod \"calico-node-vcj9q\" (UID: \"6abf4b9e-9e1a-402b-bde8-3c9dbf9ed66d\") " pod="calico-system/calico-node-vcj9q"
Feb 13 19:51:28.234916 kubelet[1788]: I0213 19:51:28.234612 1788 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5ddtr\" (UniqueName: \"kubernetes.io/projected/6abf4b9e-9e1a-402b-bde8-3c9dbf9ed66d-kube-api-access-5ddtr\") pod \"calico-node-vcj9q\" (UID: \"6abf4b9e-9e1a-402b-bde8-3c9dbf9ed66d\") " pod="calico-system/calico-node-vcj9q"
Feb 13 19:51:28.234916 kubelet[1788]: I0213 19:51:28.234651 1788 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sshdl\" (UniqueName: \"kubernetes.io/projected/c3950c8a-700c-4c8b-8e8b-c3137c3cc22f-kube-api-access-sshdl\") pod \"csi-node-driver-h8s4b\" (UID: \"c3950c8a-700c-4c8b-8e8b-c3137c3cc22f\") " pod="calico-system/csi-node-driver-h8s4b"
Feb 13 19:51:28.235065 kubelet[1788]: I0213 19:51:28.234677 1788 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/12d5d2dc-b6a2-4336-acfb-6c3fb069a505-lib-modules\") pod \"kube-proxy-l5bhd\" (UID: \"12d5d2dc-b6a2-4336-acfb-6c3fb069a505\") " pod="kube-system/kube-proxy-l5bhd"
Feb 13 19:51:28.341303 kubelet[1788]: E0213 19:51:28.341239 1788 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Feb 13 19:51:28.341303 kubelet[1788]: W0213 19:51:28.341276 1788 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Feb 13 19:51:28.341303 kubelet[1788]: E0213 19:51:28.341313 1788 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Feb 13 19:51:28.390942 kubelet[1788]: E0213 19:51:28.390885 1788 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Feb 13 19:51:28.390942 kubelet[1788]: W0213 19:51:28.390927 1788 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Feb 13 19:51:28.391134 kubelet[1788]: E0213 19:51:28.390966 1788 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Feb 13 19:51:28.391484 kubelet[1788]: E0213 19:51:28.391334 1788 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Feb 13 19:51:28.391484 kubelet[1788]: W0213 19:51:28.391372 1788 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Feb 13 19:51:28.391484 kubelet[1788]: E0213 19:51:28.391433 1788 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Feb 13 19:51:28.391799 kubelet[1788]: E0213 19:51:28.391777 1788 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Feb 13 19:51:28.391799 kubelet[1788]: W0213 19:51:28.391796 1788 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Feb 13 19:51:28.391873 kubelet[1788]: E0213 19:51:28.391814 1788 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Feb 13 19:51:28.527519 kubelet[1788]: E0213 19:51:28.527305 1788 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 13 19:51:28.528481 containerd[1485]: time="2025-02-13T19:51:28.528333371Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-vcj9q,Uid:6abf4b9e-9e1a-402b-bde8-3c9dbf9ed66d,Namespace:calico-system,Attempt:0,}"
Feb 13 19:51:28.535164 kubelet[1788]: E0213 19:51:28.535109 1788 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 13 19:51:28.535801 containerd[1485]: time="2025-02-13T19:51:28.535747282Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-l5bhd,Uid:12d5d2dc-b6a2-4336-acfb-6c3fb069a505,Namespace:kube-system,Attempt:0,}"
Feb 13 19:51:29.203879 kubelet[1788]: E0213 19:51:29.203806 1788 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 19:51:29.316262 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1423826645.mount: Deactivated successfully.
Feb 13 19:51:29.336176 containerd[1485]: time="2025-02-13T19:51:29.335970199Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Feb 13 19:51:29.338037 containerd[1485]: time="2025-02-13T19:51:29.337960391Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Feb 13 19:51:29.339286 containerd[1485]: time="2025-02-13T19:51:29.339213260Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0"
Feb 13 19:51:29.340678 containerd[1485]: time="2025-02-13T19:51:29.340580133Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=312056"
Feb 13 19:51:29.342769 containerd[1485]: time="2025-02-13T19:51:29.342698164Z" level=info msg="ImageCreate event name:\"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Feb 13 19:51:29.347047 containerd[1485]: time="2025-02-13T19:51:29.346977989Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Feb 13 19:51:29.348133 containerd[1485]: time="2025-02-13T19:51:29.348049298Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 819.461289ms"
Feb 13 19:51:29.351552 containerd[1485]: time="2025-02-13T19:51:29.351424426Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 815.573721ms"
Feb 13 19:51:29.553915 containerd[1485]: time="2025-02-13T19:51:29.553522029Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Feb 13 19:51:29.553915 containerd[1485]: time="2025-02-13T19:51:29.553608361Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Feb 13 19:51:29.553915 containerd[1485]: time="2025-02-13T19:51:29.553626795Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Feb 13 19:51:29.553915 containerd[1485]: time="2025-02-13T19:51:29.553746871Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Feb 13 19:51:29.555491 containerd[1485]: time="2025-02-13T19:51:29.552590943Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Feb 13 19:51:29.555491 containerd[1485]: time="2025-02-13T19:51:29.555147537Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Feb 13 19:51:29.555491 containerd[1485]: time="2025-02-13T19:51:29.555160722Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Feb 13 19:51:29.555491 containerd[1485]: time="2025-02-13T19:51:29.555247955Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Feb 13 19:51:29.673449 systemd[1]: Started cri-containerd-9ff27ff6f6a869c3d371390a7ee7f69b3e9b51a694a3af4ae2b9e25757dcbec2.scope - libcontainer container 9ff27ff6f6a869c3d371390a7ee7f69b3e9b51a694a3af4ae2b9e25757dcbec2.
Feb 13 19:51:29.681868 systemd[1]: Started cri-containerd-5d016089fe1aae3990e7a83081e130020776c6822d15a14a8d1393245754b0b8.scope - libcontainer container 5d016089fe1aae3990e7a83081e130020776c6822d15a14a8d1393245754b0b8.
Feb 13 19:51:29.722547 containerd[1485]: time="2025-02-13T19:51:29.722346765Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-l5bhd,Uid:12d5d2dc-b6a2-4336-acfb-6c3fb069a505,Namespace:kube-system,Attempt:0,} returns sandbox id \"5d016089fe1aae3990e7a83081e130020776c6822d15a14a8d1393245754b0b8\""
Feb 13 19:51:29.724088 kubelet[1788]: E0213 19:51:29.724059 1788 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 13 19:51:29.726508 containerd[1485]: time="2025-02-13T19:51:29.726467702Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.2\""
Feb 13 19:51:29.727292 containerd[1485]: time="2025-02-13T19:51:29.727253255Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-vcj9q,Uid:6abf4b9e-9e1a-402b-bde8-3c9dbf9ed66d,Namespace:calico-system,Attempt:0,} returns sandbox id \"9ff27ff6f6a869c3d371390a7ee7f69b3e9b51a694a3af4ae2b9e25757dcbec2\""
Feb 13 19:51:29.728004 kubelet[1788]: E0213 19:51:29.727981 1788 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 13 19:51:29.806536 kubelet[1788]: E0213 19:51:29.806260 1788 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-h8s4b" podUID="c3950c8a-700c-4c8b-8e8b-c3137c3cc22f"
Feb 13 19:51:30.204125 kubelet[1788]: E0213 19:51:30.203972 1788 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 19:51:31.175628 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3292700465.mount: Deactivated successfully.
Feb 13 19:51:31.204594 kubelet[1788]: E0213 19:51:31.204505 1788 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 19:51:31.806162 kubelet[1788]: E0213 19:51:31.805773 1788 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-h8s4b" podUID="c3950c8a-700c-4c8b-8e8b-c3137c3cc22f"
Feb 13 19:51:31.922997 containerd[1485]: time="2025-02-13T19:51:31.922915651Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.32.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Feb 13 19:51:31.923824 containerd[1485]: time="2025-02-13T19:51:31.923734827Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.32.2: active requests=0, bytes read=30908839"
Feb 13 19:51:31.926420 containerd[1485]: time="2025-02-13T19:51:31.925843732Z" level=info msg="ImageCreate event name:\"sha256:f1332858868e1c6a905123b21e2e322ab45a5b99a3532e68ff49a87c2266ebc5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Feb 13 19:51:31.931153 containerd[1485]: time="2025-02-13T19:51:31.931096270Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:83c025f0faa6799fab6645102a98138e39a9a7db2be3bc792c79d72659b1805d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Feb 13 19:51:31.931870 containerd[1485]: time="2025-02-13T19:51:31.931823373Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.32.2\" with image id \"sha256:f1332858868e1c6a905123b21e2e322ab45a5b99a3532e68ff49a87c2266ebc5\", repo tag \"registry.k8s.io/kube-proxy:v1.32.2\", repo digest \"registry.k8s.io/kube-proxy@sha256:83c025f0faa6799fab6645102a98138e39a9a7db2be3bc792c79d72659b1805d\", size \"30907858\" in 2.205288906s"
Feb 13 19:51:31.931904 containerd[1485]: time="2025-02-13T19:51:31.931866825Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.2\" returns image reference \"sha256:f1332858868e1c6a905123b21e2e322ab45a5b99a3532e68ff49a87c2266ebc5\""
Feb 13 19:51:31.933396 containerd[1485]: time="2025-02-13T19:51:31.933120465Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\""
Feb 13 19:51:31.935088 containerd[1485]: time="2025-02-13T19:51:31.935030998Z" level=info msg="CreateContainer within sandbox \"5d016089fe1aae3990e7a83081e130020776c6822d15a14a8d1393245754b0b8\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}"
Feb 13 19:51:31.959843 containerd[1485]: time="2025-02-13T19:51:31.959770224Z" level=info msg="CreateContainer within sandbox \"5d016089fe1aae3990e7a83081e130020776c6822d15a14a8d1393245754b0b8\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"e39fc23594467cbdb64831a102aea4cb734ef4b923935ed01c14afcae8321848\""
Feb 13 19:51:31.960920 containerd[1485]: time="2025-02-13T19:51:31.960887338Z" level=info msg="StartContainer for \"e39fc23594467cbdb64831a102aea4cb734ef4b923935ed01c14afcae8321848\""
Feb 13 19:51:31.999542 systemd[1]: Started cri-containerd-e39fc23594467cbdb64831a102aea4cb734ef4b923935ed01c14afcae8321848.scope - libcontainer container e39fc23594467cbdb64831a102aea4cb734ef4b923935ed01c14afcae8321848.
Feb 13 19:51:32.040219 containerd[1485]: time="2025-02-13T19:51:32.040161958Z" level=info msg="StartContainer for \"e39fc23594467cbdb64831a102aea4cb734ef4b923935ed01c14afcae8321848\" returns successfully" Feb 13 19:51:32.205207 kubelet[1788]: E0213 19:51:32.204993 1788 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 19:51:32.822688 kubelet[1788]: E0213 19:51:32.822635 1788 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:51:32.834654 kubelet[1788]: I0213 19:51:32.834542 1788 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-l5bhd" podStartSLOduration=4.627182682 podStartE2EDuration="6.834508227s" podCreationTimestamp="2025-02-13 19:51:26 +0000 UTC" firstStartedPulling="2025-02-13 19:51:29.725584977 +0000 UTC m=+3.853245856" lastFinishedPulling="2025-02-13 19:51:31.932910522 +0000 UTC m=+6.060571401" observedRunningTime="2025-02-13 19:51:32.834184691 +0000 UTC m=+6.961845590" watchObservedRunningTime="2025-02-13 19:51:32.834508227 +0000 UTC m=+6.962169116" Feb 13 19:51:32.852017 kubelet[1788]: E0213 19:51:32.851847 1788 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 19:51:32.852017 kubelet[1788]: W0213 19:51:32.851887 1788 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 19:51:32.852017 kubelet[1788]: E0213 19:51:32.851918 1788 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 13 19:51:32.852273 kubelet[1788]: E0213 19:51:32.852238 1788 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 19:51:32.852299 kubelet[1788]: W0213 19:51:32.852271 1788 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 19:51:32.852321 kubelet[1788]: E0213 19:51:32.852301 1788 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 19:51:32.852770 kubelet[1788]: E0213 19:51:32.852725 1788 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 19:51:32.852770 kubelet[1788]: W0213 19:51:32.852762 1788 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 19:51:32.852846 kubelet[1788]: E0213 19:51:32.852776 1788 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 13 19:51:32.853124 kubelet[1788]: E0213 19:51:32.853110 1788 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 19:51:32.853167 kubelet[1788]: W0213 19:51:32.853120 1788 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 19:51:32.853167 kubelet[1788]: E0213 19:51:32.853147 1788 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 19:51:32.853443 kubelet[1788]: E0213 19:51:32.853421 1788 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 19:51:32.853443 kubelet[1788]: W0213 19:51:32.853442 1788 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 19:51:32.853518 kubelet[1788]: E0213 19:51:32.853459 1788 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 13 19:51:32.853694 kubelet[1788]: E0213 19:51:32.853680 1788 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 19:51:32.853694 kubelet[1788]: W0213 19:51:32.853692 1788 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 19:51:32.853765 kubelet[1788]: E0213 19:51:32.853701 1788 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 19:51:32.853918 kubelet[1788]: E0213 19:51:32.853906 1788 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 19:51:32.853918 kubelet[1788]: W0213 19:51:32.853915 1788 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 19:51:32.853985 kubelet[1788]: E0213 19:51:32.853926 1788 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 13 19:51:32.854132 kubelet[1788]: E0213 19:51:32.854118 1788 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 19:51:32.854132 kubelet[1788]: W0213 19:51:32.854128 1788 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 19:51:32.854195 kubelet[1788]: E0213 19:51:32.854136 1788 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 19:51:32.854326 kubelet[1788]: E0213 19:51:32.854312 1788 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 19:51:32.854326 kubelet[1788]: W0213 19:51:32.854324 1788 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 19:51:32.854411 kubelet[1788]: E0213 19:51:32.854334 1788 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 13 19:51:32.854562 kubelet[1788]: E0213 19:51:32.854548 1788 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 19:51:32.854562 kubelet[1788]: W0213 19:51:32.854558 1788 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 19:51:32.854632 kubelet[1788]: E0213 19:51:32.854566 1788 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 19:51:32.854764 kubelet[1788]: E0213 19:51:32.854752 1788 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 19:51:32.854764 kubelet[1788]: W0213 19:51:32.854760 1788 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 19:51:32.854831 kubelet[1788]: E0213 19:51:32.854768 1788 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 13 19:51:32.854945 kubelet[1788]: E0213 19:51:32.854932 1788 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 19:51:32.854945 kubelet[1788]: W0213 19:51:32.854941 1788 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 19:51:32.855008 kubelet[1788]: E0213 19:51:32.854948 1788 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 19:51:32.855145 kubelet[1788]: E0213 19:51:32.855131 1788 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 19:51:32.855145 kubelet[1788]: W0213 19:51:32.855143 1788 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 19:51:32.855217 kubelet[1788]: E0213 19:51:32.855151 1788 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 13 19:51:32.855337 kubelet[1788]: E0213 19:51:32.855325 1788 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 19:51:32.855337 kubelet[1788]: W0213 19:51:32.855333 1788 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 19:51:32.855423 kubelet[1788]: E0213 19:51:32.855341 1788 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 19:51:32.855543 kubelet[1788]: E0213 19:51:32.855531 1788 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 19:51:32.855543 kubelet[1788]: W0213 19:51:32.855539 1788 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 19:51:32.855609 kubelet[1788]: E0213 19:51:32.855546 1788 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 13 19:51:32.855734 kubelet[1788]: E0213 19:51:32.855720 1788 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 19:51:32.855734 kubelet[1788]: W0213 19:51:32.855732 1788 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 19:51:32.855791 kubelet[1788]: E0213 19:51:32.855741 1788 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 19:51:32.855946 kubelet[1788]: E0213 19:51:32.855933 1788 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 19:51:32.855946 kubelet[1788]: W0213 19:51:32.855943 1788 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 19:51:32.856013 kubelet[1788]: E0213 19:51:32.855951 1788 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 13 19:51:32.856131 kubelet[1788]: E0213 19:51:32.856124 1788 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 19:51:32.856166 kubelet[1788]: W0213 19:51:32.856131 1788 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 19:51:32.856166 kubelet[1788]: E0213 19:51:32.856139 1788 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 19:51:32.856319 kubelet[1788]: E0213 19:51:32.856306 1788 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 19:51:32.856319 kubelet[1788]: W0213 19:51:32.856315 1788 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 19:51:32.856412 kubelet[1788]: E0213 19:51:32.856321 1788 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 13 19:51:32.856664 kubelet[1788]: E0213 19:51:32.856645 1788 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 19:51:32.856664 kubelet[1788]: W0213 19:51:32.856657 1788 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 19:51:32.856712 kubelet[1788]: E0213 19:51:32.856665 1788 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 19:51:32.871785 kubelet[1788]: E0213 19:51:32.871746 1788 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 19:51:32.871785 kubelet[1788]: W0213 19:51:32.871772 1788 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 19:51:32.871917 kubelet[1788]: E0213 19:51:32.871798 1788 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 13 19:51:32.872086 kubelet[1788]: E0213 19:51:32.872059 1788 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 19:51:32.872086 kubelet[1788]: W0213 19:51:32.872072 1788 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 19:51:32.872086 kubelet[1788]: E0213 19:51:32.872087 1788 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 19:51:32.872405 kubelet[1788]: E0213 19:51:32.872352 1788 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 19:51:32.872405 kubelet[1788]: W0213 19:51:32.872374 1788 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 19:51:32.872470 kubelet[1788]: E0213 19:51:32.872438 1788 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 13 19:51:32.872704 kubelet[1788]: E0213 19:51:32.872671 1788 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 19:51:32.872704 kubelet[1788]: W0213 19:51:32.872701 1788 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 19:51:32.872781 kubelet[1788]: E0213 19:51:32.872720 1788 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 19:51:32.872951 kubelet[1788]: E0213 19:51:32.872932 1788 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 19:51:32.872951 kubelet[1788]: W0213 19:51:32.872942 1788 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 19:51:32.873015 kubelet[1788]: E0213 19:51:32.872955 1788 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 13 19:51:32.873201 kubelet[1788]: E0213 19:51:32.873185 1788 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 19:51:32.873201 kubelet[1788]: W0213 19:51:32.873198 1788 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 19:51:32.873252 kubelet[1788]: E0213 19:51:32.873213 1788 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 19:51:32.873450 kubelet[1788]: E0213 19:51:32.873435 1788 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 19:51:32.873450 kubelet[1788]: W0213 19:51:32.873446 1788 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 19:51:32.873519 kubelet[1788]: E0213 19:51:32.873460 1788 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 13 19:51:32.873668 kubelet[1788]: E0213 19:51:32.873650 1788 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 19:51:32.873668 kubelet[1788]: W0213 19:51:32.873660 1788 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 19:51:32.873732 kubelet[1788]: E0213 19:51:32.873673 1788 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 19:51:32.873968 kubelet[1788]: E0213 19:51:32.873919 1788 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 19:51:32.873968 kubelet[1788]: W0213 19:51:32.873939 1788 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 19:51:32.873968 kubelet[1788]: E0213 19:51:32.873956 1788 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 13 19:51:32.874269 kubelet[1788]: E0213 19:51:32.874253 1788 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 19:51:32.874269 kubelet[1788]: W0213 19:51:32.874266 1788 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 19:51:32.874312 kubelet[1788]: E0213 19:51:32.874281 1788 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 19:51:32.874517 kubelet[1788]: E0213 19:51:32.874505 1788 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 19:51:32.874517 kubelet[1788]: W0213 19:51:32.874516 1788 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 19:51:32.874573 kubelet[1788]: E0213 19:51:32.874529 1788 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 13 19:51:32.874772 kubelet[1788]: E0213 19:51:32.874759 1788 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 19:51:32.874772 kubelet[1788]: W0213 19:51:32.874770 1788 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 19:51:32.874824 kubelet[1788]: E0213 19:51:32.874778 1788 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 19:51:33.206337 kubelet[1788]: E0213 19:51:33.206178 1788 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 19:51:33.536331 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2137094326.mount: Deactivated successfully. 
Feb 13 19:51:33.617673 containerd[1485]: time="2025-02-13T19:51:33.617606106Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:51:33.618483 containerd[1485]: time="2025-02-13T19:51:33.618392400Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1: active requests=0, bytes read=6855343" Feb 13 19:51:33.619820 containerd[1485]: time="2025-02-13T19:51:33.619679814Z" level=info msg="ImageCreate event name:\"sha256:2b7452b763ec8833ca0386ada5fd066e552a9b3b02b8538a5e34cc3d6d3840a6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:51:33.622066 containerd[1485]: time="2025-02-13T19:51:33.622024210Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:a63f8b4ff531912d12d143664eb263fdbc6cd7b3ff4aa777dfb6e318a090462c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:51:33.623192 containerd[1485]: time="2025-02-13T19:51:33.622858323Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\" with image id \"sha256:2b7452b763ec8833ca0386ada5fd066e552a9b3b02b8538a5e34cc3d6d3840a6\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:a63f8b4ff531912d12d143664eb263fdbc6cd7b3ff4aa777dfb6e318a090462c\", size \"6855165\" in 1.689696911s" Feb 13 19:51:33.623192 containerd[1485]: time="2025-02-13T19:51:33.622896295Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\" returns image reference \"sha256:2b7452b763ec8833ca0386ada5fd066e552a9b3b02b8538a5e34cc3d6d3840a6\"" Feb 13 19:51:33.625464 containerd[1485]: time="2025-02-13T19:51:33.625419746Z" level=info msg="CreateContainer within sandbox \"9ff27ff6f6a869c3d371390a7ee7f69b3e9b51a694a3af4ae2b9e25757dcbec2\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}" Feb 13 
19:51:33.644736 containerd[1485]: time="2025-02-13T19:51:33.644660022Z" level=info msg="CreateContainer within sandbox \"9ff27ff6f6a869c3d371390a7ee7f69b3e9b51a694a3af4ae2b9e25757dcbec2\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"a13714c357d7f80e7374bd1a952bfdbeeee759befc6f482d5215108d2d201379\"" Feb 13 19:51:33.645487 containerd[1485]: time="2025-02-13T19:51:33.645429014Z" level=info msg="StartContainer for \"a13714c357d7f80e7374bd1a952bfdbeeee759befc6f482d5215108d2d201379\"" Feb 13 19:51:33.684647 systemd[1]: Started cri-containerd-a13714c357d7f80e7374bd1a952bfdbeeee759befc6f482d5215108d2d201379.scope - libcontainer container a13714c357d7f80e7374bd1a952bfdbeeee759befc6f482d5215108d2d201379. Feb 13 19:51:33.751471 systemd[1]: cri-containerd-a13714c357d7f80e7374bd1a952bfdbeeee759befc6f482d5215108d2d201379.scope: Deactivated successfully. Feb 13 19:51:33.806895 kubelet[1788]: E0213 19:51:33.806712 1788 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-h8s4b" podUID="c3950c8a-700c-4c8b-8e8b-c3137c3cc22f" Feb 13 19:51:33.822045 containerd[1485]: time="2025-02-13T19:51:33.821991889Z" level=info msg="StartContainer for \"a13714c357d7f80e7374bd1a952bfdbeeee759befc6f482d5215108d2d201379\" returns successfully" Feb 13 19:51:33.824707 kubelet[1788]: E0213 19:51:33.824679 1788 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:51:33.824842 kubelet[1788]: E0213 19:51:33.824721 1788 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:51:34.204947 containerd[1485]: 
time="2025-02-13T19:51:34.204757184Z" level=info msg="shim disconnected" id=a13714c357d7f80e7374bd1a952bfdbeeee759befc6f482d5215108d2d201379 namespace=k8s.io Feb 13 19:51:34.204947 containerd[1485]: time="2025-02-13T19:51:34.204822156Z" level=warning msg="cleaning up after shim disconnected" id=a13714c357d7f80e7374bd1a952bfdbeeee759befc6f482d5215108d2d201379 namespace=k8s.io Feb 13 19:51:34.204947 containerd[1485]: time="2025-02-13T19:51:34.204831413Z" level=info msg="cleaning up dead shim" namespace=k8s.io Feb 13 19:51:34.207204 kubelet[1788]: E0213 19:51:34.207132 1788 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 19:51:34.514456 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-a13714c357d7f80e7374bd1a952bfdbeeee759befc6f482d5215108d2d201379-rootfs.mount: Deactivated successfully. Feb 13 19:51:34.827727 kubelet[1788]: E0213 19:51:34.827584 1788 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:51:34.828483 containerd[1485]: time="2025-02-13T19:51:34.828435560Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.29.1\"" Feb 13 19:51:35.208514 kubelet[1788]: E0213 19:51:35.208288 1788 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 19:51:35.806797 kubelet[1788]: E0213 19:51:35.806737 1788 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-h8s4b" podUID="c3950c8a-700c-4c8b-8e8b-c3137c3cc22f" Feb 13 19:51:36.209685 kubelet[1788]: E0213 19:51:36.209521 1788 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" 
path="/etc/kubernetes/manifests" Feb 13 19:51:37.210378 kubelet[1788]: E0213 19:51:37.210306 1788 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 19:51:37.805982 kubelet[1788]: E0213 19:51:37.805873 1788 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-h8s4b" podUID="c3950c8a-700c-4c8b-8e8b-c3137c3cc22f" Feb 13 19:51:38.210838 kubelet[1788]: E0213 19:51:38.210678 1788 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 19:51:38.287210 containerd[1485]: time="2025-02-13T19:51:38.287134470Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:51:38.292371 containerd[1485]: time="2025-02-13T19:51:38.292276511Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.29.1: active requests=0, bytes read=96154154" Feb 13 19:51:38.294050 containerd[1485]: time="2025-02-13T19:51:38.294010422Z" level=info msg="ImageCreate event name:\"sha256:7dd6ea186aba0d7a1791a79d426fe854527ca95192b26bbd19e8baf8373f7d0e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:51:38.297444 containerd[1485]: time="2025-02-13T19:51:38.297368910Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:21e759d51c90dfb34fc1397dc180dd3a3fb564c2b0580d2f61ffe108f2a3c94b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:51:38.298127 containerd[1485]: time="2025-02-13T19:51:38.298080865Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.29.1\" with image id \"sha256:7dd6ea186aba0d7a1791a79d426fe854527ca95192b26bbd19e8baf8373f7d0e\", repo tag 
\"ghcr.io/flatcar/calico/cni:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:21e759d51c90dfb34fc1397dc180dd3a3fb564c2b0580d2f61ffe108f2a3c94b\", size \"97647238\" in 3.469599328s" Feb 13 19:51:38.298173 containerd[1485]: time="2025-02-13T19:51:38.298122603Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.29.1\" returns image reference \"sha256:7dd6ea186aba0d7a1791a79d426fe854527ca95192b26bbd19e8baf8373f7d0e\"" Feb 13 19:51:38.300750 containerd[1485]: time="2025-02-13T19:51:38.300714943Z" level=info msg="CreateContainer within sandbox \"9ff27ff6f6a869c3d371390a7ee7f69b3e9b51a694a3af4ae2b9e25757dcbec2\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" Feb 13 19:51:38.335312 containerd[1485]: time="2025-02-13T19:51:38.335211565Z" level=info msg="CreateContainer within sandbox \"9ff27ff6f6a869c3d371390a7ee7f69b3e9b51a694a3af4ae2b9e25757dcbec2\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"c607f322b96a46bd82960629d4440e2b8599423b3f5218290d868cef995e41fe\"" Feb 13 19:51:38.336077 containerd[1485]: time="2025-02-13T19:51:38.336029739Z" level=info msg="StartContainer for \"c607f322b96a46bd82960629d4440e2b8599423b3f5218290d868cef995e41fe\"" Feb 13 19:51:38.367604 systemd[1]: Started cri-containerd-c607f322b96a46bd82960629d4440e2b8599423b3f5218290d868cef995e41fe.scope - libcontainer container c607f322b96a46bd82960629d4440e2b8599423b3f5218290d868cef995e41fe. 
Feb 13 19:51:38.401634 containerd[1485]: time="2025-02-13T19:51:38.401440854Z" level=info msg="StartContainer for \"c607f322b96a46bd82960629d4440e2b8599423b3f5218290d868cef995e41fe\" returns successfully" Feb 13 19:51:38.835815 kubelet[1788]: E0213 19:51:38.835758 1788 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:51:39.211658 kubelet[1788]: E0213 19:51:39.211490 1788 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 19:51:39.806750 kubelet[1788]: E0213 19:51:39.806661 1788 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-h8s4b" podUID="c3950c8a-700c-4c8b-8e8b-c3137c3cc22f" Feb 13 19:51:39.837719 kubelet[1788]: E0213 19:51:39.837629 1788 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:51:40.168481 systemd[1]: cri-containerd-c607f322b96a46bd82960629d4440e2b8599423b3f5218290d868cef995e41fe.scope: Deactivated successfully. Feb 13 19:51:40.192865 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-c607f322b96a46bd82960629d4440e2b8599423b3f5218290d868cef995e41fe-rootfs.mount: Deactivated successfully. 
Feb 13 19:51:40.212123 kubelet[1788]: E0213 19:51:40.212050 1788 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 19:51:40.252687 kubelet[1788]: I0213 19:51:40.247990 1788 kubelet_node_status.go:502] "Fast updating node status as it just became ready" Feb 13 19:51:40.426464 containerd[1485]: time="2025-02-13T19:51:40.426218572Z" level=info msg="shim disconnected" id=c607f322b96a46bd82960629d4440e2b8599423b3f5218290d868cef995e41fe namespace=k8s.io Feb 13 19:51:40.426464 containerd[1485]: time="2025-02-13T19:51:40.426293162Z" level=warning msg="cleaning up after shim disconnected" id=c607f322b96a46bd82960629d4440e2b8599423b3f5218290d868cef995e41fe namespace=k8s.io Feb 13 19:51:40.426464 containerd[1485]: time="2025-02-13T19:51:40.426307609Z" level=info msg="cleaning up dead shim" namespace=k8s.io Feb 13 19:51:40.841231 kubelet[1788]: E0213 19:51:40.841073 1788 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:51:40.841708 containerd[1485]: time="2025-02-13T19:51:40.841652601Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.29.1\"" Feb 13 19:51:41.212521 kubelet[1788]: E0213 19:51:41.212436 1788 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 19:51:41.812698 systemd[1]: Created slice kubepods-besteffort-podc3950c8a_700c_4c8b_8e8b_c3137c3cc22f.slice - libcontainer container kubepods-besteffort-podc3950c8a_700c_4c8b_8e8b_c3137c3cc22f.slice. 
Feb 13 19:51:41.815692 containerd[1485]: time="2025-02-13T19:51:41.815644445Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-h8s4b,Uid:c3950c8a-700c-4c8b-8e8b-c3137c3cc22f,Namespace:calico-system,Attempt:0,}" Feb 13 19:51:41.889492 containerd[1485]: time="2025-02-13T19:51:41.889416818Z" level=error msg="Failed to destroy network for sandbox \"15a61bbbb7c84cd96aea0b67315d5924e3d6e98bb1c898830f11866a55454496\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:51:41.889977 containerd[1485]: time="2025-02-13T19:51:41.889934930Z" level=error msg="encountered an error cleaning up failed sandbox \"15a61bbbb7c84cd96aea0b67315d5924e3d6e98bb1c898830f11866a55454496\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:51:41.890066 containerd[1485]: time="2025-02-13T19:51:41.890027253Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-h8s4b,Uid:c3950c8a-700c-4c8b-8e8b-c3137c3cc22f,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"15a61bbbb7c84cd96aea0b67315d5924e3d6e98bb1c898830f11866a55454496\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:51:41.890408 kubelet[1788]: E0213 19:51:41.890328 1788 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"15a61bbbb7c84cd96aea0b67315d5924e3d6e98bb1c898830f11866a55454496\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the 
calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:51:41.890562 kubelet[1788]: E0213 19:51:41.890432 1788 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"15a61bbbb7c84cd96aea0b67315d5924e3d6e98bb1c898830f11866a55454496\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-h8s4b" Feb 13 19:51:41.890562 kubelet[1788]: E0213 19:51:41.890468 1788 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"15a61bbbb7c84cd96aea0b67315d5924e3d6e98bb1c898830f11866a55454496\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-h8s4b" Feb 13 19:51:41.890562 kubelet[1788]: E0213 19:51:41.890522 1788 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-h8s4b_calico-system(c3950c8a-700c-4c8b-8e8b-c3137c3cc22f)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-h8s4b_calico-system(c3950c8a-700c-4c8b-8e8b-c3137c3cc22f)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"15a61bbbb7c84cd96aea0b67315d5924e3d6e98bb1c898830f11866a55454496\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-h8s4b" podUID="c3950c8a-700c-4c8b-8e8b-c3137c3cc22f" Feb 13 19:51:41.891637 systemd[1]: 
run-containerd-io.containerd.grpc.v1.cri-sandboxes-15a61bbbb7c84cd96aea0b67315d5924e3d6e98bb1c898830f11866a55454496-shm.mount: Deactivated successfully. Feb 13 19:51:42.212823 kubelet[1788]: E0213 19:51:42.212750 1788 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 19:51:42.845609 kubelet[1788]: I0213 19:51:42.845566 1788 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="15a61bbbb7c84cd96aea0b67315d5924e3d6e98bb1c898830f11866a55454496" Feb 13 19:51:42.846427 containerd[1485]: time="2025-02-13T19:51:42.846369404Z" level=info msg="StopPodSandbox for \"15a61bbbb7c84cd96aea0b67315d5924e3d6e98bb1c898830f11866a55454496\"" Feb 13 19:51:42.846815 containerd[1485]: time="2025-02-13T19:51:42.846688252Z" level=info msg="Ensure that sandbox 15a61bbbb7c84cd96aea0b67315d5924e3d6e98bb1c898830f11866a55454496 in task-service has been cleanup successfully" Feb 13 19:51:42.846949 containerd[1485]: time="2025-02-13T19:51:42.846927310Z" level=info msg="TearDown network for sandbox \"15a61bbbb7c84cd96aea0b67315d5924e3d6e98bb1c898830f11866a55454496\" successfully" Feb 13 19:51:42.846982 containerd[1485]: time="2025-02-13T19:51:42.846946647Z" level=info msg="StopPodSandbox for \"15a61bbbb7c84cd96aea0b67315d5924e3d6e98bb1c898830f11866a55454496\" returns successfully" Feb 13 19:51:42.847523 containerd[1485]: time="2025-02-13T19:51:42.847494624Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-h8s4b,Uid:c3950c8a-700c-4c8b-8e8b-c3137c3cc22f,Namespace:calico-system,Attempt:1,}" Feb 13 19:51:42.848656 systemd[1]: run-netns-cni\x2dd921695a\x2dd744\x2d6f48\x2d8014\x2d49a7a095a6f9.mount: Deactivated successfully. 
Feb 13 19:51:42.930118 containerd[1485]: time="2025-02-13T19:51:42.929982198Z" level=error msg="Failed to destroy network for sandbox \"34f8d7da5d1e12bf2375d0e38e10b5cf15cf279f133a34314ffc111dc637cefc\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:51:42.930570 containerd[1485]: time="2025-02-13T19:51:42.930520347Z" level=error msg="encountered an error cleaning up failed sandbox \"34f8d7da5d1e12bf2375d0e38e10b5cf15cf279f133a34314ffc111dc637cefc\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:51:42.930639 containerd[1485]: time="2025-02-13T19:51:42.930613191Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-h8s4b,Uid:c3950c8a-700c-4c8b-8e8b-c3137c3cc22f,Namespace:calico-system,Attempt:1,} failed, error" error="failed to setup network for sandbox \"34f8d7da5d1e12bf2375d0e38e10b5cf15cf279f133a34314ffc111dc637cefc\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:51:42.931295 kubelet[1788]: E0213 19:51:42.930888 1788 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"34f8d7da5d1e12bf2375d0e38e10b5cf15cf279f133a34314ffc111dc637cefc\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:51:42.931295 kubelet[1788]: E0213 19:51:42.931009 1788 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup 
network for sandbox \"34f8d7da5d1e12bf2375d0e38e10b5cf15cf279f133a34314ffc111dc637cefc\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-h8s4b" Feb 13 19:51:42.931295 kubelet[1788]: E0213 19:51:42.931033 1788 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"34f8d7da5d1e12bf2375d0e38e10b5cf15cf279f133a34314ffc111dc637cefc\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-h8s4b" Feb 13 19:51:42.931532 kubelet[1788]: E0213 19:51:42.931084 1788 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-h8s4b_calico-system(c3950c8a-700c-4c8b-8e8b-c3137c3cc22f)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-h8s4b_calico-system(c3950c8a-700c-4c8b-8e8b-c3137c3cc22f)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"34f8d7da5d1e12bf2375d0e38e10b5cf15cf279f133a34314ffc111dc637cefc\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-h8s4b" podUID="c3950c8a-700c-4c8b-8e8b-c3137c3cc22f" Feb 13 19:51:42.932209 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-34f8d7da5d1e12bf2375d0e38e10b5cf15cf279f133a34314ffc111dc637cefc-shm.mount: Deactivated successfully. 
Feb 13 19:51:43.212995 kubelet[1788]: E0213 19:51:43.212945 1788 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 19:51:43.848768 kubelet[1788]: I0213 19:51:43.848737 1788 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="34f8d7da5d1e12bf2375d0e38e10b5cf15cf279f133a34314ffc111dc637cefc" Feb 13 19:51:43.849436 containerd[1485]: time="2025-02-13T19:51:43.849372194Z" level=info msg="StopPodSandbox for \"34f8d7da5d1e12bf2375d0e38e10b5cf15cf279f133a34314ffc111dc637cefc\"" Feb 13 19:51:43.849775 containerd[1485]: time="2025-02-13T19:51:43.849690020Z" level=info msg="Ensure that sandbox 34f8d7da5d1e12bf2375d0e38e10b5cf15cf279f133a34314ffc111dc637cefc in task-service has been cleanup successfully" Feb 13 19:51:43.849974 containerd[1485]: time="2025-02-13T19:51:43.849940800Z" level=info msg="TearDown network for sandbox \"34f8d7da5d1e12bf2375d0e38e10b5cf15cf279f133a34314ffc111dc637cefc\" successfully" Feb 13 19:51:43.849974 containerd[1485]: time="2025-02-13T19:51:43.849966178Z" level=info msg="StopPodSandbox for \"34f8d7da5d1e12bf2375d0e38e10b5cf15cf279f133a34314ffc111dc637cefc\" returns successfully" Feb 13 19:51:43.850419 containerd[1485]: time="2025-02-13T19:51:43.850350779Z" level=info msg="StopPodSandbox for \"15a61bbbb7c84cd96aea0b67315d5924e3d6e98bb1c898830f11866a55454496\"" Feb 13 19:51:43.850588 containerd[1485]: time="2025-02-13T19:51:43.850497805Z" level=info msg="TearDown network for sandbox \"15a61bbbb7c84cd96aea0b67315d5924e3d6e98bb1c898830f11866a55454496\" successfully" Feb 13 19:51:43.850588 containerd[1485]: time="2025-02-13T19:51:43.850514947Z" level=info msg="StopPodSandbox for \"15a61bbbb7c84cd96aea0b67315d5924e3d6e98bb1c898830f11866a55454496\" returns successfully" Feb 13 19:51:43.851002 containerd[1485]: time="2025-02-13T19:51:43.850960082Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:csi-node-driver-h8s4b,Uid:c3950c8a-700c-4c8b-8e8b-c3137c3cc22f,Namespace:calico-system,Attempt:2,}" Feb 13 19:51:43.851411 systemd[1]: run-netns-cni\x2d38dac3dc\x2d9f6b\x2d07b2\x2dbad7\x2d7f774a8bc1b3.mount: Deactivated successfully. Feb 13 19:51:44.168346 systemd[1]: Created slice kubepods-besteffort-pod56474fd3_a840_47fd_8f80_78f96c78e294.slice - libcontainer container kubepods-besteffort-pod56474fd3_a840_47fd_8f80_78f96c78e294.slice. Feb 13 19:51:44.213772 kubelet[1788]: E0213 19:51:44.213698 1788 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 19:51:44.258524 kubelet[1788]: I0213 19:51:44.258477 1788 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pcqxh\" (UniqueName: \"kubernetes.io/projected/56474fd3-a840-47fd-8f80-78f96c78e294-kube-api-access-pcqxh\") pod \"nginx-deployment-7fcdb87857-gmvsz\" (UID: \"56474fd3-a840-47fd-8f80-78f96c78e294\") " pod="default/nginx-deployment-7fcdb87857-gmvsz" Feb 13 19:51:44.374650 containerd[1485]: time="2025-02-13T19:51:44.374584237Z" level=error msg="Failed to destroy network for sandbox \"431776e600b622e8bb1b88d9c1295acbd9fdd48bd3d34a8d59948cee810d5eb9\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:51:44.375138 containerd[1485]: time="2025-02-13T19:51:44.375111716Z" level=error msg="encountered an error cleaning up failed sandbox \"431776e600b622e8bb1b88d9c1295acbd9fdd48bd3d34a8d59948cee810d5eb9\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:51:44.375217 containerd[1485]: time="2025-02-13T19:51:44.375179072Z" level=error 
msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-h8s4b,Uid:c3950c8a-700c-4c8b-8e8b-c3137c3cc22f,Namespace:calico-system,Attempt:2,} failed, error" error="failed to setup network for sandbox \"431776e600b622e8bb1b88d9c1295acbd9fdd48bd3d34a8d59948cee810d5eb9\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:51:44.375513 kubelet[1788]: E0213 19:51:44.375466 1788 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"431776e600b622e8bb1b88d9c1295acbd9fdd48bd3d34a8d59948cee810d5eb9\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:51:44.375615 kubelet[1788]: E0213 19:51:44.375551 1788 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"431776e600b622e8bb1b88d9c1295acbd9fdd48bd3d34a8d59948cee810d5eb9\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-h8s4b" Feb 13 19:51:44.375615 kubelet[1788]: E0213 19:51:44.375580 1788 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"431776e600b622e8bb1b88d9c1295acbd9fdd48bd3d34a8d59948cee810d5eb9\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-h8s4b" Feb 13 19:51:44.375695 kubelet[1788]: E0213 19:51:44.375648 1788 pod_workers.go:1301] "Error syncing pod, skipping" 
err="failed to \"CreatePodSandbox\" for \"csi-node-driver-h8s4b_calico-system(c3950c8a-700c-4c8b-8e8b-c3137c3cc22f)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-h8s4b_calico-system(c3950c8a-700c-4c8b-8e8b-c3137c3cc22f)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"431776e600b622e8bb1b88d9c1295acbd9fdd48bd3d34a8d59948cee810d5eb9\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-h8s4b" podUID="c3950c8a-700c-4c8b-8e8b-c3137c3cc22f" Feb 13 19:51:44.473126 containerd[1485]: time="2025-02-13T19:51:44.472420099Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-7fcdb87857-gmvsz,Uid:56474fd3-a840-47fd-8f80-78f96c78e294,Namespace:default,Attempt:0,}" Feb 13 19:51:44.684830 containerd[1485]: time="2025-02-13T19:51:44.684772369Z" level=error msg="Failed to destroy network for sandbox \"6b931b8cd2a0f8a1525e7aa3a0e7f029d4cb8ca800708b05d12333e860f930bd\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:51:44.685252 containerd[1485]: time="2025-02-13T19:51:44.685206644Z" level=error msg="encountered an error cleaning up failed sandbox \"6b931b8cd2a0f8a1525e7aa3a0e7f029d4cb8ca800708b05d12333e860f930bd\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:51:44.685307 containerd[1485]: time="2025-02-13T19:51:44.685274270Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-7fcdb87857-gmvsz,Uid:56474fd3-a840-47fd-8f80-78f96c78e294,Namespace:default,Attempt:0,} 
failed, error" error="failed to setup network for sandbox \"6b931b8cd2a0f8a1525e7aa3a0e7f029d4cb8ca800708b05d12333e860f930bd\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:51:44.685611 kubelet[1788]: E0213 19:51:44.685541 1788 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6b931b8cd2a0f8a1525e7aa3a0e7f029d4cb8ca800708b05d12333e860f930bd\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:51:44.685666 kubelet[1788]: E0213 19:51:44.685635 1788 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6b931b8cd2a0f8a1525e7aa3a0e7f029d4cb8ca800708b05d12333e860f930bd\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="default/nginx-deployment-7fcdb87857-gmvsz" Feb 13 19:51:44.685712 kubelet[1788]: E0213 19:51:44.685660 1788 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6b931b8cd2a0f8a1525e7aa3a0e7f029d4cb8ca800708b05d12333e860f930bd\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="default/nginx-deployment-7fcdb87857-gmvsz" Feb 13 19:51:44.685747 kubelet[1788]: E0213 19:51:44.685712 1788 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"nginx-deployment-7fcdb87857-gmvsz_default(56474fd3-a840-47fd-8f80-78f96c78e294)\" with 
CreatePodSandboxError: \"Failed to create sandbox for pod \\\"nginx-deployment-7fcdb87857-gmvsz_default(56474fd3-a840-47fd-8f80-78f96c78e294)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"6b931b8cd2a0f8a1525e7aa3a0e7f029d4cb8ca800708b05d12333e860f930bd\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="default/nginx-deployment-7fcdb87857-gmvsz" podUID="56474fd3-a840-47fd-8f80-78f96c78e294" Feb 13 19:51:44.853463 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-431776e600b622e8bb1b88d9c1295acbd9fdd48bd3d34a8d59948cee810d5eb9-shm.mount: Deactivated successfully. Feb 13 19:51:44.854988 kubelet[1788]: I0213 19:51:44.854957 1788 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="431776e600b622e8bb1b88d9c1295acbd9fdd48bd3d34a8d59948cee810d5eb9" Feb 13 19:51:44.856019 containerd[1485]: time="2025-02-13T19:51:44.855935480Z" level=info msg="StopPodSandbox for \"431776e600b622e8bb1b88d9c1295acbd9fdd48bd3d34a8d59948cee810d5eb9\"" Feb 13 19:51:44.856366 containerd[1485]: time="2025-02-13T19:51:44.856277642Z" level=info msg="Ensure that sandbox 431776e600b622e8bb1b88d9c1295acbd9fdd48bd3d34a8d59948cee810d5eb9 in task-service has been cleanup successfully" Feb 13 19:51:44.858097 containerd[1485]: time="2025-02-13T19:51:44.856494418Z" level=info msg="TearDown network for sandbox \"431776e600b622e8bb1b88d9c1295acbd9fdd48bd3d34a8d59948cee810d5eb9\" successfully" Feb 13 19:51:44.858097 containerd[1485]: time="2025-02-13T19:51:44.856514165Z" level=info msg="StopPodSandbox for \"431776e600b622e8bb1b88d9c1295acbd9fdd48bd3d34a8d59948cee810d5eb9\" returns successfully" Feb 13 19:51:44.858171 systemd[1]: run-netns-cni\x2dfaa7e504\x2d777c\x2dac10\x2dec9d\x2ddcaff66d82a1.mount: Deactivated successfully. 
Feb 13 19:51:44.858909 containerd[1485]: time="2025-02-13T19:51:44.858429336Z" level=info msg="StopPodSandbox for \"34f8d7da5d1e12bf2375d0e38e10b5cf15cf279f133a34314ffc111dc637cefc\"" Feb 13 19:51:44.858909 containerd[1485]: time="2025-02-13T19:51:44.858524695Z" level=info msg="TearDown network for sandbox \"34f8d7da5d1e12bf2375d0e38e10b5cf15cf279f133a34314ffc111dc637cefc\" successfully" Feb 13 19:51:44.858909 containerd[1485]: time="2025-02-13T19:51:44.858544121Z" level=info msg="StopPodSandbox for \"34f8d7da5d1e12bf2375d0e38e10b5cf15cf279f133a34314ffc111dc637cefc\" returns successfully" Feb 13 19:51:44.859035 kubelet[1788]: I0213 19:51:44.858740 1788 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="6b931b8cd2a0f8a1525e7aa3a0e7f029d4cb8ca800708b05d12333e860f930bd" Feb 13 19:51:44.859290 containerd[1485]: time="2025-02-13T19:51:44.859259393Z" level=info msg="StopPodSandbox for \"6b931b8cd2a0f8a1525e7aa3a0e7f029d4cb8ca800708b05d12333e860f930bd\"" Feb 13 19:51:44.859333 containerd[1485]: time="2025-02-13T19:51:44.859291302Z" level=info msg="StopPodSandbox for \"15a61bbbb7c84cd96aea0b67315d5924e3d6e98bb1c898830f11866a55454496\"" Feb 13 19:51:44.859428 containerd[1485]: time="2025-02-13T19:51:44.859407691Z" level=info msg="TearDown network for sandbox \"15a61bbbb7c84cd96aea0b67315d5924e3d6e98bb1c898830f11866a55454496\" successfully" Feb 13 19:51:44.859464 containerd[1485]: time="2025-02-13T19:51:44.859425985Z" level=info msg="StopPodSandbox for \"15a61bbbb7c84cd96aea0b67315d5924e3d6e98bb1c898830f11866a55454496\" returns successfully" Feb 13 19:51:44.859464 containerd[1485]: time="2025-02-13T19:51:44.859452084Z" level=info msg="Ensure that sandbox 6b931b8cd2a0f8a1525e7aa3a0e7f029d4cb8ca800708b05d12333e860f930bd in task-service has been cleanup successfully" Feb 13 19:51:44.859911 containerd[1485]: time="2025-02-13T19:51:44.859881198Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:csi-node-driver-h8s4b,Uid:c3950c8a-700c-4c8b-8e8b-c3137c3cc22f,Namespace:calico-system,Attempt:3,}" Feb 13 19:51:44.860495 containerd[1485]: time="2025-02-13T19:51:44.860465805Z" level=info msg="TearDown network for sandbox \"6b931b8cd2a0f8a1525e7aa3a0e7f029d4cb8ca800708b05d12333e860f930bd\" successfully" Feb 13 19:51:44.860495 containerd[1485]: time="2025-02-13T19:51:44.860489830Z" level=info msg="StopPodSandbox for \"6b931b8cd2a0f8a1525e7aa3a0e7f029d4cb8ca800708b05d12333e860f930bd\" returns successfully" Feb 13 19:51:44.860882 containerd[1485]: time="2025-02-13T19:51:44.860859022Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-7fcdb87857-gmvsz,Uid:56474fd3-a840-47fd-8f80-78f96c78e294,Namespace:default,Attempt:1,}" Feb 13 19:51:44.861891 systemd[1]: run-netns-cni\x2d0ecb2519\x2d061b\x2d345c\x2d1584\x2df69c656bf371.mount: Deactivated successfully. Feb 13 19:51:44.965446 containerd[1485]: time="2025-02-13T19:51:44.965361514Z" level=error msg="Failed to destroy network for sandbox \"6e8b9377388a3c072cdc71f9d98c08591e8569ce8359274b1949afe88def139e\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:51:44.967636 containerd[1485]: time="2025-02-13T19:51:44.966809349Z" level=error msg="encountered an error cleaning up failed sandbox \"6e8b9377388a3c072cdc71f9d98c08591e8569ce8359274b1949afe88def139e\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:51:44.967636 containerd[1485]: time="2025-02-13T19:51:44.966939142Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-h8s4b,Uid:c3950c8a-700c-4c8b-8e8b-c3137c3cc22f,Namespace:calico-system,Attempt:3,} failed, 
error" error="failed to setup network for sandbox \"6e8b9377388a3c072cdc71f9d98c08591e8569ce8359274b1949afe88def139e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:51:44.967762 kubelet[1788]: E0213 19:51:44.967167 1788 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6e8b9377388a3c072cdc71f9d98c08591e8569ce8359274b1949afe88def139e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:51:44.967762 kubelet[1788]: E0213 19:51:44.967224 1788 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6e8b9377388a3c072cdc71f9d98c08591e8569ce8359274b1949afe88def139e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-h8s4b" Feb 13 19:51:44.967762 kubelet[1788]: E0213 19:51:44.967252 1788 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6e8b9377388a3c072cdc71f9d98c08591e8569ce8359274b1949afe88def139e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-h8s4b" Feb 13 19:51:44.967902 kubelet[1788]: E0213 19:51:44.967298 1788 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-h8s4b_calico-system(c3950c8a-700c-4c8b-8e8b-c3137c3cc22f)\" with CreatePodSandboxError: \"Failed to create 
sandbox for pod \\\"csi-node-driver-h8s4b_calico-system(c3950c8a-700c-4c8b-8e8b-c3137c3cc22f)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"6e8b9377388a3c072cdc71f9d98c08591e8569ce8359274b1949afe88def139e\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-h8s4b" podUID="c3950c8a-700c-4c8b-8e8b-c3137c3cc22f" Feb 13 19:51:44.970056 containerd[1485]: time="2025-02-13T19:51:44.969998108Z" level=error msg="Failed to destroy network for sandbox \"3a8292aefdc43567442f915d981ec2b5cfc36fac526aa37a5c396639b8ab52dd\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:51:44.970466 containerd[1485]: time="2025-02-13T19:51:44.970436840Z" level=error msg="encountered an error cleaning up failed sandbox \"3a8292aefdc43567442f915d981ec2b5cfc36fac526aa37a5c396639b8ab52dd\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:51:44.970525 containerd[1485]: time="2025-02-13T19:51:44.970501862Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-7fcdb87857-gmvsz,Uid:56474fd3-a840-47fd-8f80-78f96c78e294,Namespace:default,Attempt:1,} failed, error" error="failed to setup network for sandbox \"3a8292aefdc43567442f915d981ec2b5cfc36fac526aa37a5c396639b8ab52dd\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:51:44.970928 kubelet[1788]: E0213 19:51:44.970821 1788 log.go:32] "RunPodSandbox from runtime 
service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3a8292aefdc43567442f915d981ec2b5cfc36fac526aa37a5c396639b8ab52dd\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:51:44.970989 kubelet[1788]: E0213 19:51:44.970932 1788 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3a8292aefdc43567442f915d981ec2b5cfc36fac526aa37a5c396639b8ab52dd\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="default/nginx-deployment-7fcdb87857-gmvsz" Feb 13 19:51:44.970989 kubelet[1788]: E0213 19:51:44.970962 1788 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3a8292aefdc43567442f915d981ec2b5cfc36fac526aa37a5c396639b8ab52dd\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="default/nginx-deployment-7fcdb87857-gmvsz" Feb 13 19:51:44.971121 kubelet[1788]: E0213 19:51:44.971027 1788 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"nginx-deployment-7fcdb87857-gmvsz_default(56474fd3-a840-47fd-8f80-78f96c78e294)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"nginx-deployment-7fcdb87857-gmvsz_default(56474fd3-a840-47fd-8f80-78f96c78e294)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"3a8292aefdc43567442f915d981ec2b5cfc36fac526aa37a5c396639b8ab52dd\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node 
container is running and has mounted /var/lib/calico/\"" pod="default/nginx-deployment-7fcdb87857-gmvsz" podUID="56474fd3-a840-47fd-8f80-78f96c78e294" Feb 13 19:51:45.214076 kubelet[1788]: E0213 19:51:45.213948 1788 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 19:51:45.852799 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-6e8b9377388a3c072cdc71f9d98c08591e8569ce8359274b1949afe88def139e-shm.mount: Deactivated successfully. Feb 13 19:51:45.862155 kubelet[1788]: I0213 19:51:45.862123 1788 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="6e8b9377388a3c072cdc71f9d98c08591e8569ce8359274b1949afe88def139e" Feb 13 19:51:45.862674 containerd[1485]: time="2025-02-13T19:51:45.862639895Z" level=info msg="StopPodSandbox for \"6e8b9377388a3c072cdc71f9d98c08591e8569ce8359274b1949afe88def139e\"" Feb 13 19:51:45.863076 containerd[1485]: time="2025-02-13T19:51:45.862892882Z" level=info msg="Ensure that sandbox 6e8b9377388a3c072cdc71f9d98c08591e8569ce8359274b1949afe88def139e in task-service has been cleanup successfully" Feb 13 19:51:45.864917 systemd[1]: run-netns-cni\x2d3bb3ebec\x2dbc6d\x2d9a34\x2d9de8\x2de226b9f9d6c8.mount: Deactivated successfully. 
Feb 13 19:51:45.865874 containerd[1485]: time="2025-02-13T19:51:45.865823830Z" level=info msg="TearDown network for sandbox \"6e8b9377388a3c072cdc71f9d98c08591e8569ce8359274b1949afe88def139e\" successfully" Feb 13 19:51:45.865874 containerd[1485]: time="2025-02-13T19:51:45.865861924Z" level=info msg="StopPodSandbox for \"6e8b9377388a3c072cdc71f9d98c08591e8569ce8359274b1949afe88def139e\" returns successfully" Feb 13 19:51:45.866156 kubelet[1788]: I0213 19:51:45.866117 1788 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="3a8292aefdc43567442f915d981ec2b5cfc36fac526aa37a5c396639b8ab52dd" Feb 13 19:51:45.866618 containerd[1485]: time="2025-02-13T19:51:45.866561481Z" level=info msg="StopPodSandbox for \"3a8292aefdc43567442f915d981ec2b5cfc36fac526aa37a5c396639b8ab52dd\"" Feb 13 19:51:45.866790 containerd[1485]: time="2025-02-13T19:51:45.866763069Z" level=info msg="Ensure that sandbox 3a8292aefdc43567442f915d981ec2b5cfc36fac526aa37a5c396639b8ab52dd in task-service has been cleanup successfully" Feb 13 19:51:45.867066 containerd[1485]: time="2025-02-13T19:51:45.867026016Z" level=info msg="StopPodSandbox for \"431776e600b622e8bb1b88d9c1295acbd9fdd48bd3d34a8d59948cee810d5eb9\"" Feb 13 19:51:45.867181 containerd[1485]: time="2025-02-13T19:51:45.867129355Z" level=info msg="TearDown network for sandbox \"431776e600b622e8bb1b88d9c1295acbd9fdd48bd3d34a8d59948cee810d5eb9\" successfully" Feb 13 19:51:45.867215 containerd[1485]: time="2025-02-13T19:51:45.867179943Z" level=info msg="StopPodSandbox for \"431776e600b622e8bb1b88d9c1295acbd9fdd48bd3d34a8d59948cee810d5eb9\" returns successfully" Feb 13 19:51:45.867303 containerd[1485]: time="2025-02-13T19:51:45.867275477Z" level=info msg="TearDown network for sandbox \"3a8292aefdc43567442f915d981ec2b5cfc36fac526aa37a5c396639b8ab52dd\" successfully" Feb 13 19:51:45.867303 containerd[1485]: time="2025-02-13T19:51:45.867296688Z" level=info msg="StopPodSandbox for 
\"3a8292aefdc43567442f915d981ec2b5cfc36fac526aa37a5c396639b8ab52dd\" returns successfully" Feb 13 19:51:45.868136 containerd[1485]: time="2025-02-13T19:51:45.868109684Z" level=info msg="StopPodSandbox for \"34f8d7da5d1e12bf2375d0e38e10b5cf15cf279f133a34314ffc111dc637cefc\"" Feb 13 19:51:45.868748 containerd[1485]: time="2025-02-13T19:51:45.868550454Z" level=info msg="TearDown network for sandbox \"34f8d7da5d1e12bf2375d0e38e10b5cf15cf279f133a34314ffc111dc637cefc\" successfully" Feb 13 19:51:45.868748 containerd[1485]: time="2025-02-13T19:51:45.868564681Z" level=info msg="StopPodSandbox for \"34f8d7da5d1e12bf2375d0e38e10b5cf15cf279f133a34314ffc111dc637cefc\" returns successfully" Feb 13 19:51:45.868748 containerd[1485]: time="2025-02-13T19:51:45.868632392Z" level=info msg="StopPodSandbox for \"6b931b8cd2a0f8a1525e7aa3a0e7f029d4cb8ca800708b05d12333e860f930bd\"" Feb 13 19:51:45.868748 containerd[1485]: time="2025-02-13T19:51:45.868697477Z" level=info msg="TearDown network for sandbox \"6b931b8cd2a0f8a1525e7aa3a0e7f029d4cb8ca800708b05d12333e860f930bd\" successfully" Feb 13 19:51:45.868748 containerd[1485]: time="2025-02-13T19:51:45.868706394Z" level=info msg="StopPodSandbox for \"6b931b8cd2a0f8a1525e7aa3a0e7f029d4cb8ca800708b05d12333e860f930bd\" returns successfully" Feb 13 19:51:45.869315 containerd[1485]: time="2025-02-13T19:51:45.869296391Z" level=info msg="StopPodSandbox for \"15a61bbbb7c84cd96aea0b67315d5924e3d6e98bb1c898830f11866a55454496\"" Feb 13 19:51:45.869433 systemd[1]: run-netns-cni\x2d76952739\x2d957a\x2ddb53\x2d851c\x2d4ae72c9d333f.mount: Deactivated successfully. 
Feb 13 19:51:45.869541 containerd[1485]: time="2025-02-13T19:51:45.869435749Z" level=info msg="TearDown network for sandbox \"15a61bbbb7c84cd96aea0b67315d5924e3d6e98bb1c898830f11866a55454496\" successfully" Feb 13 19:51:45.869541 containerd[1485]: time="2025-02-13T19:51:45.869449376Z" level=info msg="StopPodSandbox for \"15a61bbbb7c84cd96aea0b67315d5924e3d6e98bb1c898830f11866a55454496\" returns successfully" Feb 13 19:51:45.869541 containerd[1485]: time="2025-02-13T19:51:45.869533578Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-7fcdb87857-gmvsz,Uid:56474fd3-a840-47fd-8f80-78f96c78e294,Namespace:default,Attempt:2,}" Feb 13 19:51:45.870458 containerd[1485]: time="2025-02-13T19:51:45.870431889Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-h8s4b,Uid:c3950c8a-700c-4c8b-8e8b-c3137c3cc22f,Namespace:calico-system,Attempt:4,}" Feb 13 19:51:46.199993 kubelet[1788]: E0213 19:51:46.199858 1788 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 19:51:46.214433 kubelet[1788]: E0213 19:51:46.214392 1788 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 19:51:46.306268 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1759151223.mount: Deactivated successfully. 
Feb 13 19:51:46.797273 containerd[1485]: time="2025-02-13T19:51:46.797214951Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:51:46.801487 containerd[1485]: time="2025-02-13T19:51:46.801452893Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.29.1: active requests=0, bytes read=142742010" Feb 13 19:51:46.805579 containerd[1485]: time="2025-02-13T19:51:46.805547159Z" level=info msg="ImageCreate event name:\"sha256:feb26d4585d68e875d9bd9bd6c27ea9f2d5c9ed9ef70f8b8cb0ebb0559a1d664\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:51:46.810166 containerd[1485]: time="2025-02-13T19:51:46.810033409Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:99c3917516efe1f807a0cfdf2d14b628b7c5cc6bd8a9ee5a253154f31756bea1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:51:46.812368 containerd[1485]: time="2025-02-13T19:51:46.812201018Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.29.1\" with image id \"sha256:feb26d4585d68e875d9bd9bd6c27ea9f2d5c9ed9ef70f8b8cb0ebb0559a1d664\", repo tag \"ghcr.io/flatcar/calico/node:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/node@sha256:99c3917516efe1f807a0cfdf2d14b628b7c5cc6bd8a9ee5a253154f31756bea1\", size \"142741872\" in 5.970495978s" Feb 13 19:51:46.812368 containerd[1485]: time="2025-02-13T19:51:46.812233962Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.29.1\" returns image reference \"sha256:feb26d4585d68e875d9bd9bd6c27ea9f2d5c9ed9ef70f8b8cb0ebb0559a1d664\"" Feb 13 19:51:46.824505 containerd[1485]: time="2025-02-13T19:51:46.824462195Z" level=info msg="CreateContainer within sandbox \"9ff27ff6f6a869c3d371390a7ee7f69b3e9b51a694a3af4ae2b9e25757dcbec2\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Feb 13 19:51:46.852214 containerd[1485]: time="2025-02-13T19:51:46.852142350Z" level=info 
msg="CreateContainer within sandbox \"9ff27ff6f6a869c3d371390a7ee7f69b3e9b51a694a3af4ae2b9e25757dcbec2\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"4ed7545281684accf1c3e0ee0902fb35659965b6109e3c393be8fcd9539a0fd5\"" Feb 13 19:51:46.853026 containerd[1485]: time="2025-02-13T19:51:46.852995942Z" level=info msg="StartContainer for \"4ed7545281684accf1c3e0ee0902fb35659965b6109e3c393be8fcd9539a0fd5\"" Feb 13 19:51:46.859613 containerd[1485]: time="2025-02-13T19:51:46.859565770Z" level=error msg="Failed to destroy network for sandbox \"574c4e98ea435805a207cde65f61f8fad61bdb53a1348b144191b1859ff0f402\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:51:46.860055 containerd[1485]: time="2025-02-13T19:51:46.860022599Z" level=error msg="encountered an error cleaning up failed sandbox \"574c4e98ea435805a207cde65f61f8fad61bdb53a1348b144191b1859ff0f402\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:51:46.860159 containerd[1485]: time="2025-02-13T19:51:46.860112041Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-7fcdb87857-gmvsz,Uid:56474fd3-a840-47fd-8f80-78f96c78e294,Namespace:default,Attempt:2,} failed, error" error="failed to setup network for sandbox \"574c4e98ea435805a207cde65f61f8fad61bdb53a1348b144191b1859ff0f402\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:51:46.860427 kubelet[1788]: E0213 19:51:46.860362 1788 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network 
for sandbox \"574c4e98ea435805a207cde65f61f8fad61bdb53a1348b144191b1859ff0f402\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:51:46.860480 kubelet[1788]: E0213 19:51:46.860462 1788 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"574c4e98ea435805a207cde65f61f8fad61bdb53a1348b144191b1859ff0f402\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="default/nginx-deployment-7fcdb87857-gmvsz" Feb 13 19:51:46.860508 kubelet[1788]: E0213 19:51:46.860492 1788 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"574c4e98ea435805a207cde65f61f8fad61bdb53a1348b144191b1859ff0f402\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="default/nginx-deployment-7fcdb87857-gmvsz" Feb 13 19:51:46.861347 kubelet[1788]: E0213 19:51:46.860542 1788 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"nginx-deployment-7fcdb87857-gmvsz_default(56474fd3-a840-47fd-8f80-78f96c78e294)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"nginx-deployment-7fcdb87857-gmvsz_default(56474fd3-a840-47fd-8f80-78f96c78e294)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"574c4e98ea435805a207cde65f61f8fad61bdb53a1348b144191b1859ff0f402\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" 
pod="default/nginx-deployment-7fcdb87857-gmvsz" podUID="56474fd3-a840-47fd-8f80-78f96c78e294" Feb 13 19:51:46.861916 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-574c4e98ea435805a207cde65f61f8fad61bdb53a1348b144191b1859ff0f402-shm.mount: Deactivated successfully. Feb 13 19:51:46.867167 containerd[1485]: time="2025-02-13T19:51:46.867094142Z" level=error msg="Failed to destroy network for sandbox \"5cc0fabca8283f25da4eaf48de6c44c77ff3ccb82d367772b047908d546e6857\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:51:46.870067 containerd[1485]: time="2025-02-13T19:51:46.870022625Z" level=error msg="encountered an error cleaning up failed sandbox \"5cc0fabca8283f25da4eaf48de6c44c77ff3ccb82d367772b047908d546e6857\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:51:46.870151 containerd[1485]: time="2025-02-13T19:51:46.870115944Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-h8s4b,Uid:c3950c8a-700c-4c8b-8e8b-c3137c3cc22f,Namespace:calico-system,Attempt:4,} failed, error" error="failed to setup network for sandbox \"5cc0fabca8283f25da4eaf48de6c44c77ff3ccb82d367772b047908d546e6857\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:51:46.870103 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-5cc0fabca8283f25da4eaf48de6c44c77ff3ccb82d367772b047908d546e6857-shm.mount: Deactivated successfully. 
Feb 13 19:51:46.871449 kubelet[1788]: E0213 19:51:46.870297 1788 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"5cc0fabca8283f25da4eaf48de6c44c77ff3ccb82d367772b047908d546e6857\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:51:46.871449 kubelet[1788]: E0213 19:51:46.870345 1788 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"5cc0fabca8283f25da4eaf48de6c44c77ff3ccb82d367772b047908d546e6857\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-h8s4b" Feb 13 19:51:46.871449 kubelet[1788]: E0213 19:51:46.870371 1788 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"5cc0fabca8283f25da4eaf48de6c44c77ff3ccb82d367772b047908d546e6857\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-h8s4b" Feb 13 19:51:46.871553 containerd[1485]: time="2025-02-13T19:51:46.871163680Z" level=info msg="StopPodSandbox for \"574c4e98ea435805a207cde65f61f8fad61bdb53a1348b144191b1859ff0f402\"" Feb 13 19:51:46.871553 containerd[1485]: time="2025-02-13T19:51:46.871401367Z" level=info msg="Ensure that sandbox 574c4e98ea435805a207cde65f61f8fad61bdb53a1348b144191b1859ff0f402 in task-service has been cleanup successfully" Feb 13 19:51:46.871619 kubelet[1788]: E0213 19:51:46.870476 1788 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for 
\"csi-node-driver-h8s4b_calico-system(c3950c8a-700c-4c8b-8e8b-c3137c3cc22f)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-h8s4b_calico-system(c3950c8a-700c-4c8b-8e8b-c3137c3cc22f)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"5cc0fabca8283f25da4eaf48de6c44c77ff3ccb82d367772b047908d546e6857\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-h8s4b" podUID="c3950c8a-700c-4c8b-8e8b-c3137c3cc22f" Feb 13 19:51:46.871619 kubelet[1788]: I0213 19:51:46.870751 1788 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="574c4e98ea435805a207cde65f61f8fad61bdb53a1348b144191b1859ff0f402" Feb 13 19:51:46.871711 containerd[1485]: time="2025-02-13T19:51:46.871618145Z" level=info msg="TearDown network for sandbox \"574c4e98ea435805a207cde65f61f8fad61bdb53a1348b144191b1859ff0f402\" successfully" Feb 13 19:51:46.871711 containerd[1485]: time="2025-02-13T19:51:46.871630869Z" level=info msg="StopPodSandbox for \"574c4e98ea435805a207cde65f61f8fad61bdb53a1348b144191b1859ff0f402\" returns successfully" Feb 13 19:51:46.872278 containerd[1485]: time="2025-02-13T19:51:46.872101434Z" level=info msg="StopPodSandbox for \"3a8292aefdc43567442f915d981ec2b5cfc36fac526aa37a5c396639b8ab52dd\"" Feb 13 19:51:46.872278 containerd[1485]: time="2025-02-13T19:51:46.872187259Z" level=info msg="TearDown network for sandbox \"3a8292aefdc43567442f915d981ec2b5cfc36fac526aa37a5c396639b8ab52dd\" successfully" Feb 13 19:51:46.872278 containerd[1485]: time="2025-02-13T19:51:46.872196778Z" level=info msg="StopPodSandbox for \"3a8292aefdc43567442f915d981ec2b5cfc36fac526aa37a5c396639b8ab52dd\" returns successfully" Feb 13 19:51:46.872937 containerd[1485]: time="2025-02-13T19:51:46.872686359Z" level=info msg="StopPodSandbox for 
\"6b931b8cd2a0f8a1525e7aa3a0e7f029d4cb8ca800708b05d12333e860f930bd\"" Feb 13 19:51:46.872937 containerd[1485]: time="2025-02-13T19:51:46.872759530Z" level=info msg="TearDown network for sandbox \"6b931b8cd2a0f8a1525e7aa3a0e7f029d4cb8ca800708b05d12333e860f930bd\" successfully" Feb 13 19:51:46.872937 containerd[1485]: time="2025-02-13T19:51:46.872767765Z" level=info msg="StopPodSandbox for \"6b931b8cd2a0f8a1525e7aa3a0e7f029d4cb8ca800708b05d12333e860f930bd\" returns successfully" Feb 13 19:51:46.873160 containerd[1485]: time="2025-02-13T19:51:46.873139942Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-7fcdb87857-gmvsz,Uid:56474fd3-a840-47fd-8f80-78f96c78e294,Namespace:default,Attempt:3,}" Feb 13 19:51:46.874686 systemd[1]: run-netns-cni\x2d842b4679\x2dd310\x2d4f97\x2d0fea\x2dac7a2b64d176.mount: Deactivated successfully. Feb 13 19:51:46.890637 systemd[1]: Started cri-containerd-4ed7545281684accf1c3e0ee0902fb35659965b6109e3c393be8fcd9539a0fd5.scope - libcontainer container 4ed7545281684accf1c3e0ee0902fb35659965b6109e3c393be8fcd9539a0fd5. 
Feb 13 19:51:46.932192 containerd[1485]: time="2025-02-13T19:51:46.932100540Z" level=info msg="StartContainer for \"4ed7545281684accf1c3e0ee0902fb35659965b6109e3c393be8fcd9539a0fd5\" returns successfully" Feb 13 19:51:46.944620 containerd[1485]: time="2025-02-13T19:51:46.944551881Z" level=error msg="Failed to destroy network for sandbox \"e734fa1514fdd839c07aa53c9cb2eff43e2ce19dc3cb54c4209ed50859b31942\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:51:46.945225 containerd[1485]: time="2025-02-13T19:51:46.944963232Z" level=error msg="encountered an error cleaning up failed sandbox \"e734fa1514fdd839c07aa53c9cb2eff43e2ce19dc3cb54c4209ed50859b31942\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:51:46.945225 containerd[1485]: time="2025-02-13T19:51:46.945027916Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-7fcdb87857-gmvsz,Uid:56474fd3-a840-47fd-8f80-78f96c78e294,Namespace:default,Attempt:3,} failed, error" error="failed to setup network for sandbox \"e734fa1514fdd839c07aa53c9cb2eff43e2ce19dc3cb54c4209ed50859b31942\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:51:46.945338 kubelet[1788]: E0213 19:51:46.945274 1788 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e734fa1514fdd839c07aa53c9cb2eff43e2ce19dc3cb54c4209ed50859b31942\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and 
has mounted /var/lib/calico/" Feb 13 19:51:46.945378 kubelet[1788]: E0213 19:51:46.945341 1788 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e734fa1514fdd839c07aa53c9cb2eff43e2ce19dc3cb54c4209ed50859b31942\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="default/nginx-deployment-7fcdb87857-gmvsz" Feb 13 19:51:46.945378 kubelet[1788]: E0213 19:51:46.945368 1788 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e734fa1514fdd839c07aa53c9cb2eff43e2ce19dc3cb54c4209ed50859b31942\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="default/nginx-deployment-7fcdb87857-gmvsz" Feb 13 19:51:46.945550 kubelet[1788]: E0213 19:51:46.945431 1788 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"nginx-deployment-7fcdb87857-gmvsz_default(56474fd3-a840-47fd-8f80-78f96c78e294)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"nginx-deployment-7fcdb87857-gmvsz_default(56474fd3-a840-47fd-8f80-78f96c78e294)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"e734fa1514fdd839c07aa53c9cb2eff43e2ce19dc3cb54c4209ed50859b31942\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="default/nginx-deployment-7fcdb87857-gmvsz" podUID="56474fd3-a840-47fd-8f80-78f96c78e294" Feb 13 19:51:46.995857 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information. 
Feb 13 19:51:46.996028 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld . All Rights Reserved. Feb 13 19:51:47.214665 kubelet[1788]: E0213 19:51:47.214595 1788 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 19:51:47.877964 kubelet[1788]: I0213 19:51:47.877865 1788 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e734fa1514fdd839c07aa53c9cb2eff43e2ce19dc3cb54c4209ed50859b31942" Feb 13 19:51:47.878578 containerd[1485]: time="2025-02-13T19:51:47.878541095Z" level=info msg="StopPodSandbox for \"e734fa1514fdd839c07aa53c9cb2eff43e2ce19dc3cb54c4209ed50859b31942\"" Feb 13 19:51:47.878996 containerd[1485]: time="2025-02-13T19:51:47.878768061Z" level=info msg="Ensure that sandbox e734fa1514fdd839c07aa53c9cb2eff43e2ce19dc3cb54c4209ed50859b31942 in task-service has been cleanup successfully" Feb 13 19:51:47.878996 containerd[1485]: time="2025-02-13T19:51:47.878984096Z" level=info msg="TearDown network for sandbox \"e734fa1514fdd839c07aa53c9cb2eff43e2ce19dc3cb54c4209ed50859b31942\" successfully" Feb 13 19:51:47.878996 containerd[1485]: time="2025-02-13T19:51:47.878997161Z" level=info msg="StopPodSandbox for \"e734fa1514fdd839c07aa53c9cb2eff43e2ce19dc3cb54c4209ed50859b31942\" returns successfully" Feb 13 19:51:47.879464 containerd[1485]: time="2025-02-13T19:51:47.879262791Z" level=info msg="StopPodSandbox for \"574c4e98ea435805a207cde65f61f8fad61bdb53a1348b144191b1859ff0f402\"" Feb 13 19:51:47.879464 containerd[1485]: time="2025-02-13T19:51:47.879398502Z" level=info msg="TearDown network for sandbox \"574c4e98ea435805a207cde65f61f8fad61bdb53a1348b144191b1859ff0f402\" successfully" Feb 13 19:51:47.879464 containerd[1485]: time="2025-02-13T19:51:47.879413130Z" level=info msg="StopPodSandbox for \"574c4e98ea435805a207cde65f61f8fad61bdb53a1348b144191b1859ff0f402\" returns successfully" Feb 13 19:51:47.879719 containerd[1485]: 
time="2025-02-13T19:51:47.879688388Z" level=info msg="StopPodSandbox for \"3a8292aefdc43567442f915d981ec2b5cfc36fac526aa37a5c396639b8ab52dd\"" Feb 13 19:51:47.879924 containerd[1485]: time="2025-02-13T19:51:47.879776397Z" level=info msg="TearDown network for sandbox \"3a8292aefdc43567442f915d981ec2b5cfc36fac526aa37a5c396639b8ab52dd\" successfully" Feb 13 19:51:47.879924 containerd[1485]: time="2025-02-13T19:51:47.879794733Z" level=info msg="StopPodSandbox for \"3a8292aefdc43567442f915d981ec2b5cfc36fac526aa37a5c396639b8ab52dd\" returns successfully" Feb 13 19:51:47.880344 containerd[1485]: time="2025-02-13T19:51:47.880175384Z" level=info msg="StopPodSandbox for \"6b931b8cd2a0f8a1525e7aa3a0e7f029d4cb8ca800708b05d12333e860f930bd\"" Feb 13 19:51:47.880344 containerd[1485]: time="2025-02-13T19:51:47.880277460Z" level=info msg="TearDown network for sandbox \"6b931b8cd2a0f8a1525e7aa3a0e7f029d4cb8ca800708b05d12333e860f930bd\" successfully" Feb 13 19:51:47.880344 containerd[1485]: time="2025-02-13T19:51:47.880294022Z" level=info msg="StopPodSandbox for \"6b931b8cd2a0f8a1525e7aa3a0e7f029d4cb8ca800708b05d12333e860f930bd\" returns successfully" Feb 13 19:51:47.880930 containerd[1485]: time="2025-02-13T19:51:47.880794293Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-7fcdb87857-gmvsz,Uid:56474fd3-a840-47fd-8f80-78f96c78e294,Namespace:default,Attempt:4,}" Feb 13 19:51:47.881150 systemd[1]: run-netns-cni\x2dd1eb37b9\x2de0ac\x2d8c3b\x2d3a06\x2d28b26539492d.mount: Deactivated successfully. 
Feb 13 19:51:47.882430 kubelet[1788]: E0213 19:51:47.882248 1788 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:51:47.885090 kubelet[1788]: I0213 19:51:47.885060 1788 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="5cc0fabca8283f25da4eaf48de6c44c77ff3ccb82d367772b047908d546e6857" Feb 13 19:51:47.887601 containerd[1485]: time="2025-02-13T19:51:47.887528606Z" level=info msg="StopPodSandbox for \"5cc0fabca8283f25da4eaf48de6c44c77ff3ccb82d367772b047908d546e6857\"" Feb 13 19:51:47.887790 containerd[1485]: time="2025-02-13T19:51:47.887760371Z" level=info msg="Ensure that sandbox 5cc0fabca8283f25da4eaf48de6c44c77ff3ccb82d367772b047908d546e6857 in task-service has been cleanup successfully" Feb 13 19:51:47.888119 containerd[1485]: time="2025-02-13T19:51:47.887985624Z" level=info msg="TearDown network for sandbox \"5cc0fabca8283f25da4eaf48de6c44c77ff3ccb82d367772b047908d546e6857\" successfully" Feb 13 19:51:47.888119 containerd[1485]: time="2025-02-13T19:51:47.888006584Z" level=info msg="StopPodSandbox for \"5cc0fabca8283f25da4eaf48de6c44c77ff3ccb82d367772b047908d546e6857\" returns successfully" Feb 13 19:51:47.889980 systemd[1]: run-netns-cni\x2db6335a63\x2dbdcd\x2da61a\x2db5b9\x2dd0e882aa305b.mount: Deactivated successfully. 
Feb 13 19:51:47.890375 containerd[1485]: time="2025-02-13T19:51:47.890336780Z" level=info msg="StopPodSandbox for \"6e8b9377388a3c072cdc71f9d98c08591e8569ce8359274b1949afe88def139e\"" Feb 13 19:51:47.890471 containerd[1485]: time="2025-02-13T19:51:47.890452071Z" level=info msg="TearDown network for sandbox \"6e8b9377388a3c072cdc71f9d98c08591e8569ce8359274b1949afe88def139e\" successfully" Feb 13 19:51:47.890471 containerd[1485]: time="2025-02-13T19:51:47.890466910Z" level=info msg="StopPodSandbox for \"6e8b9377388a3c072cdc71f9d98c08591e8569ce8359274b1949afe88def139e\" returns successfully" Feb 13 19:51:47.890894 containerd[1485]: time="2025-02-13T19:51:47.890857880Z" level=info msg="StopPodSandbox for \"431776e600b622e8bb1b88d9c1295acbd9fdd48bd3d34a8d59948cee810d5eb9\"" Feb 13 19:51:47.890994 containerd[1485]: time="2025-02-13T19:51:47.890974795Z" level=info msg="TearDown network for sandbox \"431776e600b622e8bb1b88d9c1295acbd9fdd48bd3d34a8d59948cee810d5eb9\" successfully" Feb 13 19:51:47.891029 containerd[1485]: time="2025-02-13T19:51:47.890991597Z" level=info msg="StopPodSandbox for \"431776e600b622e8bb1b88d9c1295acbd9fdd48bd3d34a8d59948cee810d5eb9\" returns successfully" Feb 13 19:51:47.891375 containerd[1485]: time="2025-02-13T19:51:47.891246757Z" level=info msg="StopPodSandbox for \"34f8d7da5d1e12bf2375d0e38e10b5cf15cf279f133a34314ffc111dc637cefc\"" Feb 13 19:51:47.891375 containerd[1485]: time="2025-02-13T19:51:47.891324477Z" level=info msg="TearDown network for sandbox \"34f8d7da5d1e12bf2375d0e38e10b5cf15cf279f133a34314ffc111dc637cefc\" successfully" Feb 13 19:51:47.891375 containerd[1485]: time="2025-02-13T19:51:47.891333163Z" level=info msg="StopPodSandbox for \"34f8d7da5d1e12bf2375d0e38e10b5cf15cf279f133a34314ffc111dc637cefc\" returns successfully" Feb 13 19:51:47.892031 containerd[1485]: time="2025-02-13T19:51:47.892007869Z" level=info msg="StopPodSandbox for \"15a61bbbb7c84cd96aea0b67315d5924e3d6e98bb1c898830f11866a55454496\"" Feb 13 19:51:47.892135 
containerd[1485]: time="2025-02-13T19:51:47.892117550Z" level=info msg="TearDown network for sandbox \"15a61bbbb7c84cd96aea0b67315d5924e3d6e98bb1c898830f11866a55454496\" successfully" Feb 13 19:51:47.892165 containerd[1485]: time="2025-02-13T19:51:47.892133591Z" level=info msg="StopPodSandbox for \"15a61bbbb7c84cd96aea0b67315d5924e3d6e98bb1c898830f11866a55454496\" returns successfully" Feb 13 19:51:47.892580 containerd[1485]: time="2025-02-13T19:51:47.892559108Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-h8s4b,Uid:c3950c8a-700c-4c8b-8e8b-c3137c3cc22f,Namespace:calico-system,Attempt:5,}" Feb 13 19:51:47.897685 kubelet[1788]: I0213 19:51:47.896294 1788 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-vcj9q" podStartSLOduration=4.811774301 podStartE2EDuration="21.896274824s" podCreationTimestamp="2025-02-13 19:51:26 +0000 UTC" firstStartedPulling="2025-02-13 19:51:29.728500995 +0000 UTC m=+3.856161884" lastFinishedPulling="2025-02-13 19:51:46.813001528 +0000 UTC m=+20.940662407" observedRunningTime="2025-02-13 19:51:47.896136538 +0000 UTC m=+22.023797417" watchObservedRunningTime="2025-02-13 19:51:47.896274824 +0000 UTC m=+22.023935703" Feb 13 19:51:48.015906 systemd-networkd[1401]: cali4af8e01ab34: Link UP Feb 13 19:51:48.016219 systemd-networkd[1401]: cali4af8e01ab34: Gained carrier Feb 13 19:51:48.025344 containerd[1485]: 2025-02-13 19:51:47.928 [INFO][2700] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Feb 13 19:51:48.025344 containerd[1485]: 2025-02-13 19:51:47.940 [INFO][2700] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {10.0.0.110-k8s-nginx--deployment--7fcdb87857--gmvsz-eth0 nginx-deployment-7fcdb87857- default 56474fd3-a840-47fd-8f80-78f96c78e294 1086 0 2025-02-13 19:51:44 +0000 UTC map[app:nginx pod-template-hash:7fcdb87857 projectcalico.org/namespace:default projectcalico.org/orchestrator:k8s 
projectcalico.org/serviceaccount:default] map[] [] [] []} {k8s 10.0.0.110 nginx-deployment-7fcdb87857-gmvsz eth0 default [] [] [kns.default ksa.default.default] cali4af8e01ab34 [] []}} ContainerID="833716071d04f23cf21879f87ba6a295be9ef8173ce0a4e486efa65b140fe931" Namespace="default" Pod="nginx-deployment-7fcdb87857-gmvsz" WorkloadEndpoint="10.0.0.110-k8s-nginx--deployment--7fcdb87857--gmvsz-" Feb 13 19:51:48.025344 containerd[1485]: 2025-02-13 19:51:47.940 [INFO][2700] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="833716071d04f23cf21879f87ba6a295be9ef8173ce0a4e486efa65b140fe931" Namespace="default" Pod="nginx-deployment-7fcdb87857-gmvsz" WorkloadEndpoint="10.0.0.110-k8s-nginx--deployment--7fcdb87857--gmvsz-eth0" Feb 13 19:51:48.025344 containerd[1485]: 2025-02-13 19:51:47.974 [INFO][2735] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="833716071d04f23cf21879f87ba6a295be9ef8173ce0a4e486efa65b140fe931" HandleID="k8s-pod-network.833716071d04f23cf21879f87ba6a295be9ef8173ce0a4e486efa65b140fe931" Workload="10.0.0.110-k8s-nginx--deployment--7fcdb87857--gmvsz-eth0" Feb 13 19:51:48.025344 containerd[1485]: 2025-02-13 19:51:47.982 [INFO][2735] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="833716071d04f23cf21879f87ba6a295be9ef8173ce0a4e486efa65b140fe931" HandleID="k8s-pod-network.833716071d04f23cf21879f87ba6a295be9ef8173ce0a4e486efa65b140fe931" Workload="10.0.0.110-k8s-nginx--deployment--7fcdb87857--gmvsz-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00027d0c0), Attrs:map[string]string{"namespace":"default", "node":"10.0.0.110", "pod":"nginx-deployment-7fcdb87857-gmvsz", "timestamp":"2025-02-13 19:51:47.974532815 +0000 UTC"}, Hostname:"10.0.0.110", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Feb 13 19:51:48.025344 
containerd[1485]: 2025-02-13 19:51:47.982 [INFO][2735] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Feb 13 19:51:48.025344 containerd[1485]: 2025-02-13 19:51:47.982 [INFO][2735] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Feb 13 19:51:48.025344 containerd[1485]: 2025-02-13 19:51:47.982 [INFO][2735] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host '10.0.0.110' Feb 13 19:51:48.025344 containerd[1485]: 2025-02-13 19:51:47.984 [INFO][2735] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.833716071d04f23cf21879f87ba6a295be9ef8173ce0a4e486efa65b140fe931" host="10.0.0.110" Feb 13 19:51:48.025344 containerd[1485]: 2025-02-13 19:51:47.987 [INFO][2735] ipam/ipam.go 372: Looking up existing affinities for host host="10.0.0.110" Feb 13 19:51:48.025344 containerd[1485]: 2025-02-13 19:51:47.991 [INFO][2735] ipam/ipam.go 489: Trying affinity for 192.168.70.64/26 host="10.0.0.110" Feb 13 19:51:48.025344 containerd[1485]: 2025-02-13 19:51:47.993 [INFO][2735] ipam/ipam.go 155: Attempting to load block cidr=192.168.70.64/26 host="10.0.0.110" Feb 13 19:51:48.025344 containerd[1485]: 2025-02-13 19:51:47.995 [INFO][2735] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.70.64/26 host="10.0.0.110" Feb 13 19:51:48.025344 containerd[1485]: 2025-02-13 19:51:47.995 [INFO][2735] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.70.64/26 handle="k8s-pod-network.833716071d04f23cf21879f87ba6a295be9ef8173ce0a4e486efa65b140fe931" host="10.0.0.110" Feb 13 19:51:48.025344 containerd[1485]: 2025-02-13 19:51:47.997 [INFO][2735] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.833716071d04f23cf21879f87ba6a295be9ef8173ce0a4e486efa65b140fe931 Feb 13 19:51:48.025344 containerd[1485]: 2025-02-13 19:51:48.000 [INFO][2735] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.70.64/26 
handle="k8s-pod-network.833716071d04f23cf21879f87ba6a295be9ef8173ce0a4e486efa65b140fe931" host="10.0.0.110" Feb 13 19:51:48.025344 containerd[1485]: 2025-02-13 19:51:48.004 [INFO][2735] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.70.65/26] block=192.168.70.64/26 handle="k8s-pod-network.833716071d04f23cf21879f87ba6a295be9ef8173ce0a4e486efa65b140fe931" host="10.0.0.110" Feb 13 19:51:48.025344 containerd[1485]: 2025-02-13 19:51:48.004 [INFO][2735] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.70.65/26] handle="k8s-pod-network.833716071d04f23cf21879f87ba6a295be9ef8173ce0a4e486efa65b140fe931" host="10.0.0.110" Feb 13 19:51:48.025344 containerd[1485]: 2025-02-13 19:51:48.005 [INFO][2735] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Feb 13 19:51:48.025344 containerd[1485]: 2025-02-13 19:51:48.005 [INFO][2735] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.70.65/26] IPv6=[] ContainerID="833716071d04f23cf21879f87ba6a295be9ef8173ce0a4e486efa65b140fe931" HandleID="k8s-pod-network.833716071d04f23cf21879f87ba6a295be9ef8173ce0a4e486efa65b140fe931" Workload="10.0.0.110-k8s-nginx--deployment--7fcdb87857--gmvsz-eth0" Feb 13 19:51:48.025905 containerd[1485]: 2025-02-13 19:51:48.009 [INFO][2700] cni-plugin/k8s.go 386: Populated endpoint ContainerID="833716071d04f23cf21879f87ba6a295be9ef8173ce0a4e486efa65b140fe931" Namespace="default" Pod="nginx-deployment-7fcdb87857-gmvsz" WorkloadEndpoint="10.0.0.110-k8s-nginx--deployment--7fcdb87857--gmvsz-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"10.0.0.110-k8s-nginx--deployment--7fcdb87857--gmvsz-eth0", GenerateName:"nginx-deployment-7fcdb87857-", Namespace:"default", SelfLink:"", UID:"56474fd3-a840-47fd-8f80-78f96c78e294", ResourceVersion:"1086", Generation:0, CreationTimestamp:time.Date(2025, time.February, 13, 19, 51, 44, 0, time.Local), DeletionTimestamp:, 
DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app":"nginx", "pod-template-hash":"7fcdb87857", "projectcalico.org/namespace":"default", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"default"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"10.0.0.110", ContainerID:"", Pod:"nginx-deployment-7fcdb87857-gmvsz", Endpoint:"eth0", ServiceAccountName:"default", IPNetworks:[]string{"192.168.70.65/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.default", "ksa.default.default"}, InterfaceName:"cali4af8e01ab34", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Feb 13 19:51:48.025905 containerd[1485]: 2025-02-13 19:51:48.009 [INFO][2700] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.70.65/32] ContainerID="833716071d04f23cf21879f87ba6a295be9ef8173ce0a4e486efa65b140fe931" Namespace="default" Pod="nginx-deployment-7fcdb87857-gmvsz" WorkloadEndpoint="10.0.0.110-k8s-nginx--deployment--7fcdb87857--gmvsz-eth0" Feb 13 19:51:48.025905 containerd[1485]: 2025-02-13 19:51:48.009 [INFO][2700] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali4af8e01ab34 ContainerID="833716071d04f23cf21879f87ba6a295be9ef8173ce0a4e486efa65b140fe931" Namespace="default" Pod="nginx-deployment-7fcdb87857-gmvsz" WorkloadEndpoint="10.0.0.110-k8s-nginx--deployment--7fcdb87857--gmvsz-eth0" Feb 13 19:51:48.025905 containerd[1485]: 2025-02-13 19:51:48.015 [INFO][2700] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="833716071d04f23cf21879f87ba6a295be9ef8173ce0a4e486efa65b140fe931" Namespace="default" Pod="nginx-deployment-7fcdb87857-gmvsz" WorkloadEndpoint="10.0.0.110-k8s-nginx--deployment--7fcdb87857--gmvsz-eth0" Feb 13 19:51:48.025905 containerd[1485]: 2025-02-13 
19:51:48.015 [INFO][2700] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="833716071d04f23cf21879f87ba6a295be9ef8173ce0a4e486efa65b140fe931" Namespace="default" Pod="nginx-deployment-7fcdb87857-gmvsz" WorkloadEndpoint="10.0.0.110-k8s-nginx--deployment--7fcdb87857--gmvsz-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"10.0.0.110-k8s-nginx--deployment--7fcdb87857--gmvsz-eth0", GenerateName:"nginx-deployment-7fcdb87857-", Namespace:"default", SelfLink:"", UID:"56474fd3-a840-47fd-8f80-78f96c78e294", ResourceVersion:"1086", Generation:0, CreationTimestamp:time.Date(2025, time.February, 13, 19, 51, 44, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app":"nginx", "pod-template-hash":"7fcdb87857", "projectcalico.org/namespace":"default", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"default"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"10.0.0.110", ContainerID:"833716071d04f23cf21879f87ba6a295be9ef8173ce0a4e486efa65b140fe931", Pod:"nginx-deployment-7fcdb87857-gmvsz", Endpoint:"eth0", ServiceAccountName:"default", IPNetworks:[]string{"192.168.70.65/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.default", "ksa.default.default"}, InterfaceName:"cali4af8e01ab34", MAC:"be:40:a7:a8:46:fe", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Feb 13 19:51:48.025905 containerd[1485]: 2025-02-13 19:51:48.023 [INFO][2700] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="833716071d04f23cf21879f87ba6a295be9ef8173ce0a4e486efa65b140fe931" Namespace="default" 
Pod="nginx-deployment-7fcdb87857-gmvsz" WorkloadEndpoint="10.0.0.110-k8s-nginx--deployment--7fcdb87857--gmvsz-eth0" Feb 13 19:51:48.050518 containerd[1485]: time="2025-02-13T19:51:48.050156346Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 13 19:51:48.050518 containerd[1485]: time="2025-02-13T19:51:48.050234566Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 13 19:51:48.050518 containerd[1485]: time="2025-02-13T19:51:48.050251959Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 19:51:48.050518 containerd[1485]: time="2025-02-13T19:51:48.050362331Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 19:51:48.072565 systemd[1]: Started cri-containerd-833716071d04f23cf21879f87ba6a295be9ef8173ce0a4e486efa65b140fe931.scope - libcontainer container 833716071d04f23cf21879f87ba6a295be9ef8173ce0a4e486efa65b140fe931. 
Feb 13 19:51:48.084616 systemd-resolved[1326]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Feb 13 19:51:48.108825 containerd[1485]: time="2025-02-13T19:51:48.108763940Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-7fcdb87857-gmvsz,Uid:56474fd3-a840-47fd-8f80-78f96c78e294,Namespace:default,Attempt:4,} returns sandbox id \"833716071d04f23cf21879f87ba6a295be9ef8173ce0a4e486efa65b140fe931\"" Feb 13 19:51:48.110112 containerd[1485]: time="2025-02-13T19:51:48.110017875Z" level=info msg="PullImage \"ghcr.io/flatcar/nginx:latest\"" Feb 13 19:51:48.123321 systemd-networkd[1401]: calif91f7a356c5: Link UP Feb 13 19:51:48.123798 systemd-networkd[1401]: calif91f7a356c5: Gained carrier Feb 13 19:51:48.135066 containerd[1485]: 2025-02-13 19:51:47.938 [INFO][2717] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Feb 13 19:51:48.135066 containerd[1485]: 2025-02-13 19:51:47.947 [INFO][2717] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {10.0.0.110-k8s-csi--node--driver--h8s4b-eth0 csi-node-driver- calico-system c3950c8a-700c-4c8b-8e8b-c3137c3cc22f 885 0 2025-02-13 19:51:26 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:84cddb44f k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s 10.0.0.110 csi-node-driver-h8s4b eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] calif91f7a356c5 [] []}} ContainerID="d0121e6d30da916fb83380b2fd55b516d57a4dc5ceb86f9fb4d0b73b2ae158d1" Namespace="calico-system" Pod="csi-node-driver-h8s4b" WorkloadEndpoint="10.0.0.110-k8s-csi--node--driver--h8s4b-" Feb 13 19:51:48.135066 containerd[1485]: 2025-02-13 19:51:47.947 [INFO][2717] cni-plugin/k8s.go 77: Extracted identifiers for 
CmdAddK8s ContainerID="d0121e6d30da916fb83380b2fd55b516d57a4dc5ceb86f9fb4d0b73b2ae158d1" Namespace="calico-system" Pod="csi-node-driver-h8s4b" WorkloadEndpoint="10.0.0.110-k8s-csi--node--driver--h8s4b-eth0" Feb 13 19:51:48.135066 containerd[1485]: 2025-02-13 19:51:47.973 [INFO][2736] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="d0121e6d30da916fb83380b2fd55b516d57a4dc5ceb86f9fb4d0b73b2ae158d1" HandleID="k8s-pod-network.d0121e6d30da916fb83380b2fd55b516d57a4dc5ceb86f9fb4d0b73b2ae158d1" Workload="10.0.0.110-k8s-csi--node--driver--h8s4b-eth0" Feb 13 19:51:48.135066 containerd[1485]: 2025-02-13 19:51:47.982 [INFO][2736] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="d0121e6d30da916fb83380b2fd55b516d57a4dc5ceb86f9fb4d0b73b2ae158d1" HandleID="k8s-pod-network.d0121e6d30da916fb83380b2fd55b516d57a4dc5ceb86f9fb4d0b73b2ae158d1" Workload="10.0.0.110-k8s-csi--node--driver--h8s4b-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002dd4f0), Attrs:map[string]string{"namespace":"calico-system", "node":"10.0.0.110", "pod":"csi-node-driver-h8s4b", "timestamp":"2025-02-13 19:51:47.973703172 +0000 UTC"}, Hostname:"10.0.0.110", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Feb 13 19:51:48.135066 containerd[1485]: 2025-02-13 19:51:47.982 [INFO][2736] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Feb 13 19:51:48.135066 containerd[1485]: 2025-02-13 19:51:48.005 [INFO][2736] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Feb 13 19:51:48.135066 containerd[1485]: 2025-02-13 19:51:48.005 [INFO][2736] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host '10.0.0.110' Feb 13 19:51:48.135066 containerd[1485]: 2025-02-13 19:51:48.085 [INFO][2736] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.d0121e6d30da916fb83380b2fd55b516d57a4dc5ceb86f9fb4d0b73b2ae158d1" host="10.0.0.110" Feb 13 19:51:48.135066 containerd[1485]: 2025-02-13 19:51:48.090 [INFO][2736] ipam/ipam.go 372: Looking up existing affinities for host host="10.0.0.110" Feb 13 19:51:48.135066 containerd[1485]: 2025-02-13 19:51:48.095 [INFO][2736] ipam/ipam.go 489: Trying affinity for 192.168.70.64/26 host="10.0.0.110" Feb 13 19:51:48.135066 containerd[1485]: 2025-02-13 19:51:48.097 [INFO][2736] ipam/ipam.go 155: Attempting to load block cidr=192.168.70.64/26 host="10.0.0.110" Feb 13 19:51:48.135066 containerd[1485]: 2025-02-13 19:51:48.100 [INFO][2736] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.70.64/26 host="10.0.0.110" Feb 13 19:51:48.135066 containerd[1485]: 2025-02-13 19:51:48.100 [INFO][2736] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.70.64/26 handle="k8s-pod-network.d0121e6d30da916fb83380b2fd55b516d57a4dc5ceb86f9fb4d0b73b2ae158d1" host="10.0.0.110" Feb 13 19:51:48.135066 containerd[1485]: 2025-02-13 19:51:48.102 [INFO][2736] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.d0121e6d30da916fb83380b2fd55b516d57a4dc5ceb86f9fb4d0b73b2ae158d1 Feb 13 19:51:48.135066 containerd[1485]: 2025-02-13 19:51:48.109 [INFO][2736] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.70.64/26 handle="k8s-pod-network.d0121e6d30da916fb83380b2fd55b516d57a4dc5ceb86f9fb4d0b73b2ae158d1" host="10.0.0.110" Feb 13 19:51:48.135066 containerd[1485]: 2025-02-13 19:51:48.116 [INFO][2736] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.70.66/26] block=192.168.70.64/26 
handle="k8s-pod-network.d0121e6d30da916fb83380b2fd55b516d57a4dc5ceb86f9fb4d0b73b2ae158d1" host="10.0.0.110" Feb 13 19:51:48.135066 containerd[1485]: 2025-02-13 19:51:48.116 [INFO][2736] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.70.66/26] handle="k8s-pod-network.d0121e6d30da916fb83380b2fd55b516d57a4dc5ceb86f9fb4d0b73b2ae158d1" host="10.0.0.110" Feb 13 19:51:48.135066 containerd[1485]: 2025-02-13 19:51:48.116 [INFO][2736] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Feb 13 19:51:48.135066 containerd[1485]: 2025-02-13 19:51:48.116 [INFO][2736] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.70.66/26] IPv6=[] ContainerID="d0121e6d30da916fb83380b2fd55b516d57a4dc5ceb86f9fb4d0b73b2ae158d1" HandleID="k8s-pod-network.d0121e6d30da916fb83380b2fd55b516d57a4dc5ceb86f9fb4d0b73b2ae158d1" Workload="10.0.0.110-k8s-csi--node--driver--h8s4b-eth0" Feb 13 19:51:48.135630 containerd[1485]: 2025-02-13 19:51:48.120 [INFO][2717] cni-plugin/k8s.go 386: Populated endpoint ContainerID="d0121e6d30da916fb83380b2fd55b516d57a4dc5ceb86f9fb4d0b73b2ae158d1" Namespace="calico-system" Pod="csi-node-driver-h8s4b" WorkloadEndpoint="10.0.0.110-k8s-csi--node--driver--h8s4b-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"10.0.0.110-k8s-csi--node--driver--h8s4b-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"c3950c8a-700c-4c8b-8e8b-c3137c3cc22f", ResourceVersion:"885", Generation:0, CreationTimestamp:time.Date(2025, time.February, 13, 19, 51, 26, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"84cddb44f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", 
"projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"10.0.0.110", ContainerID:"", Pod:"csi-node-driver-h8s4b", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.70.66/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"calif91f7a356c5", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Feb 13 19:51:48.135630 containerd[1485]: 2025-02-13 19:51:48.121 [INFO][2717] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.70.66/32] ContainerID="d0121e6d30da916fb83380b2fd55b516d57a4dc5ceb86f9fb4d0b73b2ae158d1" Namespace="calico-system" Pod="csi-node-driver-h8s4b" WorkloadEndpoint="10.0.0.110-k8s-csi--node--driver--h8s4b-eth0" Feb 13 19:51:48.135630 containerd[1485]: 2025-02-13 19:51:48.121 [INFO][2717] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calif91f7a356c5 ContainerID="d0121e6d30da916fb83380b2fd55b516d57a4dc5ceb86f9fb4d0b73b2ae158d1" Namespace="calico-system" Pod="csi-node-driver-h8s4b" WorkloadEndpoint="10.0.0.110-k8s-csi--node--driver--h8s4b-eth0" Feb 13 19:51:48.135630 containerd[1485]: 2025-02-13 19:51:48.123 [INFO][2717] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="d0121e6d30da916fb83380b2fd55b516d57a4dc5ceb86f9fb4d0b73b2ae158d1" Namespace="calico-system" Pod="csi-node-driver-h8s4b" WorkloadEndpoint="10.0.0.110-k8s-csi--node--driver--h8s4b-eth0" Feb 13 19:51:48.135630 containerd[1485]: 2025-02-13 19:51:48.123 [INFO][2717] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="d0121e6d30da916fb83380b2fd55b516d57a4dc5ceb86f9fb4d0b73b2ae158d1" Namespace="calico-system" 
Pod="csi-node-driver-h8s4b" WorkloadEndpoint="10.0.0.110-k8s-csi--node--driver--h8s4b-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"10.0.0.110-k8s-csi--node--driver--h8s4b-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"c3950c8a-700c-4c8b-8e8b-c3137c3cc22f", ResourceVersion:"885", Generation:0, CreationTimestamp:time.Date(2025, time.February, 13, 19, 51, 26, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"84cddb44f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"10.0.0.110", ContainerID:"d0121e6d30da916fb83380b2fd55b516d57a4dc5ceb86f9fb4d0b73b2ae158d1", Pod:"csi-node-driver-h8s4b", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.70.66/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"calif91f7a356c5", MAC:"ca:e3:23:cc:c7:74", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Feb 13 19:51:48.135630 containerd[1485]: 2025-02-13 19:51:48.132 [INFO][2717] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="d0121e6d30da916fb83380b2fd55b516d57a4dc5ceb86f9fb4d0b73b2ae158d1" Namespace="calico-system" Pod="csi-node-driver-h8s4b" WorkloadEndpoint="10.0.0.110-k8s-csi--node--driver--h8s4b-eth0" Feb 13 19:51:48.156125 containerd[1485]: 
time="2025-02-13T19:51:48.155431816Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 13 19:51:48.156125 containerd[1485]: time="2025-02-13T19:51:48.156057035Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 13 19:51:48.156125 containerd[1485]: time="2025-02-13T19:51:48.156072875Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 19:51:48.156311 containerd[1485]: time="2025-02-13T19:51:48.156198737Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 19:51:48.178662 systemd[1]: Started cri-containerd-d0121e6d30da916fb83380b2fd55b516d57a4dc5ceb86f9fb4d0b73b2ae158d1.scope - libcontainer container d0121e6d30da916fb83380b2fd55b516d57a4dc5ceb86f9fb4d0b73b2ae158d1. 
Feb 13 19:51:48.189935 systemd-resolved[1326]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Feb 13 19:51:48.200663 containerd[1485]: time="2025-02-13T19:51:48.200620687Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-h8s4b,Uid:c3950c8a-700c-4c8b-8e8b-c3137c3cc22f,Namespace:calico-system,Attempt:5,} returns sandbox id \"d0121e6d30da916fb83380b2fd55b516d57a4dc5ceb86f9fb4d0b73b2ae158d1\"" Feb 13 19:51:48.215801 kubelet[1788]: E0213 19:51:48.215755 1788 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 19:51:48.580414 kernel: bpftool[2982]: memfd_create() called without MFD_EXEC or MFD_NOEXEC_SEAL set Feb 13 19:51:48.805512 systemd-networkd[1401]: vxlan.calico: Link UP Feb 13 19:51:48.805524 systemd-networkd[1401]: vxlan.calico: Gained carrier Feb 13 19:51:48.892722 kubelet[1788]: E0213 19:51:48.892661 1788 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:51:49.216771 kubelet[1788]: E0213 19:51:49.216716 1788 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 19:51:49.834851 systemd-networkd[1401]: calif91f7a356c5: Gained IPv6LL Feb 13 19:51:49.898577 systemd-networkd[1401]: cali4af8e01ab34: Gained IPv6LL Feb 13 19:51:50.154514 systemd-networkd[1401]: vxlan.calico: Gained IPv6LL Feb 13 19:51:50.217408 kubelet[1788]: E0213 19:51:50.217328 1788 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 19:51:51.035772 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3216403395.mount: Deactivated successfully. 
Feb 13 19:51:51.218360 kubelet[1788]: E0213 19:51:51.218311 1788 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 19:51:52.218669 kubelet[1788]: E0213 19:51:52.218611 1788 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 19:51:52.918016 containerd[1485]: time="2025-02-13T19:51:52.917953457Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/nginx:latest\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:51:52.918773 containerd[1485]: time="2025-02-13T19:51:52.918727725Z" level=info msg="stop pulling image ghcr.io/flatcar/nginx:latest: active requests=0, bytes read=73054493" Feb 13 19:51:52.920105 containerd[1485]: time="2025-02-13T19:51:52.920066890Z" level=info msg="ImageCreate event name:\"sha256:fe94eb5f0c9c8d0ca277aa8cd5940f1faf5970175bf373932babc578545deda8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:51:52.922938 containerd[1485]: time="2025-02-13T19:51:52.922898683Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/nginx@sha256:d9bc3da999da9f147f1277c7b18292486847e8f39f95fcf81d914d0c22815faf\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:51:52.924188 containerd[1485]: time="2025-02-13T19:51:52.924134852Z" level=info msg="Pulled image \"ghcr.io/flatcar/nginx:latest\" with image id \"sha256:fe94eb5f0c9c8d0ca277aa8cd5940f1faf5970175bf373932babc578545deda8\", repo tag \"ghcr.io/flatcar/nginx:latest\", repo digest \"ghcr.io/flatcar/nginx@sha256:d9bc3da999da9f147f1277c7b18292486847e8f39f95fcf81d914d0c22815faf\", size \"73054371\" in 4.814078132s" Feb 13 19:51:52.924253 containerd[1485]: time="2025-02-13T19:51:52.924186189Z" level=info msg="PullImage \"ghcr.io/flatcar/nginx:latest\" returns image reference \"sha256:fe94eb5f0c9c8d0ca277aa8cd5940f1faf5970175bf373932babc578545deda8\"" Feb 13 19:51:52.925533 containerd[1485]: 
time="2025-02-13T19:51:52.925372133Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.29.1\"" Feb 13 19:51:52.926511 containerd[1485]: time="2025-02-13T19:51:52.926483913Z" level=info msg="CreateContainer within sandbox \"833716071d04f23cf21879f87ba6a295be9ef8173ce0a4e486efa65b140fe931\" for container &ContainerMetadata{Name:nginx,Attempt:0,}" Feb 13 19:51:52.944220 containerd[1485]: time="2025-02-13T19:51:52.944143228Z" level=info msg="CreateContainer within sandbox \"833716071d04f23cf21879f87ba6a295be9ef8173ce0a4e486efa65b140fe931\" for &ContainerMetadata{Name:nginx,Attempt:0,} returns container id \"cc8d7f61f41ecd46663b1f82ff5dfde6a83a377070dce73f98e3e3071053680e\"" Feb 13 19:51:52.944771 containerd[1485]: time="2025-02-13T19:51:52.944732663Z" level=info msg="StartContainer for \"cc8d7f61f41ecd46663b1f82ff5dfde6a83a377070dce73f98e3e3071053680e\"" Feb 13 19:51:53.021583 systemd[1]: Started cri-containerd-cc8d7f61f41ecd46663b1f82ff5dfde6a83a377070dce73f98e3e3071053680e.scope - libcontainer container cc8d7f61f41ecd46663b1f82ff5dfde6a83a377070dce73f98e3e3071053680e. 
Feb 13 19:51:53.077862 containerd[1485]: time="2025-02-13T19:51:53.077780942Z" level=info msg="StartContainer for \"cc8d7f61f41ecd46663b1f82ff5dfde6a83a377070dce73f98e3e3071053680e\" returns successfully" Feb 13 19:51:53.219756 kubelet[1788]: E0213 19:51:53.219589 1788 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 19:51:53.967796 kubelet[1788]: I0213 19:51:53.967699 1788 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="default/nginx-deployment-7fcdb87857-gmvsz" podStartSLOduration=5.152322345 podStartE2EDuration="9.967676205s" podCreationTimestamp="2025-02-13 19:51:44 +0000 UTC" firstStartedPulling="2025-02-13 19:51:48.109771802 +0000 UTC m=+22.237432681" lastFinishedPulling="2025-02-13 19:51:52.925125662 +0000 UTC m=+27.052786541" observedRunningTime="2025-02-13 19:51:53.96761084 +0000 UTC m=+28.095271719" watchObservedRunningTime="2025-02-13 19:51:53.967676205 +0000 UTC m=+28.095337084" Feb 13 19:51:54.220380 kubelet[1788]: E0213 19:51:54.220238 1788 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 19:51:55.169925 containerd[1485]: time="2025-02-13T19:51:55.169876711Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:51:55.170652 containerd[1485]: time="2025-02-13T19:51:55.170613112Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.29.1: active requests=0, bytes read=7902632" Feb 13 19:51:55.171859 containerd[1485]: time="2025-02-13T19:51:55.171797645Z" level=info msg="ImageCreate event name:\"sha256:bda8c42e04758c4f061339e213f50ccdc7502c4176fbf631aa12357e62b63540\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:51:55.173976 containerd[1485]: time="2025-02-13T19:51:55.173949138Z" level=info msg="ImageCreate event 
name:\"ghcr.io/flatcar/calico/csi@sha256:eaa7e01fb16b603c155a67b81f16992281db7f831684c7b2081d3434587a7ff3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:51:55.174548 containerd[1485]: time="2025-02-13T19:51:55.174522839Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/csi:v3.29.1\" with image id \"sha256:bda8c42e04758c4f061339e213f50ccdc7502c4176fbf631aa12357e62b63540\", repo tag \"ghcr.io/flatcar/calico/csi:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/csi@sha256:eaa7e01fb16b603c155a67b81f16992281db7f831684c7b2081d3434587a7ff3\", size \"9395716\" in 2.249098657s" Feb 13 19:51:55.174592 containerd[1485]: time="2025-02-13T19:51:55.174550762Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.29.1\" returns image reference \"sha256:bda8c42e04758c4f061339e213f50ccdc7502c4176fbf631aa12357e62b63540\"" Feb 13 19:51:55.176526 containerd[1485]: time="2025-02-13T19:51:55.176495321Z" level=info msg="CreateContainer within sandbox \"d0121e6d30da916fb83380b2fd55b516d57a4dc5ceb86f9fb4d0b73b2ae158d1\" for container &ContainerMetadata{Name:calico-csi,Attempt:0,}" Feb 13 19:51:55.194320 containerd[1485]: time="2025-02-13T19:51:55.194269041Z" level=info msg="CreateContainer within sandbox \"d0121e6d30da916fb83380b2fd55b516d57a4dc5ceb86f9fb4d0b73b2ae158d1\" for &ContainerMetadata{Name:calico-csi,Attempt:0,} returns container id \"978c60cde59df272d8281ea3afd77b5f397a59cadcb7f943d3323f9928e8642b\"" Feb 13 19:51:55.195166 containerd[1485]: time="2025-02-13T19:51:55.194668339Z" level=info msg="StartContainer for \"978c60cde59df272d8281ea3afd77b5f397a59cadcb7f943d3323f9928e8642b\"" Feb 13 19:51:55.220484 kubelet[1788]: E0213 19:51:55.220356 1788 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 19:51:55.232547 systemd[1]: Started cri-containerd-978c60cde59df272d8281ea3afd77b5f397a59cadcb7f943d3323f9928e8642b.scope - libcontainer container 
978c60cde59df272d8281ea3afd77b5f397a59cadcb7f943d3323f9928e8642b. Feb 13 19:51:55.343785 containerd[1485]: time="2025-02-13T19:51:55.343728922Z" level=info msg="StartContainer for \"978c60cde59df272d8281ea3afd77b5f397a59cadcb7f943d3323f9928e8642b\" returns successfully" Feb 13 19:51:55.345055 containerd[1485]: time="2025-02-13T19:51:55.345029587Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\"" Feb 13 19:51:56.004766 systemd[1]: Created slice kubepods-besteffort-pod25ae13a0_022e_476f_957e_b46b37151ade.slice - libcontainer container kubepods-besteffort-pod25ae13a0_022e_476f_957e_b46b37151ade.slice. Feb 13 19:51:56.037857 kubelet[1788]: I0213 19:51:56.037792 1788 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"data\" (UniqueName: \"kubernetes.io/empty-dir/25ae13a0-022e-476f-957e-b46b37151ade-data\") pod \"nfs-server-provisioner-0\" (UID: \"25ae13a0-022e-476f-957e-b46b37151ade\") " pod="default/nfs-server-provisioner-0" Feb 13 19:51:56.037857 kubelet[1788]: I0213 19:51:56.037835 1788 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mpxlw\" (UniqueName: \"kubernetes.io/projected/25ae13a0-022e-476f-957e-b46b37151ade-kube-api-access-mpxlw\") pod \"nfs-server-provisioner-0\" (UID: \"25ae13a0-022e-476f-957e-b46b37151ade\") " pod="default/nfs-server-provisioner-0" Feb 13 19:51:56.220764 kubelet[1788]: E0213 19:51:56.220702 1788 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 19:51:56.308370 containerd[1485]: time="2025-02-13T19:51:56.308171653Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nfs-server-provisioner-0,Uid:25ae13a0-022e-476f-957e-b46b37151ade,Namespace:default,Attempt:0,}" Feb 13 19:51:56.544121 systemd-networkd[1401]: cali60e51b789ff: Link UP Feb 13 19:51:56.544339 systemd-networkd[1401]: cali60e51b789ff: Gained carrier Feb 13 
19:51:56.559034 containerd[1485]: 2025-02-13 19:51:56.470 [INFO][3215] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {10.0.0.110-k8s-nfs--server--provisioner--0-eth0 nfs-server-provisioner- default 25ae13a0-022e-476f-957e-b46b37151ade 1176 0 2025-02-13 19:51:55 +0000 UTC map[app:nfs-server-provisioner apps.kubernetes.io/pod-index:0 chart:nfs-server-provisioner-1.8.0 controller-revision-hash:nfs-server-provisioner-d5cbb7f57 heritage:Helm projectcalico.org/namespace:default projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:nfs-server-provisioner release:nfs-server-provisioner statefulset.kubernetes.io/pod-name:nfs-server-provisioner-0] map[] [] [] []} {k8s 10.0.0.110 nfs-server-provisioner-0 eth0 nfs-server-provisioner [] [] [kns.default ksa.default.nfs-server-provisioner] cali60e51b789ff [{nfs TCP 2049 0 } {nfs-udp UDP 2049 0 } {nlockmgr TCP 32803 0 } {nlockmgr-udp UDP 32803 0 } {mountd TCP 20048 0 } {mountd-udp UDP 20048 0 } {rquotad TCP 875 0 } {rquotad-udp UDP 875 0 } {rpcbind TCP 111 0 } {rpcbind-udp UDP 111 0 } {statd TCP 662 0 } {statd-udp UDP 662 0 }] []}} ContainerID="c4a26907c0604a1ce930e4dc7b4d202a33a99789d46c9f001f28b5eb420fa96c" Namespace="default" Pod="nfs-server-provisioner-0" WorkloadEndpoint="10.0.0.110-k8s-nfs--server--provisioner--0-" Feb 13 19:51:56.559034 containerd[1485]: 2025-02-13 19:51:56.470 [INFO][3215] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="c4a26907c0604a1ce930e4dc7b4d202a33a99789d46c9f001f28b5eb420fa96c" Namespace="default" Pod="nfs-server-provisioner-0" WorkloadEndpoint="10.0.0.110-k8s-nfs--server--provisioner--0-eth0" Feb 13 19:51:56.559034 containerd[1485]: 2025-02-13 19:51:56.497 [INFO][3229] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="c4a26907c0604a1ce930e4dc7b4d202a33a99789d46c9f001f28b5eb420fa96c" HandleID="k8s-pod-network.c4a26907c0604a1ce930e4dc7b4d202a33a99789d46c9f001f28b5eb420fa96c" 
Workload="10.0.0.110-k8s-nfs--server--provisioner--0-eth0" Feb 13 19:51:56.559034 containerd[1485]: 2025-02-13 19:51:56.507 [INFO][3229] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="c4a26907c0604a1ce930e4dc7b4d202a33a99789d46c9f001f28b5eb420fa96c" HandleID="k8s-pod-network.c4a26907c0604a1ce930e4dc7b4d202a33a99789d46c9f001f28b5eb420fa96c" Workload="10.0.0.110-k8s-nfs--server--provisioner--0-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002f7cb0), Attrs:map[string]string{"namespace":"default", "node":"10.0.0.110", "pod":"nfs-server-provisioner-0", "timestamp":"2025-02-13 19:51:56.497747712 +0000 UTC"}, Hostname:"10.0.0.110", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Feb 13 19:51:56.559034 containerd[1485]: 2025-02-13 19:51:56.507 [INFO][3229] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Feb 13 19:51:56.559034 containerd[1485]: 2025-02-13 19:51:56.507 [INFO][3229] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Feb 13 19:51:56.559034 containerd[1485]: 2025-02-13 19:51:56.507 [INFO][3229] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host '10.0.0.110' Feb 13 19:51:56.559034 containerd[1485]: 2025-02-13 19:51:56.510 [INFO][3229] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.c4a26907c0604a1ce930e4dc7b4d202a33a99789d46c9f001f28b5eb420fa96c" host="10.0.0.110" Feb 13 19:51:56.559034 containerd[1485]: 2025-02-13 19:51:56.515 [INFO][3229] ipam/ipam.go 372: Looking up existing affinities for host host="10.0.0.110" Feb 13 19:51:56.559034 containerd[1485]: 2025-02-13 19:51:56.519 [INFO][3229] ipam/ipam.go 489: Trying affinity for 192.168.70.64/26 host="10.0.0.110" Feb 13 19:51:56.559034 containerd[1485]: 2025-02-13 19:51:56.521 [INFO][3229] ipam/ipam.go 155: Attempting to load block cidr=192.168.70.64/26 host="10.0.0.110" Feb 13 19:51:56.559034 containerd[1485]: 2025-02-13 19:51:56.524 [INFO][3229] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.70.64/26 host="10.0.0.110" Feb 13 19:51:56.559034 containerd[1485]: 2025-02-13 19:51:56.524 [INFO][3229] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.70.64/26 handle="k8s-pod-network.c4a26907c0604a1ce930e4dc7b4d202a33a99789d46c9f001f28b5eb420fa96c" host="10.0.0.110" Feb 13 19:51:56.559034 containerd[1485]: 2025-02-13 19:51:56.525 [INFO][3229] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.c4a26907c0604a1ce930e4dc7b4d202a33a99789d46c9f001f28b5eb420fa96c Feb 13 19:51:56.559034 containerd[1485]: 2025-02-13 19:51:56.531 [INFO][3229] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.70.64/26 handle="k8s-pod-network.c4a26907c0604a1ce930e4dc7b4d202a33a99789d46c9f001f28b5eb420fa96c" host="10.0.0.110" Feb 13 19:51:56.559034 containerd[1485]: 2025-02-13 19:51:56.537 [INFO][3229] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.70.67/26] block=192.168.70.64/26 
handle="k8s-pod-network.c4a26907c0604a1ce930e4dc7b4d202a33a99789d46c9f001f28b5eb420fa96c" host="10.0.0.110" Feb 13 19:51:56.559034 containerd[1485]: 2025-02-13 19:51:56.537 [INFO][3229] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.70.67/26] handle="k8s-pod-network.c4a26907c0604a1ce930e4dc7b4d202a33a99789d46c9f001f28b5eb420fa96c" host="10.0.0.110" Feb 13 19:51:56.559034 containerd[1485]: 2025-02-13 19:51:56.537 [INFO][3229] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Feb 13 19:51:56.559034 containerd[1485]: 2025-02-13 19:51:56.537 [INFO][3229] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.70.67/26] IPv6=[] ContainerID="c4a26907c0604a1ce930e4dc7b4d202a33a99789d46c9f001f28b5eb420fa96c" HandleID="k8s-pod-network.c4a26907c0604a1ce930e4dc7b4d202a33a99789d46c9f001f28b5eb420fa96c" Workload="10.0.0.110-k8s-nfs--server--provisioner--0-eth0" Feb 13 19:51:56.560261 containerd[1485]: 2025-02-13 19:51:56.539 [INFO][3215] cni-plugin/k8s.go 386: Populated endpoint ContainerID="c4a26907c0604a1ce930e4dc7b4d202a33a99789d46c9f001f28b5eb420fa96c" Namespace="default" Pod="nfs-server-provisioner-0" WorkloadEndpoint="10.0.0.110-k8s-nfs--server--provisioner--0-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"10.0.0.110-k8s-nfs--server--provisioner--0-eth0", GenerateName:"nfs-server-provisioner-", Namespace:"default", SelfLink:"", UID:"25ae13a0-022e-476f-957e-b46b37151ade", ResourceVersion:"1176", Generation:0, CreationTimestamp:time.Date(2025, time.February, 13, 19, 51, 55, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app":"nfs-server-provisioner", "apps.kubernetes.io/pod-index":"0", "chart":"nfs-server-provisioner-1.8.0", "controller-revision-hash":"nfs-server-provisioner-d5cbb7f57", "heritage":"Helm", "projectcalico.org/namespace":"default", 
"projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"nfs-server-provisioner", "release":"nfs-server-provisioner", "statefulset.kubernetes.io/pod-name":"nfs-server-provisioner-0"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"10.0.0.110", ContainerID:"", Pod:"nfs-server-provisioner-0", Endpoint:"eth0", ServiceAccountName:"nfs-server-provisioner", IPNetworks:[]string{"192.168.70.67/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.default", "ksa.default.nfs-server-provisioner"}, InterfaceName:"cali60e51b789ff", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"nfs", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x801, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"nfs-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x801, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"nlockmgr", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x8023, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"nlockmgr-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x8023, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"mountd", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x4e50, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"mountd-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x4e50, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"rquotad", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x36b, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"rquotad-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x36b, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"rpcbind", 
Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x6f, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"rpcbind-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x6f, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"statd", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x296, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"statd-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x296, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Feb 13 19:51:56.560261 containerd[1485]: 2025-02-13 19:51:56.540 [INFO][3215] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.70.67/32] ContainerID="c4a26907c0604a1ce930e4dc7b4d202a33a99789d46c9f001f28b5eb420fa96c" Namespace="default" Pod="nfs-server-provisioner-0" WorkloadEndpoint="10.0.0.110-k8s-nfs--server--provisioner--0-eth0" Feb 13 19:51:56.560261 containerd[1485]: 2025-02-13 19:51:56.540 [INFO][3215] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali60e51b789ff ContainerID="c4a26907c0604a1ce930e4dc7b4d202a33a99789d46c9f001f28b5eb420fa96c" Namespace="default" Pod="nfs-server-provisioner-0" WorkloadEndpoint="10.0.0.110-k8s-nfs--server--provisioner--0-eth0" Feb 13 19:51:56.560261 containerd[1485]: 2025-02-13 19:51:56.542 [INFO][3215] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="c4a26907c0604a1ce930e4dc7b4d202a33a99789d46c9f001f28b5eb420fa96c" Namespace="default" Pod="nfs-server-provisioner-0" WorkloadEndpoint="10.0.0.110-k8s-nfs--server--provisioner--0-eth0" Feb 13 19:51:56.560468 containerd[1485]: 2025-02-13 19:51:56.542 [INFO][3215] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="c4a26907c0604a1ce930e4dc7b4d202a33a99789d46c9f001f28b5eb420fa96c" Namespace="default" Pod="nfs-server-provisioner-0" WorkloadEndpoint="10.0.0.110-k8s-nfs--server--provisioner--0-eth0" 
endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"10.0.0.110-k8s-nfs--server--provisioner--0-eth0", GenerateName:"nfs-server-provisioner-", Namespace:"default", SelfLink:"", UID:"25ae13a0-022e-476f-957e-b46b37151ade", ResourceVersion:"1176", Generation:0, CreationTimestamp:time.Date(2025, time.February, 13, 19, 51, 55, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app":"nfs-server-provisioner", "apps.kubernetes.io/pod-index":"0", "chart":"nfs-server-provisioner-1.8.0", "controller-revision-hash":"nfs-server-provisioner-d5cbb7f57", "heritage":"Helm", "projectcalico.org/namespace":"default", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"nfs-server-provisioner", "release":"nfs-server-provisioner", "statefulset.kubernetes.io/pod-name":"nfs-server-provisioner-0"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"10.0.0.110", ContainerID:"c4a26907c0604a1ce930e4dc7b4d202a33a99789d46c9f001f28b5eb420fa96c", Pod:"nfs-server-provisioner-0", Endpoint:"eth0", ServiceAccountName:"nfs-server-provisioner", IPNetworks:[]string{"192.168.70.67/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.default", "ksa.default.nfs-server-provisioner"}, InterfaceName:"cali60e51b789ff", MAC:"f6:8a:e2:2a:43:69", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"nfs", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x801, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"nfs-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x801, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"nlockmgr", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, 
StrVal:"TCP"}, Port:0x8023, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"nlockmgr-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x8023, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"mountd", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x4e50, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"mountd-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x4e50, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"rquotad", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x36b, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"rquotad-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x36b, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"rpcbind", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x6f, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"rpcbind-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x6f, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"statd", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x296, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"statd-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x296, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Feb 13 19:51:56.560468 containerd[1485]: 2025-02-13 19:51:56.553 [INFO][3215] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="c4a26907c0604a1ce930e4dc7b4d202a33a99789d46c9f001f28b5eb420fa96c" Namespace="default" Pod="nfs-server-provisioner-0" WorkloadEndpoint="10.0.0.110-k8s-nfs--server--provisioner--0-eth0" Feb 13 19:51:56.650437 containerd[1485]: time="2025-02-13T19:51:56.650226388Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 13 19:51:56.650892 containerd[1485]: time="2025-02-13T19:51:56.650855133Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 13 19:51:56.650892 containerd[1485]: time="2025-02-13T19:51:56.650881073Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 19:51:56.650992 containerd[1485]: time="2025-02-13T19:51:56.650966114Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 19:51:56.674544 systemd[1]: Started cri-containerd-c4a26907c0604a1ce930e4dc7b4d202a33a99789d46c9f001f28b5eb420fa96c.scope - libcontainer container c4a26907c0604a1ce930e4dc7b4d202a33a99789d46c9f001f28b5eb420fa96c. Feb 13 19:51:56.687018 systemd-resolved[1326]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Feb 13 19:51:56.714907 containerd[1485]: time="2025-02-13T19:51:56.714857990Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nfs-server-provisioner-0,Uid:25ae13a0-022e-476f-957e-b46b37151ade,Namespace:default,Attempt:0,} returns sandbox id \"c4a26907c0604a1ce930e4dc7b4d202a33a99789d46c9f001f28b5eb420fa96c\"" Feb 13 19:51:57.221409 kubelet[1788]: E0213 19:51:57.221339 1788 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 19:51:57.330915 containerd[1485]: time="2025-02-13T19:51:57.330843546Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:51:57.342234 containerd[1485]: time="2025-02-13T19:51:57.342156589Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1: active requests=0, bytes read=10501081" Feb 13 
19:51:57.350486 containerd[1485]: time="2025-02-13T19:51:57.350443134Z" level=info msg="ImageCreate event name:\"sha256:8b7d18f262d5cf6a6343578ad0db68a140c4c9989d9e02c58c27cb5d2c70320f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:51:57.353100 containerd[1485]: time="2025-02-13T19:51:57.353040407Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar@sha256:a338da9488cbaa83c78457c3d7354d84149969c0480e88dd768e036632ff5b76\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:51:57.353871 containerd[1485]: time="2025-02-13T19:51:57.353793887Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\" with image id \"sha256:8b7d18f262d5cf6a6343578ad0db68a140c4c9989d9e02c58c27cb5d2c70320f\", repo tag \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/node-driver-registrar@sha256:a338da9488cbaa83c78457c3d7354d84149969c0480e88dd768e036632ff5b76\", size \"11994117\" in 2.008733302s" Feb 13 19:51:57.353871 containerd[1485]: time="2025-02-13T19:51:57.353847689Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\" returns image reference \"sha256:8b7d18f262d5cf6a6343578ad0db68a140c4c9989d9e02c58c27cb5d2c70320f\"" Feb 13 19:51:57.354983 containerd[1485]: time="2025-02-13T19:51:57.354960012Z" level=info msg="PullImage \"registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8\"" Feb 13 19:51:57.356011 containerd[1485]: time="2025-02-13T19:51:57.355984777Z" level=info msg="CreateContainer within sandbox \"d0121e6d30da916fb83380b2fd55b516d57a4dc5ceb86f9fb4d0b73b2ae158d1\" for container &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,}" Feb 13 19:51:57.375702 containerd[1485]: time="2025-02-13T19:51:57.375626586Z" level=info msg="CreateContainer within sandbox \"d0121e6d30da916fb83380b2fd55b516d57a4dc5ceb86f9fb4d0b73b2ae158d1\" for &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,} 
returns container id \"da611b5ac531093c4843533696ea786c4e123dce1866408ec661881e094bcfd6\"" Feb 13 19:51:57.376304 containerd[1485]: time="2025-02-13T19:51:57.376257504Z" level=info msg="StartContainer for \"da611b5ac531093c4843533696ea786c4e123dce1866408ec661881e094bcfd6\"" Feb 13 19:51:57.408559 systemd[1]: Started cri-containerd-da611b5ac531093c4843533696ea786c4e123dce1866408ec661881e094bcfd6.scope - libcontainer container da611b5ac531093c4843533696ea786c4e123dce1866408ec661881e094bcfd6. Feb 13 19:51:57.476533 containerd[1485]: time="2025-02-13T19:51:57.476343187Z" level=info msg="StartContainer for \"da611b5ac531093c4843533696ea786c4e123dce1866408ec661881e094bcfd6\" returns successfully" Feb 13 19:51:57.706731 systemd-networkd[1401]: cali60e51b789ff: Gained IPv6LL Feb 13 19:51:57.822996 kubelet[1788]: I0213 19:51:57.822871 1788 csi_plugin.go:100] kubernetes.io/csi: Trying to validate a new CSI Driver with name: csi.tigera.io endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock versions: 1.0.0 Feb 13 19:51:57.822996 kubelet[1788]: I0213 19:51:57.822910 1788 csi_plugin.go:113] kubernetes.io/csi: Register new plugin with name: csi.tigera.io at endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock Feb 13 19:51:57.990476 update_engine[1467]: I20250213 19:51:57.990324 1467 update_attempter.cc:509] Updating boot flags... 
Feb 13 19:51:58.039204 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 38 scanned by (udev-worker) (3341) Feb 13 19:51:58.086495 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 38 scanned by (udev-worker) (3342) Feb 13 19:51:58.133438 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 38 scanned by (udev-worker) (3342) Feb 13 19:51:58.221666 kubelet[1788]: E0213 19:51:58.221557 1788 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 19:51:59.221843 kubelet[1788]: E0213 19:51:59.221786 1788 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 19:52:00.222106 kubelet[1788]: E0213 19:52:00.222049 1788 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 19:52:01.062572 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount112890395.mount: Deactivated successfully. 
Feb 13 19:52:01.222670 kubelet[1788]: E0213 19:52:01.222596 1788 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 19:52:02.223577 kubelet[1788]: E0213 19:52:02.223526 1788 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 19:52:03.223888 kubelet[1788]: E0213 19:52:03.223821 1788 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 19:52:04.224984 kubelet[1788]: E0213 19:52:04.224923 1788 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 19:52:05.039367 containerd[1485]: time="2025-02-13T19:52:05.039272210Z" level=info msg="ImageCreate event name:\"registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:52:05.044295 containerd[1485]: time="2025-02-13T19:52:05.044152359Z" level=info msg="stop pulling image registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8: active requests=0, bytes read=91039406" Feb 13 19:52:05.045956 containerd[1485]: time="2025-02-13T19:52:05.045922144Z" level=info msg="ImageCreate event name:\"sha256:fd0b16f70b66b72bcb2f91d556fa33eba02729c44ffc5f2c16130e7f9fbed3c4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:52:05.049404 containerd[1485]: time="2025-02-13T19:52:05.049205837Z" level=info msg="ImageCreate event name:\"registry.k8s.io/sig-storage/nfs-provisioner@sha256:c825f3d5e28bde099bd7a3daace28772d412c9157ad47fa752a9ad0baafc118d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:52:05.050146 containerd[1485]: time="2025-02-13T19:52:05.050093594Z" level=info msg="Pulled image \"registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8\" with image id \"sha256:fd0b16f70b66b72bcb2f91d556fa33eba02729c44ffc5f2c16130e7f9fbed3c4\", repo tag 
\"registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8\", repo digest \"registry.k8s.io/sig-storage/nfs-provisioner@sha256:c825f3d5e28bde099bd7a3daace28772d412c9157ad47fa752a9ad0baafc118d\", size \"91036984\" in 7.695102634s" Feb 13 19:52:05.050146 containerd[1485]: time="2025-02-13T19:52:05.050125865Z" level=info msg="PullImage \"registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8\" returns image reference \"sha256:fd0b16f70b66b72bcb2f91d556fa33eba02729c44ffc5f2c16130e7f9fbed3c4\"" Feb 13 19:52:05.053125 containerd[1485]: time="2025-02-13T19:52:05.053050140Z" level=info msg="CreateContainer within sandbox \"c4a26907c0604a1ce930e4dc7b4d202a33a99789d46c9f001f28b5eb420fa96c\" for container &ContainerMetadata{Name:nfs-server-provisioner,Attempt:0,}" Feb 13 19:52:05.069580 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2745080015.mount: Deactivated successfully. Feb 13 19:52:05.072930 containerd[1485]: time="2025-02-13T19:52:05.072868683Z" level=info msg="CreateContainer within sandbox \"c4a26907c0604a1ce930e4dc7b4d202a33a99789d46c9f001f28b5eb420fa96c\" for &ContainerMetadata{Name:nfs-server-provisioner,Attempt:0,} returns container id \"5c776093a11370de9aa101c527272d11a1cd8e9348df1409e90a48d494314ca0\"" Feb 13 19:52:05.073589 containerd[1485]: time="2025-02-13T19:52:05.073558446Z" level=info msg="StartContainer for \"5c776093a11370de9aa101c527272d11a1cd8e9348df1409e90a48d494314ca0\"" Feb 13 19:52:05.112792 systemd[1]: Started cri-containerd-5c776093a11370de9aa101c527272d11a1cd8e9348df1409e90a48d494314ca0.scope - libcontainer container 5c776093a11370de9aa101c527272d11a1cd8e9348df1409e90a48d494314ca0. 
Feb 13 19:52:05.143509 containerd[1485]: time="2025-02-13T19:52:05.143454959Z" level=info msg="StartContainer for \"5c776093a11370de9aa101c527272d11a1cd8e9348df1409e90a48d494314ca0\" returns successfully"
Feb 13 19:52:05.225903 kubelet[1788]: E0213 19:52:05.225847 1788 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 19:52:06.050521 kubelet[1788]: I0213 19:52:06.050441 1788 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/csi-node-driver-h8s4b" podStartSLOduration=30.897514301 podStartE2EDuration="40.050419812s" podCreationTimestamp="2025-02-13 19:51:26 +0000 UTC" firstStartedPulling="2025-02-13 19:51:48.201809055 +0000 UTC m=+22.329469934" lastFinishedPulling="2025-02-13 19:51:57.354714566 +0000 UTC m=+31.482375445" observedRunningTime="2025-02-13 19:51:58.06769011 +0000 UTC m=+32.195350989" watchObservedRunningTime="2025-02-13 19:52:06.050419812 +0000 UTC m=+40.178080711"
Feb 13 19:52:06.050711 kubelet[1788]: I0213 19:52:06.050635 1788 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="default/nfs-server-provisioner-0" podStartSLOduration=2.715514276 podStartE2EDuration="11.050628937s" podCreationTimestamp="2025-02-13 19:51:55 +0000 UTC" firstStartedPulling="2025-02-13 19:51:56.716127493 +0000 UTC m=+30.843788383" lastFinishedPulling="2025-02-13 19:52:05.051242165 +0000 UTC m=+39.178903044" observedRunningTime="2025-02-13 19:52:06.050168037 +0000 UTC m=+40.177828916" watchObservedRunningTime="2025-02-13 19:52:06.050628937 +0000 UTC m=+40.178289816"
Feb 13 19:52:06.200549 kubelet[1788]: E0213 19:52:06.200456 1788 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 19:52:06.226618 kubelet[1788]: E0213 19:52:06.226505 1788 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 19:52:07.227187 kubelet[1788]: E0213 19:52:07.227124 1788 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 19:52:08.227475 kubelet[1788]: E0213 19:52:08.227409 1788 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 19:52:09.227900 kubelet[1788]: E0213 19:52:09.227839 1788 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 19:52:10.229005 kubelet[1788]: E0213 19:52:10.228825 1788 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 19:52:11.229885 kubelet[1788]: E0213 19:52:11.229800 1788 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 19:52:12.230340 kubelet[1788]: E0213 19:52:12.230249 1788 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 19:52:13.231479 kubelet[1788]: E0213 19:52:13.231370 1788 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 19:52:14.232190 kubelet[1788]: E0213 19:52:14.232126 1788 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 19:52:14.532028 systemd[1]: Created slice kubepods-besteffort-pode4b398e4_09bd_42d0_a2a0_bb66ad6e0d96.slice - libcontainer container kubepods-besteffort-pode4b398e4_09bd_42d0_a2a0_bb66ad6e0d96.slice.
Feb 13 19:52:14.560333 kubelet[1788]: I0213 19:52:14.560285 1788 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-aa134a7f-2917-419e-b8f2-7f2e33e7fabb\" (UniqueName: \"kubernetes.io/nfs/e4b398e4-09bd-42d0-a2a0-bb66ad6e0d96-pvc-aa134a7f-2917-419e-b8f2-7f2e33e7fabb\") pod \"test-pod-1\" (UID: \"e4b398e4-09bd-42d0-a2a0-bb66ad6e0d96\") " pod="default/test-pod-1"
Feb 13 19:52:14.560333 kubelet[1788]: I0213 19:52:14.560331 1788 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8mbn5\" (UniqueName: \"kubernetes.io/projected/e4b398e4-09bd-42d0-a2a0-bb66ad6e0d96-kube-api-access-8mbn5\") pod \"test-pod-1\" (UID: \"e4b398e4-09bd-42d0-a2a0-bb66ad6e0d96\") " pod="default/test-pod-1"
Feb 13 19:52:14.687422 kernel: FS-Cache: Loaded
Feb 13 19:52:14.754737 kernel: RPC: Registered named UNIX socket transport module.
Feb 13 19:52:14.754863 kernel: RPC: Registered udp transport module.
Feb 13 19:52:14.754891 kernel: RPC: Registered tcp transport module.
Feb 13 19:52:14.754935 kernel: RPC: Registered tcp-with-tls transport module.
Feb 13 19:52:14.755418 kernel: RPC: Registered tcp NFSv4.1 backchannel transport module.
Feb 13 19:52:15.075652 kernel: NFS: Registering the id_resolver key type
Feb 13 19:52:15.075783 kernel: Key type id_resolver registered
Feb 13 19:52:15.075804 kernel: Key type id_legacy registered
Feb 13 19:52:15.103949 nfsidmap[3492]: nss_getpwnam: name 'root@nfs-server-provisioner.default.svc.cluster.local' does not map into domain 'localdomain'
Feb 13 19:52:15.108876 nfsidmap[3495]: nss_name_to_gid: name 'root@nfs-server-provisioner.default.svc.cluster.local' does not map into domain 'localdomain'
Feb 13 19:52:15.135309 containerd[1485]: time="2025-02-13T19:52:15.135250050Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:test-pod-1,Uid:e4b398e4-09bd-42d0-a2a0-bb66ad6e0d96,Namespace:default,Attempt:0,}"
Feb 13 19:52:15.232581 kubelet[1788]: E0213 19:52:15.232515 1788 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 19:52:16.094740 systemd-networkd[1401]: cali5ec59c6bf6e: Link UP
Feb 13 19:52:16.095788 systemd-networkd[1401]: cali5ec59c6bf6e: Gained carrier
Feb 13 19:52:16.233709 kubelet[1788]: E0213 19:52:16.233642 1788 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 19:52:16.371724 containerd[1485]: 2025-02-13 19:52:15.682 [INFO][3498] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {10.0.0.110-k8s-test--pod--1-eth0 default e4b398e4-09bd-42d0-a2a0-bb66ad6e0d96 1281 0 2025-02-13 19:51:56 +0000 UTC map[projectcalico.org/namespace:default projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:default] map[] [] [] []} {k8s 10.0.0.110 test-pod-1 eth0 default [] [] [kns.default ksa.default.default] cali5ec59c6bf6e [] []}} ContainerID="f376cc4fd71f5e83cd36441579e75c3c1f9bb0214a0cc92e480f6795a0d2bbdf" Namespace="default" Pod="test-pod-1" WorkloadEndpoint="10.0.0.110-k8s-test--pod--1-"
Feb 13 19:52:16.371724 containerd[1485]: 2025-02-13 19:52:15.682 [INFO][3498] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="f376cc4fd71f5e83cd36441579e75c3c1f9bb0214a0cc92e480f6795a0d2bbdf" Namespace="default" Pod="test-pod-1" WorkloadEndpoint="10.0.0.110-k8s-test--pod--1-eth0"
Feb 13 19:52:16.371724 containerd[1485]: 2025-02-13 19:52:15.709 [INFO][3511] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="f376cc4fd71f5e83cd36441579e75c3c1f9bb0214a0cc92e480f6795a0d2bbdf" HandleID="k8s-pod-network.f376cc4fd71f5e83cd36441579e75c3c1f9bb0214a0cc92e480f6795a0d2bbdf" Workload="10.0.0.110-k8s-test--pod--1-eth0"
Feb 13 19:52:16.371724 containerd[1485]: 2025-02-13 19:52:15.719 [INFO][3511] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="f376cc4fd71f5e83cd36441579e75c3c1f9bb0214a0cc92e480f6795a0d2bbdf" HandleID="k8s-pod-network.f376cc4fd71f5e83cd36441579e75c3c1f9bb0214a0cc92e480f6795a0d2bbdf" Workload="10.0.0.110-k8s-test--pod--1-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000050700), Attrs:map[string]string{"namespace":"default", "node":"10.0.0.110", "pod":"test-pod-1", "timestamp":"2025-02-13 19:52:15.709156087 +0000 UTC"}, Hostname:"10.0.0.110", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"}
Feb 13 19:52:16.371724 containerd[1485]: 2025-02-13 19:52:15.719 [INFO][3511] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock.
Feb 13 19:52:16.371724 containerd[1485]: 2025-02-13 19:52:15.719 [INFO][3511] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock.
Feb 13 19:52:16.371724 containerd[1485]: 2025-02-13 19:52:15.719 [INFO][3511] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host '10.0.0.110'
Feb 13 19:52:16.371724 containerd[1485]: 2025-02-13 19:52:15.722 [INFO][3511] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.f376cc4fd71f5e83cd36441579e75c3c1f9bb0214a0cc92e480f6795a0d2bbdf" host="10.0.0.110"
Feb 13 19:52:16.371724 containerd[1485]: 2025-02-13 19:52:15.726 [INFO][3511] ipam/ipam.go 372: Looking up existing affinities for host host="10.0.0.110"
Feb 13 19:52:16.371724 containerd[1485]: 2025-02-13 19:52:15.731 [INFO][3511] ipam/ipam.go 489: Trying affinity for 192.168.70.64/26 host="10.0.0.110"
Feb 13 19:52:16.371724 containerd[1485]: 2025-02-13 19:52:15.813 [INFO][3511] ipam/ipam.go 155: Attempting to load block cidr=192.168.70.64/26 host="10.0.0.110"
Feb 13 19:52:16.371724 containerd[1485]: 2025-02-13 19:52:15.817 [INFO][3511] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.70.64/26 host="10.0.0.110"
Feb 13 19:52:16.371724 containerd[1485]: 2025-02-13 19:52:15.817 [INFO][3511] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.70.64/26 handle="k8s-pod-network.f376cc4fd71f5e83cd36441579e75c3c1f9bb0214a0cc92e480f6795a0d2bbdf" host="10.0.0.110"
Feb 13 19:52:16.371724 containerd[1485]: 2025-02-13 19:52:15.819 [INFO][3511] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.f376cc4fd71f5e83cd36441579e75c3c1f9bb0214a0cc92e480f6795a0d2bbdf
Feb 13 19:52:16.371724 containerd[1485]: 2025-02-13 19:52:16.003 [INFO][3511] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.70.64/26 handle="k8s-pod-network.f376cc4fd71f5e83cd36441579e75c3c1f9bb0214a0cc92e480f6795a0d2bbdf" host="10.0.0.110"
Feb 13 19:52:16.371724 containerd[1485]: 2025-02-13 19:52:16.090 [INFO][3511] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.70.68/26] block=192.168.70.64/26 handle="k8s-pod-network.f376cc4fd71f5e83cd36441579e75c3c1f9bb0214a0cc92e480f6795a0d2bbdf" host="10.0.0.110"
Feb 13 19:52:16.371724 containerd[1485]: 2025-02-13 19:52:16.090 [INFO][3511] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.70.68/26] handle="k8s-pod-network.f376cc4fd71f5e83cd36441579e75c3c1f9bb0214a0cc92e480f6795a0d2bbdf" host="10.0.0.110"
Feb 13 19:52:16.371724 containerd[1485]: 2025-02-13 19:52:16.090 [INFO][3511] ipam/ipam_plugin.go 374: Released host-wide IPAM lock.
Feb 13 19:52:16.371724 containerd[1485]: 2025-02-13 19:52:16.090 [INFO][3511] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.70.68/26] IPv6=[] ContainerID="f376cc4fd71f5e83cd36441579e75c3c1f9bb0214a0cc92e480f6795a0d2bbdf" HandleID="k8s-pod-network.f376cc4fd71f5e83cd36441579e75c3c1f9bb0214a0cc92e480f6795a0d2bbdf" Workload="10.0.0.110-k8s-test--pod--1-eth0"
Feb 13 19:52:16.371724 containerd[1485]: 2025-02-13 19:52:16.092 [INFO][3498] cni-plugin/k8s.go 386: Populated endpoint ContainerID="f376cc4fd71f5e83cd36441579e75c3c1f9bb0214a0cc92e480f6795a0d2bbdf" Namespace="default" Pod="test-pod-1" WorkloadEndpoint="10.0.0.110-k8s-test--pod--1-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"10.0.0.110-k8s-test--pod--1-eth0", GenerateName:"", Namespace:"default", SelfLink:"", UID:"e4b398e4-09bd-42d0-a2a0-bb66ad6e0d96", ResourceVersion:"1281", Generation:0, CreationTimestamp:time.Date(2025, time.February, 13, 19, 51, 56, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"projectcalico.org/namespace":"default", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"default"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"10.0.0.110", ContainerID:"", Pod:"test-pod-1", Endpoint:"eth0", ServiceAccountName:"default", IPNetworks:[]string{"192.168.70.68/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.default", "ksa.default.default"}, InterfaceName:"cali5ec59c6bf6e", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}}
Feb 13 19:52:16.372804 containerd[1485]: 2025-02-13 19:52:16.092 [INFO][3498] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.70.68/32] ContainerID="f376cc4fd71f5e83cd36441579e75c3c1f9bb0214a0cc92e480f6795a0d2bbdf" Namespace="default" Pod="test-pod-1" WorkloadEndpoint="10.0.0.110-k8s-test--pod--1-eth0"
Feb 13 19:52:16.372804 containerd[1485]: 2025-02-13 19:52:16.092 [INFO][3498] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali5ec59c6bf6e ContainerID="f376cc4fd71f5e83cd36441579e75c3c1f9bb0214a0cc92e480f6795a0d2bbdf" Namespace="default" Pod="test-pod-1" WorkloadEndpoint="10.0.0.110-k8s-test--pod--1-eth0"
Feb 13 19:52:16.372804 containerd[1485]: 2025-02-13 19:52:16.095 [INFO][3498] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="f376cc4fd71f5e83cd36441579e75c3c1f9bb0214a0cc92e480f6795a0d2bbdf" Namespace="default" Pod="test-pod-1" WorkloadEndpoint="10.0.0.110-k8s-test--pod--1-eth0"
Feb 13 19:52:16.372804 containerd[1485]: 2025-02-13 19:52:16.096 [INFO][3498] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="f376cc4fd71f5e83cd36441579e75c3c1f9bb0214a0cc92e480f6795a0d2bbdf" Namespace="default" Pod="test-pod-1" WorkloadEndpoint="10.0.0.110-k8s-test--pod--1-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"10.0.0.110-k8s-test--pod--1-eth0", GenerateName:"", Namespace:"default", SelfLink:"", UID:"e4b398e4-09bd-42d0-a2a0-bb66ad6e0d96", ResourceVersion:"1281", Generation:0, CreationTimestamp:time.Date(2025, time.February, 13, 19, 51, 56, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"projectcalico.org/namespace":"default", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"default"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"10.0.0.110", ContainerID:"f376cc4fd71f5e83cd36441579e75c3c1f9bb0214a0cc92e480f6795a0d2bbdf", Pod:"test-pod-1", Endpoint:"eth0", ServiceAccountName:"default", IPNetworks:[]string{"192.168.70.68/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.default", "ksa.default.default"}, InterfaceName:"cali5ec59c6bf6e", MAC:"96:45:98:c9:29:ef", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}}
Feb 13 19:52:16.372804 containerd[1485]: 2025-02-13 19:52:16.368 [INFO][3498] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="f376cc4fd71f5e83cd36441579e75c3c1f9bb0214a0cc92e480f6795a0d2bbdf" Namespace="default" Pod="test-pod-1" WorkloadEndpoint="10.0.0.110-k8s-test--pod--1-eth0"
Feb 13 19:52:17.234330 kubelet[1788]: E0213 19:52:17.234266 1788 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 19:52:17.290564 systemd-networkd[1401]: cali5ec59c6bf6e: Gained IPv6LL
Feb 13 19:52:18.234970 kubelet[1788]: E0213 19:52:18.234904 1788 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 19:52:19.235642 kubelet[1788]: E0213 19:52:19.235567 1788 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 19:52:19.271410 kubelet[1788]: E0213 19:52:19.269211 1788 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 13 19:52:19.412320 containerd[1485]: time="2025-02-13T19:52:19.412240744Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Feb 13 19:52:19.412813 containerd[1485]: time="2025-02-13T19:52:19.412289896Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Feb 13 19:52:19.412813 containerd[1485]: time="2025-02-13T19:52:19.412345612Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Feb 13 19:52:19.412813 containerd[1485]: time="2025-02-13T19:52:19.412524919Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Feb 13 19:52:19.434555 systemd[1]: Started cri-containerd-f376cc4fd71f5e83cd36441579e75c3c1f9bb0214a0cc92e480f6795a0d2bbdf.scope - libcontainer container f376cc4fd71f5e83cd36441579e75c3c1f9bb0214a0cc92e480f6795a0d2bbdf.
Feb 13 19:52:19.445812 systemd-resolved[1326]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address
Feb 13 19:52:19.473131 containerd[1485]: time="2025-02-13T19:52:19.473065331Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:test-pod-1,Uid:e4b398e4-09bd-42d0-a2a0-bb66ad6e0d96,Namespace:default,Attempt:0,} returns sandbox id \"f376cc4fd71f5e83cd36441579e75c3c1f9bb0214a0cc92e480f6795a0d2bbdf\""
Feb 13 19:52:19.474486 containerd[1485]: time="2025-02-13T19:52:19.474450957Z" level=info msg="PullImage \"ghcr.io/flatcar/nginx:latest\""
Feb 13 19:52:20.235758 kubelet[1788]: E0213 19:52:20.235681 1788 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 19:52:20.571276 containerd[1485]: time="2025-02-13T19:52:20.570832323Z" level=info msg="ImageUpdate event name:\"ghcr.io/flatcar/nginx:latest\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Feb 13 19:52:20.574725 containerd[1485]: time="2025-02-13T19:52:20.574674778Z" level=info msg="stop pulling image ghcr.io/flatcar/nginx:latest: active requests=0, bytes read=61"
Feb 13 19:52:20.578801 containerd[1485]: time="2025-02-13T19:52:20.578728831Z" level=info msg="Pulled image \"ghcr.io/flatcar/nginx:latest\" with image id \"sha256:fe94eb5f0c9c8d0ca277aa8cd5940f1faf5970175bf373932babc578545deda8\", repo tag \"ghcr.io/flatcar/nginx:latest\", repo digest \"ghcr.io/flatcar/nginx@sha256:d9bc3da999da9f147f1277c7b18292486847e8f39f95fcf81d914d0c22815faf\", size \"73054371\" in 1.104232618s"
Feb 13 19:52:20.578801 containerd[1485]: time="2025-02-13T19:52:20.578785818Z" level=info msg="PullImage \"ghcr.io/flatcar/nginx:latest\" returns image reference \"sha256:fe94eb5f0c9c8d0ca277aa8cd5940f1faf5970175bf373932babc578545deda8\""
Feb 13 19:52:20.581259 containerd[1485]: time="2025-02-13T19:52:20.581220628Z" level=info msg="CreateContainer within sandbox \"f376cc4fd71f5e83cd36441579e75c3c1f9bb0214a0cc92e480f6795a0d2bbdf\" for container &ContainerMetadata{Name:test,Attempt:0,}"
Feb 13 19:52:20.684285 containerd[1485]: time="2025-02-13T19:52:20.684197125Z" level=info msg="CreateContainer within sandbox \"f376cc4fd71f5e83cd36441579e75c3c1f9bb0214a0cc92e480f6795a0d2bbdf\" for &ContainerMetadata{Name:test,Attempt:0,} returns container id \"58a26116c4e78cbf95a31623c1372455a43216192ac1f997d64b411144db13b0\""
Feb 13 19:52:20.685025 containerd[1485]: time="2025-02-13T19:52:20.684974328Z" level=info msg="StartContainer for \"58a26116c4e78cbf95a31623c1372455a43216192ac1f997d64b411144db13b0\""
Feb 13 19:52:20.720710 systemd[1]: Started cri-containerd-58a26116c4e78cbf95a31623c1372455a43216192ac1f997d64b411144db13b0.scope - libcontainer container 58a26116c4e78cbf95a31623c1372455a43216192ac1f997d64b411144db13b0.
Feb 13 19:52:20.763237 containerd[1485]: time="2025-02-13T19:52:20.763168595Z" level=info msg="StartContainer for \"58a26116c4e78cbf95a31623c1372455a43216192ac1f997d64b411144db13b0\" returns successfully"
Feb 13 19:52:21.009155 kubelet[1788]: I0213 19:52:21.009083 1788 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="default/test-pod-1" podStartSLOduration=23.903593373 podStartE2EDuration="25.009061985s" podCreationTimestamp="2025-02-13 19:51:56 +0000 UTC" firstStartedPulling="2025-02-13 19:52:19.474146015 +0000 UTC m=+53.601806894" lastFinishedPulling="2025-02-13 19:52:20.579614627 +0000 UTC m=+54.707275506" observedRunningTime="2025-02-13 19:52:21.00890001 +0000 UTC m=+55.136560899" watchObservedRunningTime="2025-02-13 19:52:21.009061985 +0000 UTC m=+55.136722864"
Feb 13 19:52:21.236286 kubelet[1788]: E0213 19:52:21.236210 1788 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 19:52:22.237402 kubelet[1788]: E0213 19:52:22.237351 1788 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 19:52:23.238072 kubelet[1788]: E0213 19:52:23.238003 1788 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 19:52:24.238634 kubelet[1788]: E0213 19:52:24.238567 1788 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"