Feb 13 15:27:30.888517 kernel: Linux version 6.6.71-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 13.3.1_p20240614 p1) 13.3.1 20240614, GNU ld (Gentoo 2.42 p6) 2.42.0) #1 SMP PREEMPT_DYNAMIC Thu Feb 13 13:54:58 -00 2025
Feb 13 15:27:30.888538 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=cd73eba291b8356dfc2c39f651cabef9206685f772c8949188fd366788d672c2
Feb 13 15:27:30.888549 kernel: BIOS-provided physical RAM map:
Feb 13 15:27:30.888555 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009ffff] usable
Feb 13 15:27:30.888561 kernel: BIOS-e820: [mem 0x0000000000100000-0x00000000007fffff] usable
Feb 13 15:27:30.888567 kernel: BIOS-e820: [mem 0x0000000000800000-0x0000000000807fff] ACPI NVS
Feb 13 15:27:30.888575 kernel: BIOS-e820: [mem 0x0000000000808000-0x000000000080afff] usable
Feb 13 15:27:30.888581 kernel: BIOS-e820: [mem 0x000000000080b000-0x000000000080bfff] ACPI NVS
Feb 13 15:27:30.888587 kernel: BIOS-e820: [mem 0x000000000080c000-0x0000000000810fff] usable
Feb 13 15:27:30.888593 kernel: BIOS-e820: [mem 0x0000000000811000-0x00000000008fffff] ACPI NVS
Feb 13 15:27:30.888601 kernel: BIOS-e820: [mem 0x0000000000900000-0x000000009bd3efff] usable
Feb 13 15:27:30.888607 kernel: BIOS-e820: [mem 0x000000009bd3f000-0x000000009bdfffff] reserved
Feb 13 15:27:30.888614 kernel: BIOS-e820: [mem 0x000000009be00000-0x000000009c8ecfff] usable
Feb 13 15:27:30.888620 kernel: BIOS-e820: [mem 0x000000009c8ed000-0x000000009cb6cfff] reserved
Feb 13 15:27:30.888628 kernel: BIOS-e820: [mem 0x000000009cb6d000-0x000000009cb7efff] ACPI data
Feb 13 15:27:30.888634 kernel: BIOS-e820: [mem 0x000000009cb7f000-0x000000009cbfefff] ACPI NVS
Feb 13 15:27:30.888643 kernel: BIOS-e820: [mem 0x000000009cbff000-0x000000009ce91fff] usable
Feb 13 15:27:30.888650 kernel: BIOS-e820: [mem 0x000000009ce92000-0x000000009ce95fff] reserved
Feb 13 15:27:30.888656 kernel: BIOS-e820: [mem 0x000000009ce96000-0x000000009ce97fff] ACPI NVS
Feb 13 15:27:30.888663 kernel: BIOS-e820: [mem 0x000000009ce98000-0x000000009cedbfff] usable
Feb 13 15:27:30.888669 kernel: BIOS-e820: [mem 0x000000009cedc000-0x000000009cf5ffff] reserved
Feb 13 15:27:30.888676 kernel: BIOS-e820: [mem 0x000000009cf60000-0x000000009cffffff] ACPI NVS
Feb 13 15:27:30.888682 kernel: BIOS-e820: [mem 0x00000000e0000000-0x00000000efffffff] reserved
Feb 13 15:27:30.888689 kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved
Feb 13 15:27:30.888695 kernel: BIOS-e820: [mem 0x00000000ffc00000-0x00000000ffffffff] reserved
Feb 13 15:27:30.888702 kernel: BIOS-e820: [mem 0x000000fd00000000-0x000000ffffffffff] reserved
Feb 13 15:27:30.888708 kernel: NX (Execute Disable) protection: active
Feb 13 15:27:30.888717 kernel: APIC: Static calls initialized
Feb 13 15:27:30.888723 kernel: e820: update [mem 0x9b351018-0x9b35ac57] usable ==> usable
Feb 13 15:27:30.888730 kernel: e820: update [mem 0x9b351018-0x9b35ac57] usable ==> usable
Feb 13 15:27:30.888736 kernel: e820: update [mem 0x9b314018-0x9b350e57] usable ==> usable
Feb 13 15:27:30.888743 kernel: e820: update [mem 0x9b314018-0x9b350e57] usable ==> usable
Feb 13 15:27:30.888749 kernel: extended physical RAM map:
Feb 13 15:27:30.888756 kernel: reserve setup_data: [mem 0x0000000000000000-0x000000000009ffff] usable
Feb 13 15:27:30.888762 kernel: reserve setup_data: [mem 0x0000000000100000-0x00000000007fffff] usable
Feb 13 15:27:30.888769 kernel: reserve setup_data: [mem 0x0000000000800000-0x0000000000807fff] ACPI NVS
Feb 13 15:27:30.888776 kernel: reserve setup_data: [mem 0x0000000000808000-0x000000000080afff] usable
Feb 13 15:27:30.888782 kernel: reserve setup_data: [mem 0x000000000080b000-0x000000000080bfff] ACPI NVS
Feb 13 15:27:30.888791 kernel: reserve setup_data: [mem 0x000000000080c000-0x0000000000810fff] usable
Feb 13 15:27:30.888798 kernel: reserve setup_data: [mem 0x0000000000811000-0x00000000008fffff] ACPI NVS
Feb 13 15:27:30.888808 kernel: reserve setup_data: [mem 0x0000000000900000-0x000000009b314017] usable
Feb 13 15:27:30.888815 kernel: reserve setup_data: [mem 0x000000009b314018-0x000000009b350e57] usable
Feb 13 15:27:30.888822 kernel: reserve setup_data: [mem 0x000000009b350e58-0x000000009b351017] usable
Feb 13 15:27:30.888828 kernel: reserve setup_data: [mem 0x000000009b351018-0x000000009b35ac57] usable
Feb 13 15:27:30.888835 kernel: reserve setup_data: [mem 0x000000009b35ac58-0x000000009bd3efff] usable
Feb 13 15:27:30.888844 kernel: reserve setup_data: [mem 0x000000009bd3f000-0x000000009bdfffff] reserved
Feb 13 15:27:30.888851 kernel: reserve setup_data: [mem 0x000000009be00000-0x000000009c8ecfff] usable
Feb 13 15:27:30.888858 kernel: reserve setup_data: [mem 0x000000009c8ed000-0x000000009cb6cfff] reserved
Feb 13 15:27:30.888876 kernel: reserve setup_data: [mem 0x000000009cb6d000-0x000000009cb7efff] ACPI data
Feb 13 15:27:30.888883 kernel: reserve setup_data: [mem 0x000000009cb7f000-0x000000009cbfefff] ACPI NVS
Feb 13 15:27:30.888890 kernel: reserve setup_data: [mem 0x000000009cbff000-0x000000009ce91fff] usable
Feb 13 15:27:30.888897 kernel: reserve setup_data: [mem 0x000000009ce92000-0x000000009ce95fff] reserved
Feb 13 15:27:30.888904 kernel: reserve setup_data: [mem 0x000000009ce96000-0x000000009ce97fff] ACPI NVS
Feb 13 15:27:30.888911 kernel: reserve setup_data: [mem 0x000000009ce98000-0x000000009cedbfff] usable
Feb 13 15:27:30.888920 kernel: reserve setup_data: [mem 0x000000009cedc000-0x000000009cf5ffff] reserved
Feb 13 15:27:30.888927 kernel: reserve setup_data: [mem 0x000000009cf60000-0x000000009cffffff] ACPI NVS
Feb 13 15:27:30.888934 kernel: reserve setup_data: [mem 0x00000000e0000000-0x00000000efffffff] reserved
Feb 13 15:27:30.888941 kernel: reserve setup_data: [mem 0x00000000feffc000-0x00000000feffffff] reserved
Feb 13 15:27:30.888948 kernel: reserve setup_data: [mem 0x00000000ffc00000-0x00000000ffffffff] reserved
Feb 13 15:27:30.888955 kernel: reserve setup_data: [mem 0x000000fd00000000-0x000000ffffffffff] reserved
Feb 13 15:27:30.888962 kernel: efi: EFI v2.7 by EDK II
Feb 13 15:27:30.888969 kernel: efi: SMBIOS=0x9c988000 ACPI=0x9cb7e000 ACPI 2.0=0x9cb7e014 MEMATTR=0x9ba0d198 RNG=0x9cb73018
Feb 13 15:27:30.888975 kernel: random: crng init done
Feb 13 15:27:30.888983 kernel: efi: Remove mem142: MMIO range=[0xffc00000-0xffffffff] (4MB) from e820 map
Feb 13 15:27:30.888989 kernel: e820: remove [mem 0xffc00000-0xffffffff] reserved
Feb 13 15:27:30.888998 kernel: secureboot: Secure boot disabled
Feb 13 15:27:30.889005 kernel: SMBIOS 2.8 present.
Feb 13 15:27:30.889012 kernel: DMI: QEMU Standard PC (Q35 + ICH9, 2009), BIOS unknown 02/02/2022
Feb 13 15:27:30.889019 kernel: Hypervisor detected: KVM
Feb 13 15:27:30.889026 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00
Feb 13 15:27:30.889033 kernel: kvm-clock: using sched offset of 2570432823 cycles
Feb 13 15:27:30.889040 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
Feb 13 15:27:30.889047 kernel: tsc: Detected 2794.748 MHz processor
Feb 13 15:27:30.889054 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
Feb 13 15:27:30.889062 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
Feb 13 15:27:30.889069 kernel: last_pfn = 0x9cedc max_arch_pfn = 0x400000000
Feb 13 15:27:30.889078 kernel: MTRR map: 4 entries (2 fixed + 2 variable; max 18), built from 8 variable MTRRs
Feb 13 15:27:30.889085 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
Feb 13 15:27:30.889092 kernel: Using GB pages for direct mapping
Feb 13 15:27:30.889099 kernel: ACPI: Early table checksum verification disabled
Feb 13 15:27:30.889106 kernel: ACPI: RSDP 0x000000009CB7E014 000024 (v02 BOCHS )
Feb 13 15:27:30.889113 kernel: ACPI: XSDT 0x000000009CB7D0E8 000054 (v01 BOCHS BXPC 00000001 01000013)
Feb 13 15:27:30.889120 kernel: ACPI: FACP 0x000000009CB79000 0000F4 (v03 BOCHS BXPC 00000001 BXPC 00000001)
Feb 13 15:27:30.889128 kernel: ACPI: DSDT 0x000000009CB7A000 0021A8 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Feb 13 15:27:30.889134 kernel: ACPI: FACS 0x000000009CBDD000 000040
Feb 13 15:27:30.889144 kernel: ACPI: APIC 0x000000009CB78000 000090 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Feb 13 15:27:30.889151 kernel: ACPI: HPET 0x000000009CB77000 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Feb 13 15:27:30.889158 kernel: ACPI: MCFG 0x000000009CB76000 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001)
Feb 13 15:27:30.889165 kernel: ACPI: WAET 0x000000009CB75000 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Feb 13 15:27:30.889172 kernel: ACPI: BGRT 0x000000009CB74000 000038 (v01 INTEL EDK2 00000002 01000013)
Feb 13 15:27:30.889179 kernel: ACPI: Reserving FACP table memory at [mem 0x9cb79000-0x9cb790f3]
Feb 13 15:27:30.889186 kernel: ACPI: Reserving DSDT table memory at [mem 0x9cb7a000-0x9cb7c1a7]
Feb 13 15:27:30.889193 kernel: ACPI: Reserving FACS table memory at [mem 0x9cbdd000-0x9cbdd03f]
Feb 13 15:27:30.889202 kernel: ACPI: Reserving APIC table memory at [mem 0x9cb78000-0x9cb7808f]
Feb 13 15:27:30.889209 kernel: ACPI: Reserving HPET table memory at [mem 0x9cb77000-0x9cb77037]
Feb 13 15:27:30.889216 kernel: ACPI: Reserving MCFG table memory at [mem 0x9cb76000-0x9cb7603b]
Feb 13 15:27:30.889223 kernel: ACPI: Reserving WAET table memory at [mem 0x9cb75000-0x9cb75027]
Feb 13 15:27:30.889230 kernel: ACPI: Reserving BGRT table memory at [mem 0x9cb74000-0x9cb74037]
Feb 13 15:27:30.889237 kernel: No NUMA configuration found
Feb 13 15:27:30.889256 kernel: Faking a node at [mem 0x0000000000000000-0x000000009cedbfff]
Feb 13 15:27:30.889274 kernel: NODE_DATA(0) allocated [mem 0x9ce3a000-0x9ce3ffff]
Feb 13 15:27:30.889282 kernel: Zone ranges:
Feb 13 15:27:30.889289 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff]
Feb 13 15:27:30.889299 kernel: DMA32 [mem 0x0000000001000000-0x000000009cedbfff]
Feb 13 15:27:30.889306 kernel: Normal empty
Feb 13 15:27:30.889313 kernel: Movable zone start for each node
Feb 13 15:27:30.889320 kernel: Early memory node ranges
Feb 13 15:27:30.889327 kernel: node 0: [mem 0x0000000000001000-0x000000000009ffff]
Feb 13 15:27:30.889334 kernel: node 0: [mem 0x0000000000100000-0x00000000007fffff]
Feb 13 15:27:30.889340 kernel: node 0: [mem 0x0000000000808000-0x000000000080afff]
Feb 13 15:27:30.889347 kernel: node 0: [mem 0x000000000080c000-0x0000000000810fff]
Feb 13 15:27:30.889354 kernel: node 0: [mem 0x0000000000900000-0x000000009bd3efff]
Feb 13 15:27:30.889363 kernel: node 0: [mem 0x000000009be00000-0x000000009c8ecfff]
Feb 13 15:27:30.889370 kernel: node 0: [mem 0x000000009cbff000-0x000000009ce91fff]
Feb 13 15:27:30.889377 kernel: node 0: [mem 0x000000009ce98000-0x000000009cedbfff]
Feb 13 15:27:30.889384 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000009cedbfff]
Feb 13 15:27:30.889391 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Feb 13 15:27:30.889399 kernel: On node 0, zone DMA: 96 pages in unavailable ranges
Feb 13 15:27:30.889413 kernel: On node 0, zone DMA: 8 pages in unavailable ranges
Feb 13 15:27:30.889422 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Feb 13 15:27:30.889429 kernel: On node 0, zone DMA: 239 pages in unavailable ranges
Feb 13 15:27:30.889436 kernel: On node 0, zone DMA32: 193 pages in unavailable ranges
Feb 13 15:27:30.889444 kernel: On node 0, zone DMA32: 786 pages in unavailable ranges
Feb 13 15:27:30.889451 kernel: On node 0, zone DMA32: 6 pages in unavailable ranges
Feb 13 15:27:30.889460 kernel: On node 0, zone DMA32: 12580 pages in unavailable ranges
Feb 13 15:27:30.889468 kernel: ACPI: PM-Timer IO Port: 0x608
Feb 13 15:27:30.889475 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1])
Feb 13 15:27:30.889482 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23
Feb 13 15:27:30.889490 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl)
Feb 13 15:27:30.889500 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level)
Feb 13 15:27:30.889507 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Feb 13 15:27:30.889514 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level)
Feb 13 15:27:30.889522 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level)
Feb 13 15:27:30.889529 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Feb 13 15:27:30.889536 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000
Feb 13 15:27:30.889544 kernel: TSC deadline timer available
Feb 13 15:27:30.889551 kernel: smpboot: Allowing 4 CPUs, 0 hotplug CPUs
Feb 13 15:27:30.889558 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write()
Feb 13 15:27:30.889568 kernel: kvm-guest: KVM setup pv remote TLB flush
Feb 13 15:27:30.889575 kernel: kvm-guest: setup PV sched yield
Feb 13 15:27:30.889582 kernel: [mem 0x9d000000-0xdfffffff] available for PCI devices
Feb 13 15:27:30.889590 kernel: Booting paravirtualized kernel on KVM
Feb 13 15:27:30.889597 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Feb 13 15:27:30.889605 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:4 nr_cpu_ids:4 nr_node_ids:1
Feb 13 15:27:30.889612 kernel: percpu: Embedded 58 pages/cpu s197032 r8192 d32344 u524288
Feb 13 15:27:30.889620 kernel: pcpu-alloc: s197032 r8192 d32344 u524288 alloc=1*2097152
Feb 13 15:27:30.889627 kernel: pcpu-alloc: [0] 0 1 2 3
Feb 13 15:27:30.889634 kernel: kvm-guest: PV spinlocks enabled
Feb 13 15:27:30.889644 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear)
Feb 13 15:27:30.889652 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=cd73eba291b8356dfc2c39f651cabef9206685f772c8949188fd366788d672c2
Feb 13 15:27:30.889660 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Feb 13 15:27:30.889667 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Feb 13 15:27:30.889675 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Feb 13 15:27:30.889682 kernel: Fallback order for Node 0: 0
Feb 13 15:27:30.889690 kernel: Built 1 zonelists, mobility grouping on. Total pages: 629460
Feb 13 15:27:30.889697 kernel: Policy zone: DMA32
Feb 13 15:27:30.889707 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Feb 13 15:27:30.889714 kernel: Memory: 2389768K/2565800K available (12288K kernel code, 2299K rwdata, 22736K rodata, 42976K init, 2216K bss, 175776K reserved, 0K cma-reserved)
Feb 13 15:27:30.889722 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1
Feb 13 15:27:30.889729 kernel: ftrace: allocating 37920 entries in 149 pages
Feb 13 15:27:30.889737 kernel: ftrace: allocated 149 pages with 4 groups
Feb 13 15:27:30.889744 kernel: Dynamic Preempt: voluntary
Feb 13 15:27:30.889752 kernel: rcu: Preemptible hierarchical RCU implementation.
Feb 13 15:27:30.889759 kernel: rcu: RCU event tracing is enabled.
Feb 13 15:27:30.889767 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=4.
Feb 13 15:27:30.889777 kernel: Trampoline variant of Tasks RCU enabled.
Feb 13 15:27:30.889784 kernel: Rude variant of Tasks RCU enabled.
Feb 13 15:27:30.889792 kernel: Tracing variant of Tasks RCU enabled.
Feb 13 15:27:30.889799 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Feb 13 15:27:30.889807 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4
Feb 13 15:27:30.889814 kernel: NR_IRQS: 33024, nr_irqs: 456, preallocated irqs: 16
Feb 13 15:27:30.889822 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Feb 13 15:27:30.889829 kernel: Console: colour dummy device 80x25
Feb 13 15:27:30.889837 kernel: printk: console [ttyS0] enabled
Feb 13 15:27:30.889846 kernel: ACPI: Core revision 20230628
Feb 13 15:27:30.889854 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 19112604467 ns
Feb 13 15:27:30.889869 kernel: APIC: Switch to symmetric I/O mode setup
Feb 13 15:27:30.889876 kernel: x2apic enabled
Feb 13 15:27:30.889884 kernel: APIC: Switched APIC routing to: physical x2apic
Feb 13 15:27:30.889891 kernel: kvm-guest: APIC: send_IPI_mask() replaced with kvm_send_ipi_mask()
Feb 13 15:27:30.889899 kernel: kvm-guest: APIC: send_IPI_mask_allbutself() replaced with kvm_send_ipi_mask_allbutself()
Feb 13 15:27:30.889907 kernel: kvm-guest: setup PV IPIs
Feb 13 15:27:30.889914 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1
Feb 13 15:27:30.889924 kernel: tsc: Marking TSC unstable due to TSCs unsynchronized
Feb 13 15:27:30.889931 kernel: Calibrating delay loop (skipped) preset value.. 5589.49 BogoMIPS (lpj=2794748)
Feb 13 15:27:30.889939 kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated
Feb 13 15:27:30.889946 kernel: Last level iTLB entries: 4KB 512, 2MB 255, 4MB 127
Feb 13 15:27:30.889953 kernel: Last level dTLB entries: 4KB 512, 2MB 255, 4MB 127, 1GB 0
Feb 13 15:27:30.889961 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Feb 13 15:27:30.889968 kernel: Spectre V2 : Mitigation: Retpolines
Feb 13 15:27:30.889976 kernel: Spectre V2 : Spectre v2 / SpectreRSB mitigation: Filling RSB on context switch
Feb 13 15:27:30.889983 kernel: Spectre V2 : Spectre v2 / SpectreRSB : Filling RSB on VMEXIT
Feb 13 15:27:30.889993 kernel: Spectre V2 : Enabling Speculation Barrier for firmware calls
Feb 13 15:27:30.890001 kernel: RETBleed: Mitigation: untrained return thunk
Feb 13 15:27:30.890009 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier
Feb 13 15:27:30.890016 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl
Feb 13 15:27:30.890024 kernel: Speculative Return Stack Overflow: IBPB-extending microcode not applied!
Feb 13 15:27:30.890032 kernel: Speculative Return Stack Overflow: WARNING: See https://kernel.org/doc/html/latest/admin-guide/hw-vuln/srso.html for mitigation options.
Feb 13 15:27:30.890039 kernel: Speculative Return Stack Overflow: Vulnerable: Safe RET, no microcode
Feb 13 15:27:30.890047 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Feb 13 15:27:30.890057 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Feb 13 15:27:30.890064 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Feb 13 15:27:30.890071 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256
Feb 13 15:27:30.890079 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'compacted' format.
Feb 13 15:27:30.890087 kernel: Freeing SMP alternatives memory: 32K
Feb 13 15:27:30.890094 kernel: pid_max: default: 32768 minimum: 301
Feb 13 15:27:30.890102 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity
Feb 13 15:27:30.890111 kernel: landlock: Up and running.
Feb 13 15:27:30.890120 kernel: SELinux: Initializing.
Feb 13 15:27:30.890132 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Feb 13 15:27:30.890142 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Feb 13 15:27:30.890151 kernel: smpboot: CPU0: AMD EPYC 7402P 24-Core Processor (family: 0x17, model: 0x31, stepping: 0x0)
Feb 13 15:27:30.890161 kernel: RCU Tasks: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Feb 13 15:27:30.890170 kernel: RCU Tasks Rude: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Feb 13 15:27:30.890180 kernel: RCU Tasks Trace: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Feb 13 15:27:30.890189 kernel: Performance Events: Fam17h+ core perfctr, AMD PMU driver.
Feb 13 15:27:30.890199 kernel: ... version: 0
Feb 13 15:27:30.890208 kernel: ... bit width: 48
Feb 13 15:27:30.890220 kernel: ... generic registers: 6
Feb 13 15:27:30.890241 kernel: ... value mask: 0000ffffffffffff
Feb 13 15:27:30.890273 kernel: ... max period: 00007fffffffffff
Feb 13 15:27:30.890283 kernel: ... fixed-purpose events: 0
Feb 13 15:27:30.890301 kernel: ... event mask: 000000000000003f
Feb 13 15:27:30.890320 kernel: signal: max sigframe size: 1776
Feb 13 15:27:30.890336 kernel: rcu: Hierarchical SRCU implementation.
Feb 13 15:27:30.890351 kernel: rcu: Max phase no-delay instances is 400.
Feb 13 15:27:30.890366 kernel: smp: Bringing up secondary CPUs ...
Feb 13 15:27:30.890384 kernel: smpboot: x86: Booting SMP configuration:
Feb 13 15:27:30.890406 kernel: .... node #0, CPUs: #1 #2 #3
Feb 13 15:27:30.890421 kernel: smp: Brought up 1 node, 4 CPUs
Feb 13 15:27:30.890436 kernel: smpboot: Max logical packages: 1
Feb 13 15:27:30.890451 kernel: smpboot: Total of 4 processors activated (22357.98 BogoMIPS)
Feb 13 15:27:30.890466 kernel: devtmpfs: initialized
Feb 13 15:27:30.890474 kernel: x86/mm: Memory block size: 128MB
Feb 13 15:27:30.890496 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x00800000-0x00807fff] (32768 bytes)
Feb 13 15:27:30.890503 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x0080b000-0x0080bfff] (4096 bytes)
Feb 13 15:27:30.890514 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x00811000-0x008fffff] (978944 bytes)
Feb 13 15:27:30.890522 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x9cb7f000-0x9cbfefff] (524288 bytes)
Feb 13 15:27:30.890529 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x9ce96000-0x9ce97fff] (8192 bytes)
Feb 13 15:27:30.890537 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x9cf60000-0x9cffffff] (655360 bytes)
Feb 13 15:27:30.890544 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Feb 13 15:27:30.890552 kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear)
Feb 13 15:27:30.890559 kernel: pinctrl core: initialized pinctrl subsystem
Feb 13 15:27:30.890567 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Feb 13 15:27:30.890574 kernel: audit: initializing netlink subsys (disabled)
Feb 13 15:27:30.890584 kernel: audit: type=2000 audit(1739460450.774:1): state=initialized audit_enabled=0 res=1
Feb 13 15:27:30.890592 kernel: thermal_sys: Registered thermal governor 'step_wise'
Feb 13 15:27:30.890599 kernel: thermal_sys: Registered thermal governor 'user_space'
Feb 13 15:27:30.890607 kernel: cpuidle: using governor menu
Feb 13 15:27:30.890614 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Feb 13 15:27:30.890621 kernel: dca service started, version 1.12.1
Feb 13 15:27:30.890629 kernel: PCI: MMCONFIG for domain 0000 [bus 00-ff] at [mem 0xe0000000-0xefffffff] (base 0xe0000000)
Feb 13 15:27:30.890636 kernel: PCI: Using configuration type 1 for base access
Feb 13 15:27:30.890644 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
Feb 13 15:27:30.890654 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Feb 13 15:27:30.890661 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page
Feb 13 15:27:30.890669 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Feb 13 15:27:30.890676 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page
Feb 13 15:27:30.890683 kernel: ACPI: Added _OSI(Module Device)
Feb 13 15:27:30.890691 kernel: ACPI: Added _OSI(Processor Device)
Feb 13 15:27:30.890698 kernel: ACPI: Added _OSI(3.0 _SCP Extensions)
Feb 13 15:27:30.890705 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Feb 13 15:27:30.890713 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Feb 13 15:27:30.890722 kernel: ACPI: _OSC evaluation for CPUs failed, trying _PDC
Feb 13 15:27:30.890730 kernel: ACPI: Interpreter enabled
Feb 13 15:27:30.890737 kernel: ACPI: PM: (supports S0 S3 S5)
Feb 13 15:27:30.890744 kernel: ACPI: Using IOAPIC for interrupt routing
Feb 13 15:27:30.890752 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Feb 13 15:27:30.890759 kernel: PCI: Using E820 reservations for host bridge windows
Feb 13 15:27:30.890767 kernel: ACPI: Enabled 2 GPEs in block 00 to 3F
Feb 13 15:27:30.890774 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Feb 13 15:27:30.890959 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
Feb 13 15:27:30.891106 kernel: acpi PNP0A08:00: _OSC: platform does not support [PCIeHotplug LTR]
Feb 13 15:27:30.891238 kernel: acpi PNP0A08:00: _OSC: OS now controls [PME AER PCIeCapability]
Feb 13 15:27:30.891260 kernel: PCI host bridge to bus 0000:00
Feb 13 15:27:30.891387 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window]
Feb 13 15:27:30.891497 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window]
Feb 13 15:27:30.891604 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
Feb 13 15:27:30.891763 kernel: pci_bus 0000:00: root bus resource [mem 0x9d000000-0xdfffffff window]
Feb 13 15:27:30.891928 kernel: pci_bus 0000:00: root bus resource [mem 0xf0000000-0xfebfffff window]
Feb 13 15:27:30.892061 kernel: pci_bus 0000:00: root bus resource [mem 0x380000000000-0x3807ffffffff window]
Feb 13 15:27:30.892188 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Feb 13 15:27:30.892352 kernel: pci 0000:00:00.0: [8086:29c0] type 00 class 0x060000
Feb 13 15:27:30.892484 kernel: pci 0000:00:01.0: [1234:1111] type 00 class 0x030000
Feb 13 15:27:30.892604 kernel: pci 0000:00:01.0: reg 0x10: [mem 0xc0000000-0xc0ffffff pref]
Feb 13 15:27:30.892727 kernel: pci 0000:00:01.0: reg 0x18: [mem 0xc1044000-0xc1044fff]
Feb 13 15:27:30.892844 kernel: pci 0000:00:01.0: reg 0x30: [mem 0xffff0000-0xffffffff pref]
Feb 13 15:27:30.892992 kernel: pci 0000:00:01.0: BAR 0: assigned to efifb
Feb 13 15:27:30.893110 kernel: pci 0000:00:01.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff]
Feb 13 15:27:30.893237 kernel: pci 0000:00:02.0: [1af4:1005] type 00 class 0x00ff00
Feb 13 15:27:30.893434 kernel: pci 0000:00:02.0: reg 0x10: [io 0x6100-0x611f]
Feb 13 15:27:30.893568 kernel: pci 0000:00:02.0: reg 0x14: [mem 0xc1043000-0xc1043fff]
Feb 13 15:27:30.893686 kernel: pci 0000:00:02.0: reg 0x20: [mem 0x380000000000-0x380000003fff 64bit pref]
Feb 13 15:27:30.893813 kernel: pci 0000:00:03.0: [1af4:1001] type 00 class 0x010000
Feb 13 15:27:30.893944 kernel: pci 0000:00:03.0: reg 0x10: [io 0x6000-0x607f]
Feb 13 15:27:30.894062 kernel: pci 0000:00:03.0: reg 0x14: [mem 0xc1042000-0xc1042fff]
Feb 13 15:27:30.894179 kernel: pci 0000:00:03.0: reg 0x20: [mem 0x380000004000-0x380000007fff 64bit pref]
Feb 13 15:27:30.894388 kernel: pci 0000:00:04.0: [1af4:1000] type 00 class 0x020000
Feb 13 15:27:30.894516 kernel: pci 0000:00:04.0: reg 0x10: [io 0x60e0-0x60ff]
Feb 13 15:27:30.894633 kernel: pci 0000:00:04.0: reg 0x14: [mem 0xc1041000-0xc1041fff]
Feb 13 15:27:30.894750 kernel: pci 0000:00:04.0: reg 0x20: [mem 0x380000008000-0x38000000bfff 64bit pref]
Feb 13 15:27:30.894876 kernel: pci 0000:00:04.0: reg 0x30: [mem 0xfffc0000-0xffffffff pref]
Feb 13 15:27:30.895016 kernel: pci 0000:00:1f.0: [8086:2918] type 00 class 0x060100
Feb 13 15:27:30.895148 kernel: pci 0000:00:1f.0: quirk: [io 0x0600-0x067f] claimed by ICH6 ACPI/GPIO/TCO
Feb 13 15:27:30.895292 kernel: pci 0000:00:1f.2: [8086:2922] type 00 class 0x010601
Feb 13 15:27:30.895417 kernel: pci 0000:00:1f.2: reg 0x20: [io 0x60c0-0x60df]
Feb 13 15:27:30.895536 kernel: pci 0000:00:1f.2: reg 0x24: [mem 0xc1040000-0xc1040fff]
Feb 13 15:27:30.895661 kernel: pci 0000:00:1f.3: [8086:2930] type 00 class 0x0c0500
Feb 13 15:27:30.895779 kernel: pci 0000:00:1f.3: reg 0x20: [io 0x6080-0x60bf]
Feb 13 15:27:30.895789 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10
Feb 13 15:27:30.895797 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10
Feb 13 15:27:30.895804 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11
Feb 13 15:27:30.895815 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11
Feb 13 15:27:30.895823 kernel: ACPI: PCI: Interrupt link LNKE configured for IRQ 10
Feb 13 15:27:30.895830 kernel: ACPI: PCI: Interrupt link LNKF configured for IRQ 10
Feb 13 15:27:30.895838 kernel: ACPI: PCI: Interrupt link LNKG configured for IRQ 11
Feb 13 15:27:30.895845 kernel: ACPI: PCI: Interrupt link LNKH configured for IRQ 11
Feb 13 15:27:30.895852 kernel: ACPI: PCI: Interrupt link GSIA configured for IRQ 16
Feb 13 15:27:30.895860 kernel: ACPI: PCI: Interrupt link GSIB configured for IRQ 17
Feb 13 15:27:30.895876 kernel: ACPI: PCI: Interrupt link GSIC configured for IRQ 18
Feb 13 15:27:30.895883 kernel: ACPI: PCI: Interrupt link GSID configured for IRQ 19
Feb 13 15:27:30.895893 kernel: ACPI: PCI: Interrupt link GSIE configured for IRQ 20
Feb 13 15:27:30.895900 kernel: ACPI: PCI: Interrupt link GSIF configured for IRQ 21
Feb 13 15:27:30.895909 kernel: ACPI: PCI: Interrupt link GSIG configured for IRQ 22
Feb 13 15:27:30.895916 kernel: ACPI: PCI: Interrupt link GSIH configured for IRQ 23
Feb 13 15:27:30.895923 kernel: iommu: Default domain type: Translated
Feb 13 15:27:30.895931 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
Feb 13 15:27:30.895938 kernel: efivars: Registered efivars operations
Feb 13 15:27:30.895946 kernel: PCI: Using ACPI for IRQ routing
Feb 13 15:27:30.895953 kernel: PCI: pci_cache_line_size set to 64 bytes
Feb 13 15:27:30.895963 kernel: e820: reserve RAM buffer [mem 0x0080b000-0x008fffff]
Feb 13 15:27:30.895970 kernel: e820: reserve RAM buffer [mem 0x00811000-0x008fffff]
Feb 13 15:27:30.895977 kernel: e820: reserve RAM buffer [mem 0x9b314018-0x9bffffff]
Feb 13 15:27:30.895984 kernel: e820: reserve RAM buffer [mem 0x9b351018-0x9bffffff]
Feb 13 15:27:30.895992 kernel: e820: reserve RAM buffer [mem 0x9bd3f000-0x9bffffff]
Feb 13 15:27:30.895999 kernel: e820: reserve RAM buffer [mem 0x9c8ed000-0x9fffffff]
Feb 13 15:27:30.896007 kernel: e820: reserve RAM buffer [mem 0x9ce92000-0x9fffffff]
Feb 13 15:27:30.896014 kernel: e820: reserve RAM buffer [mem 0x9cedc000-0x9fffffff]
Feb 13 15:27:30.896135 kernel: pci 0000:00:01.0: vgaarb: setting as boot VGA device
Feb 13 15:27:30.896272 kernel: pci 0000:00:01.0: vgaarb: bridge control possible
Feb 13 15:27:30.896394 kernel: pci 0000:00:01.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none
Feb 13 15:27:30.896404 kernel: vgaarb: loaded
Feb 13 15:27:30.896411 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0
Feb 13 15:27:30.896419 kernel: hpet0: 3 comparators, 64-bit 100.000000 MHz counter
Feb 13 15:27:30.896427 kernel: clocksource: Switched to clocksource kvm-clock
Feb 13 15:27:30.896434 kernel: VFS: Disk quotas dquot_6.6.0
Feb 13 15:27:30.896442 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Feb 13 15:27:30.896453 kernel: pnp: PnP ACPI init
Feb 13 15:27:30.896585 kernel: system 00:05: [mem 0xe0000000-0xefffffff window] has been reserved
Feb 13 15:27:30.896596 kernel: pnp: PnP ACPI: found 6 devices
Feb 13 15:27:30.896603 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Feb 13 15:27:30.896611 kernel: NET: Registered PF_INET protocol family
Feb 13 15:27:30.896636 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
Feb 13 15:27:30.896646 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
Feb 13 15:27:30.896654 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Feb 13 15:27:30.896664 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
Feb 13 15:27:30.896672 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear)
Feb 13 15:27:30.896680 kernel: TCP: Hash tables configured (established 32768 bind 32768)
Feb 13 15:27:30.896687 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
Feb 13 15:27:30.896695 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
Feb 13 15:27:30.896703 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Feb 13 15:27:30.896711 kernel: NET: Registered PF_XDP protocol family
Feb 13 15:27:30.896833 kernel: pci 0000:00:04.0: can't claim BAR 6 [mem 0xfffc0000-0xffffffff pref]: no compatible bridge window
Feb 13 15:27:30.896964 kernel: pci 0000:00:04.0: BAR 6: assigned [mem 0x9d000000-0x9d03ffff pref]
Feb 13 15:27:30.897080 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window]
Feb 13 15:27:30.897189 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window]
Feb 13 15:27:30.897318 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window]
Feb 13 15:27:30.897429 kernel: pci_bus 0000:00: resource 7 [mem 0x9d000000-0xdfffffff window]
Feb 13 15:27:30.897537 kernel: pci_bus 0000:00: resource 8 [mem 0xf0000000-0xfebfffff window]
Feb 13 15:27:30.897646 kernel: pci_bus 0000:00: resource 9 [mem 0x380000000000-0x3807ffffffff window]
Feb 13 15:27:30.897656 kernel: PCI: CLS 0 bytes, default 64
Feb 13 15:27:30.897664 kernel: Initialise system trusted keyrings
Feb 13 15:27:30.897675 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
Feb 13 15:27:30.897683 kernel: Key type asymmetric registered
Feb 13 15:27:30.897691 kernel: Asymmetric key parser 'x509' registered
Feb 13 15:27:30.897698 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 251)
Feb 13 15:27:30.897706 kernel: io scheduler mq-deadline registered
Feb 13 15:27:30.897714 kernel: io scheduler kyber registered
Feb 13 15:27:30.897722 kernel: io scheduler bfq registered
Feb 13 15:27:30.897730 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00
Feb 13 15:27:30.897738 kernel: ACPI: \_SB_.GSIG: Enabled at IRQ 22
Feb 13 15:27:30.897748 kernel: ACPI: \_SB_.GSIH: Enabled at IRQ 23
Feb 13 15:27:30.897758 kernel: ACPI: \_SB_.GSIE: Enabled at IRQ 20
Feb 13 15:27:30.897766 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Feb 13 15:27:30.897774 kernel: 00:03: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
Feb 13 15:27:30.897782 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12
Feb 13 15:27:30.897790 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1
Feb 13 15:27:30.897800 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12
Feb 13 15:27:30.897933 kernel: rtc_cmos 00:04: RTC can wake from S4
Feb 13 15:27:30.897945 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0
Feb 13 15:27:30.898055 kernel: rtc_cmos 00:04: registered as rtc0
Feb 13 15:27:30.898166 kernel: rtc_cmos 00:04: setting system clock to 2025-02-13T15:27:30 UTC (1739460450)
Feb 13 15:27:30.898292 kernel: rtc_cmos 00:04: alarms up to one day, y3k, 242 bytes nvram
Feb 13 15:27:30.898303 kernel: amd_pstate: the _CPC object is not present in SBIOS or ACPI disabled
Feb 13 15:27:30.898315 kernel: efifb: probing for efifb
Feb 13 15:27:30.898324 kernel: efifb: framebuffer at 0xc0000000, using 4000k, total 4000k
Feb 13 15:27:30.898332 kernel: efifb: mode is 1280x800x32, linelength=5120, pages=1
Feb 13 15:27:30.898339 kernel: efifb: scrolling: redraw
Feb 13 15:27:30.898347 kernel: efifb: Truecolor: size=8:8:8:8, shift=24:16:8:0
Feb 13 15:27:30.898358 kernel: Console: switching to colour frame buffer device 160x50
Feb 13 15:27:30.898365 kernel: fb0: EFI VGA frame buffer device
Feb 13 15:27:30.898373 kernel: pstore: Using crash dump compression: deflate
Feb 13 15:27:30.898381 kernel: pstore: Registered efi_pstore as persistent store backend
Feb 13 15:27:30.898389 kernel: NET: Registered PF_INET6 protocol family
Feb 13 15:27:30.898399 kernel: Segment Routing with IPv6
Feb 13 15:27:30.898407 kernel: In-situ OAM (IOAM) with IPv6
Feb 13 15:27:30.898415 kernel: NET: Registered PF_PACKET protocol family
Feb 13 15:27:30.898422 kernel: Key type dns_resolver registered
Feb 13 15:27:30.898430 kernel: IPI shorthand broadcast: enabled
Feb 13 15:27:30.898438 kernel: sched_clock: Marking stable (585002745, 153944363)->(784088035, -45140927)
Feb 13 15:27:30.898446 kernel: registered taskstats version 1
Feb 13 15:27:30.898454 kernel: Loading compiled-in X.509 certificates
Feb 13 15:27:30.898462 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.71-flatcar: 9ec780e1db69d46be90bbba73ae62b0106e27ae0'
Feb 13 15:27:30.898472 kernel: Key type .fscrypt registered
Feb 13 15:27:30.898480 kernel: Key type fscrypt-provisioning registered
Feb 13 15:27:30.898488 kernel: ima: No TPM chip found, activating TPM-bypass!
Feb 13 15:27:30.898495 kernel: ima: Allocated hash algorithm: sha1 Feb 13 15:27:30.898503 kernel: ima: No architecture policies found Feb 13 15:27:30.898511 kernel: clk: Disabling unused clocks Feb 13 15:27:30.898519 kernel: Freeing unused kernel image (initmem) memory: 42976K Feb 13 15:27:30.898527 kernel: Write protecting the kernel read-only data: 36864k Feb 13 15:27:30.898537 kernel: Freeing unused kernel image (rodata/data gap) memory: 1840K Feb 13 15:27:30.898545 kernel: Run /init as init process Feb 13 15:27:30.898553 kernel: with arguments: Feb 13 15:27:30.898561 kernel: /init Feb 13 15:27:30.898568 kernel: with environment: Feb 13 15:27:30.898576 kernel: HOME=/ Feb 13 15:27:30.898584 kernel: TERM=linux Feb 13 15:27:30.898591 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a Feb 13 15:27:30.898601 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Feb 13 15:27:30.898613 systemd[1]: Detected virtualization kvm. Feb 13 15:27:30.898622 systemd[1]: Detected architecture x86-64. Feb 13 15:27:30.898630 systemd[1]: Running in initrd. Feb 13 15:27:30.898638 systemd[1]: No hostname configured, using default hostname. Feb 13 15:27:30.898646 systemd[1]: Hostname set to . Feb 13 15:27:30.898655 systemd[1]: Initializing machine ID from VM UUID. Feb 13 15:27:30.898663 systemd[1]: Queued start job for default target initrd.target. Feb 13 15:27:30.898672 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Feb 13 15:27:30.898682 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. 
Feb 13 15:27:30.898691 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM... Feb 13 15:27:30.898700 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Feb 13 15:27:30.898708 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT... Feb 13 15:27:30.898717 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A... Feb 13 15:27:30.898727 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132... Feb 13 15:27:30.898737 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr... Feb 13 15:27:30.898746 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Feb 13 15:27:30.898754 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Feb 13 15:27:30.898763 systemd[1]: Reached target paths.target - Path Units. Feb 13 15:27:30.898771 systemd[1]: Reached target slices.target - Slice Units. Feb 13 15:27:30.898779 systemd[1]: Reached target swap.target - Swaps. Feb 13 15:27:30.898787 systemd[1]: Reached target timers.target - Timer Units. Feb 13 15:27:30.898796 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket. Feb 13 15:27:30.898804 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket. Feb 13 15:27:30.898815 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). Feb 13 15:27:30.898823 systemd[1]: Listening on systemd-journald.socket - Journal Socket. Feb 13 15:27:30.898831 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Feb 13 15:27:30.898840 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Feb 13 15:27:30.898848 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. 
Feb 13 15:27:30.898857 systemd[1]: Reached target sockets.target - Socket Units. Feb 13 15:27:30.898872 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup... Feb 13 15:27:30.898880 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Feb 13 15:27:30.898890 systemd[1]: Finished network-cleanup.service - Network Cleanup. Feb 13 15:27:30.898899 systemd[1]: Starting systemd-fsck-usr.service... Feb 13 15:27:30.898908 systemd[1]: Starting systemd-journald.service - Journal Service... Feb 13 15:27:30.898916 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Feb 13 15:27:30.898924 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Feb 13 15:27:30.898933 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup. Feb 13 15:27:30.898941 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Feb 13 15:27:30.898949 systemd[1]: Finished systemd-fsck-usr.service. Feb 13 15:27:30.898976 systemd-journald[193]: Collecting audit messages is disabled. Feb 13 15:27:30.898998 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Feb 13 15:27:30.899006 systemd-journald[193]: Journal started Feb 13 15:27:30.899024 systemd-journald[193]: Runtime Journal (/run/log/journal/9d578316547f4125a57f3bedfa8c2a86) is 6.0M, max 48.3M, 42.2M free. Feb 13 15:27:30.892160 systemd-modules-load[194]: Inserted module 'overlay' Feb 13 15:27:30.902646 systemd[1]: Started systemd-journald.service - Journal Service. Feb 13 15:27:30.903234 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Feb 13 15:27:30.905893 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Feb 13 15:27:30.912393 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... 
Feb 13 15:27:30.915786 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Feb 13 15:27:30.922150 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Feb 13 15:27:30.925003 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. Feb 13 15:27:30.929274 kernel: Bridge firewalling registered Feb 13 15:27:30.928992 systemd-modules-load[194]: Inserted module 'br_netfilter' Feb 13 15:27:30.930611 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Feb 13 15:27:30.932480 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Feb 13 15:27:30.933771 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Feb 13 15:27:30.936389 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... Feb 13 15:27:30.940889 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Feb 13 15:27:30.944228 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Feb 13 15:27:30.952292 dracut-cmdline[225]: dracut-dracut-053 Feb 13 15:27:30.954454 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Feb 13 15:27:30.957964 dracut-cmdline[225]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=cd73eba291b8356dfc2c39f651cabef9206685f772c8949188fd366788d672c2 Feb 13 15:27:30.963392 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Feb 13 15:27:30.992468 systemd-resolved[240]: Positive Trust Anchors: Feb 13 15:27:30.992483 systemd-resolved[240]: . 
IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Feb 13 15:27:30.992514 systemd-resolved[240]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Feb 13 15:27:30.994977 systemd-resolved[240]: Defaulting to hostname 'linux'. Feb 13 15:27:30.995986 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Feb 13 15:27:31.002591 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Feb 13 15:27:31.059280 kernel: SCSI subsystem initialized Feb 13 15:27:31.068271 kernel: Loading iSCSI transport class v2.0-870. Feb 13 15:27:31.079274 kernel: iscsi: registered transport (tcp) Feb 13 15:27:31.101285 kernel: iscsi: registered transport (qla4xxx) Feb 13 15:27:31.101358 kernel: QLogic iSCSI HBA Driver Feb 13 15:27:31.152928 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. Feb 13 15:27:31.159418 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... Feb 13 15:27:31.183482 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. 
Feb 13 15:27:31.183510 kernel: device-mapper: uevent: version 1.0.3 Feb 13 15:27:31.184615 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com Feb 13 15:27:31.226287 kernel: raid6: avx2x4 gen() 30508 MB/s Feb 13 15:27:31.243279 kernel: raid6: avx2x2 gen() 31138 MB/s Feb 13 15:27:31.260338 kernel: raid6: avx2x1 gen() 25889 MB/s Feb 13 15:27:31.260366 kernel: raid6: using algorithm avx2x2 gen() 31138 MB/s Feb 13 15:27:31.278355 kernel: raid6: .... xor() 19835 MB/s, rmw enabled Feb 13 15:27:31.278397 kernel: raid6: using avx2x2 recovery algorithm Feb 13 15:27:31.298279 kernel: xor: automatically using best checksumming function avx Feb 13 15:27:31.454287 kernel: Btrfs loaded, zoned=no, fsverity=no Feb 13 15:27:31.467894 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. Feb 13 15:27:31.475428 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Feb 13 15:27:31.488999 systemd-udevd[414]: Using default interface naming scheme 'v255'. Feb 13 15:27:31.493638 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Feb 13 15:27:31.501419 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... Feb 13 15:27:31.515381 dracut-pre-trigger[418]: rd.md=0: removing MD RAID activation Feb 13 15:27:31.548358 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. Feb 13 15:27:31.559407 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Feb 13 15:27:31.623968 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Feb 13 15:27:31.631388 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... Feb 13 15:27:31.646693 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. Feb 13 15:27:31.649681 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. 
Feb 13 15:27:31.652281 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Feb 13 15:27:31.654638 systemd[1]: Reached target remote-fs.target - Remote File Systems. Feb 13 15:27:31.662356 kernel: virtio_blk virtio1: 4/0/0 default/read/poll queues Feb 13 15:27:31.686936 kernel: cryptd: max_cpu_qlen set to 1000 Feb 13 15:27:31.686955 kernel: virtio_blk virtio1: [vda] 19775488 512-byte logical blocks (10.1 GB/9.43 GiB) Feb 13 15:27:31.687104 kernel: AVX2 version of gcm_enc/dec engaged. Feb 13 15:27:31.687116 kernel: AES CTR mode by8 optimization enabled Feb 13 15:27:31.687127 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. Feb 13 15:27:31.687138 kernel: libata version 3.00 loaded. Feb 13 15:27:31.687156 kernel: GPT:9289727 != 19775487 Feb 13 15:27:31.687166 kernel: GPT:Alternate GPT header not at the end of the disk. Feb 13 15:27:31.687179 kernel: GPT:9289727 != 19775487 Feb 13 15:27:31.687190 kernel: GPT: Use GNU Parted to correct GPT errors. Feb 13 15:27:31.687200 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Feb 13 15:27:31.662551 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... Feb 13 15:27:31.678568 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. 
Feb 13 15:27:31.708280 kernel: ahci 0000:00:1f.2: version 3.0 Feb 13 15:27:31.733521 kernel: ACPI: \_SB_.GSIA: Enabled at IRQ 16 Feb 13 15:27:31.733541 kernel: ahci 0000:00:1f.2: AHCI 0001.0000 32 slots 6 ports 1.5 Gbps 0x3f impl SATA mode Feb 13 15:27:31.733692 kernel: ahci 0000:00:1f.2: flags: 64bit ncq only Feb 13 15:27:31.733851 kernel: BTRFS: device fsid 966d6124-9067-4089-b000-5e99065fe7e2 devid 1 transid 37 /dev/vda3 scanned by (udev-worker) (474) Feb 13 15:27:31.733870 kernel: scsi host0: ahci Feb 13 15:27:31.734024 kernel: scsi host1: ahci Feb 13 15:27:31.734174 kernel: scsi host2: ahci Feb 13 15:27:31.734342 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/vda6 scanned by (udev-worker) (467) Feb 13 15:27:31.734353 kernel: scsi host3: ahci Feb 13 15:27:31.734492 kernel: scsi host4: ahci Feb 13 15:27:31.734645 kernel: scsi host5: ahci Feb 13 15:27:31.734793 kernel: ata1: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040100 irq 34 Feb 13 15:27:31.734804 kernel: ata2: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040180 irq 34 Feb 13 15:27:31.734814 kernel: ata3: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040200 irq 34 Feb 13 15:27:31.734824 kernel: ata4: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040280 irq 34 Feb 13 15:27:31.734834 kernel: ata5: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040300 irq 34 Feb 13 15:27:31.734852 kernel: ata6: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040380 irq 34 Feb 13 15:27:31.726507 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM. Feb 13 15:27:31.742260 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT. Feb 13 15:27:31.748329 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A. 
Feb 13 15:27:31.750890 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132. Feb 13 15:27:31.758659 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. Feb 13 15:27:31.770368 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... Feb 13 15:27:31.771513 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Feb 13 15:27:31.772745 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Feb 13 15:27:31.775222 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Feb 13 15:27:31.779274 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Feb 13 15:27:31.779339 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Feb 13 15:27:31.783526 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Feb 13 15:27:31.783549 disk-uuid[563]: Primary Header is updated. Feb 13 15:27:31.783549 disk-uuid[563]: Secondary Entries is updated. Feb 13 15:27:31.783549 disk-uuid[563]: Secondary Header is updated. Feb 13 15:27:31.783527 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... Feb 13 15:27:31.789267 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Feb 13 15:27:31.800627 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Feb 13 15:27:31.820189 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Feb 13 15:27:31.846491 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Feb 13 15:27:31.866049 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. 
Feb 13 15:27:32.042360 kernel: ata6: SATA link down (SStatus 0 SControl 300) Feb 13 15:27:32.042435 kernel: ata2: SATA link down (SStatus 0 SControl 300) Feb 13 15:27:32.042446 kernel: ata3: SATA link up 1.5 Gbps (SStatus 113 SControl 300) Feb 13 15:27:32.043923 kernel: ata1: SATA link down (SStatus 0 SControl 300) Feb 13 15:27:32.044020 kernel: ata5: SATA link down (SStatus 0 SControl 300) Feb 13 15:27:32.045278 kernel: ata4: SATA link down (SStatus 0 SControl 300) Feb 13 15:27:32.046282 kernel: ata3.00: ATAPI: QEMU DVD-ROM, 2.5+, max UDMA/100 Feb 13 15:27:32.046294 kernel: ata3.00: applying bridge limits Feb 13 15:27:32.047270 kernel: ata3.00: configured for UDMA/100 Feb 13 15:27:32.049269 kernel: scsi 2:0:0:0: CD-ROM QEMU QEMU DVD-ROM 2.5+ PQ: 0 ANSI: 5 Feb 13 15:27:32.086275 kernel: sr 2:0:0:0: [sr0] scsi3-mmc drive: 4x/4x cd/rw xa/form2 tray Feb 13 15:27:32.098899 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20 Feb 13 15:27:32.098919 kernel: sr 2:0:0:0: Attached scsi CD-ROM sr0 Feb 13 15:27:32.791286 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Feb 13 15:27:32.791916 disk-uuid[564]: The operation has completed successfully. Feb 13 15:27:32.818336 systemd[1]: disk-uuid.service: Deactivated successfully. Feb 13 15:27:32.818454 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. Feb 13 15:27:32.842470 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr... Feb 13 15:27:32.845820 sh[593]: Success Feb 13 15:27:32.858284 kernel: device-mapper: verity: sha256 using implementation "sha256-ni" Feb 13 15:27:32.890707 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr. Feb 13 15:27:32.903679 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr... Feb 13 15:27:32.908228 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr. 
Feb 13 15:27:32.917873 kernel: BTRFS info (device dm-0): first mount of filesystem 966d6124-9067-4089-b000-5e99065fe7e2 Feb 13 15:27:32.917904 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm Feb 13 15:27:32.917916 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead Feb 13 15:27:32.918970 kernel: BTRFS info (device dm-0): disabling log replay at mount time Feb 13 15:27:32.919715 kernel: BTRFS info (device dm-0): using free space tree Feb 13 15:27:32.925148 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr. Feb 13 15:27:32.927383 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. Feb 13 15:27:32.943363 systemd[1]: Starting ignition-setup.service - Ignition (setup)... Feb 13 15:27:32.945897 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... Feb 13 15:27:32.955848 kernel: BTRFS info (device vda6): first mount of filesystem 83f602a1-06be-4b8b-b461-5e4f70db8da1 Feb 13 15:27:32.955881 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Feb 13 15:27:32.955892 kernel: BTRFS info (device vda6): using free space tree Feb 13 15:27:32.959268 kernel: BTRFS info (device vda6): auto enabling async discard Feb 13 15:27:32.969578 systemd[1]: mnt-oem.mount: Deactivated successfully. Feb 13 15:27:32.971513 kernel: BTRFS info (device vda6): last unmount of filesystem 83f602a1-06be-4b8b-b461-5e4f70db8da1 Feb 13 15:27:32.981312 systemd[1]: Finished ignition-setup.service - Ignition (setup). Feb 13 15:27:32.990504 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... 
Feb 13 15:27:33.046363 ignition[692]: Ignition 2.20.0 Feb 13 15:27:33.046376 ignition[692]: Stage: fetch-offline Feb 13 15:27:33.046420 ignition[692]: no configs at "/usr/lib/ignition/base.d" Feb 13 15:27:33.046430 ignition[692]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Feb 13 15:27:33.046520 ignition[692]: parsed url from cmdline: "" Feb 13 15:27:33.046524 ignition[692]: no config URL provided Feb 13 15:27:33.046529 ignition[692]: reading system config file "/usr/lib/ignition/user.ign" Feb 13 15:27:33.046537 ignition[692]: no config at "/usr/lib/ignition/user.ign" Feb 13 15:27:33.046566 ignition[692]: op(1): [started] loading QEMU firmware config module Feb 13 15:27:33.046571 ignition[692]: op(1): executing: "modprobe" "qemu_fw_cfg" Feb 13 15:27:33.057314 ignition[692]: op(1): [finished] loading QEMU firmware config module Feb 13 15:27:33.057343 ignition[692]: QEMU firmware config was not found. Ignoring... Feb 13 15:27:33.069158 ignition[692]: parsing config with SHA512: e8e398fdbe45ef9b24ba10428be17883ae249071c9cf99bbbfd3a01a85a8c82fd589bec4597accf2e778a87a66e2714c9f9657578b789fc5ae807205c40ed566 Feb 13 15:27:33.071823 unknown[692]: fetched base config from "system" Feb 13 15:27:33.072060 ignition[692]: fetch-offline: fetch-offline passed Feb 13 15:27:33.071837 unknown[692]: fetched user config from "qemu" Feb 13 15:27:33.072127 ignition[692]: Ignition finished successfully Feb 13 15:27:33.074444 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). Feb 13 15:27:33.089257 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Feb 13 15:27:33.098520 systemd[1]: Starting systemd-networkd.service - Network Configuration... 
Feb 13 15:27:33.120756 systemd-networkd[785]: lo: Link UP Feb 13 15:27:33.120768 systemd-networkd[785]: lo: Gained carrier Feb 13 15:27:33.122444 systemd-networkd[785]: Enumeration completed Feb 13 15:27:33.122547 systemd[1]: Started systemd-networkd.service - Network Configuration. Feb 13 15:27:33.122829 systemd-networkd[785]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Feb 13 15:27:33.122834 systemd-networkd[785]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Feb 13 15:27:33.123086 systemd[1]: Reached target network.target - Network. Feb 13 15:27:33.123646 systemd-networkd[785]: eth0: Link UP Feb 13 15:27:33.123650 systemd-networkd[785]: eth0: Gained carrier Feb 13 15:27:33.123656 systemd-networkd[785]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Feb 13 15:27:33.124061 systemd[1]: ignition-fetch.service - Ignition (fetch) was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json). Feb 13 15:27:33.130532 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... Feb 13 15:27:33.144772 ignition[787]: Ignition 2.20.0 Feb 13 15:27:33.144786 ignition[787]: Stage: kargs Feb 13 15:27:33.145021 ignition[787]: no configs at "/usr/lib/ignition/base.d" Feb 13 15:27:33.145035 ignition[787]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Feb 13 15:27:33.147338 systemd-networkd[785]: eth0: DHCPv4 address 10.0.0.78/16, gateway 10.0.0.1 acquired from 10.0.0.1 Feb 13 15:27:33.145843 ignition[787]: kargs: kargs passed Feb 13 15:27:33.145899 ignition[787]: Ignition finished successfully Feb 13 15:27:33.154981 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). Feb 13 15:27:33.171582 systemd[1]: Starting ignition-disks.service - Ignition (disks)... 
Feb 13 15:27:33.184094 ignition[796]: Ignition 2.20.0 Feb 13 15:27:33.184107 ignition[796]: Stage: disks Feb 13 15:27:33.184319 ignition[796]: no configs at "/usr/lib/ignition/base.d" Feb 13 15:27:33.184331 ignition[796]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Feb 13 15:27:33.184978 ignition[796]: disks: disks passed Feb 13 15:27:33.185022 ignition[796]: Ignition finished successfully Feb 13 15:27:33.191966 systemd[1]: Finished ignition-disks.service - Ignition (disks). Feb 13 15:27:33.192936 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. Feb 13 15:27:33.194678 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Feb 13 15:27:33.197116 systemd[1]: Reached target local-fs.target - Local File Systems. Feb 13 15:27:33.199820 systemd[1]: Reached target sysinit.target - System Initialization. Feb 13 15:27:33.201996 systemd[1]: Reached target basic.target - Basic System. Feb 13 15:27:33.217526 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... Feb 13 15:27:33.233926 systemd-fsck[807]: ROOT: clean, 14/553520 files, 52654/553472 blocks Feb 13 15:27:33.241040 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. Feb 13 15:27:33.259455 systemd[1]: Mounting sysroot.mount - /sysroot... Feb 13 15:27:33.348281 kernel: EXT4-fs (vda9): mounted filesystem 85ed0b0d-7f0f-4eeb-80d8-6213e9fcc55d r/w with ordered data mode. Quota mode: none. Feb 13 15:27:33.348841 systemd[1]: Mounted sysroot.mount - /sysroot. Feb 13 15:27:33.351030 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. Feb 13 15:27:33.364381 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Feb 13 15:27:33.367209 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... Feb 13 15:27:33.370399 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met. 
Feb 13 15:27:33.373607 kernel: BTRFS: device label OEM devid 1 transid 15 /dev/vda6 scanned by mount (815) Feb 13 15:27:33.370459 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Feb 13 15:27:33.381363 kernel: BTRFS info (device vda6): first mount of filesystem 83f602a1-06be-4b8b-b461-5e4f70db8da1 Feb 13 15:27:33.381396 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Feb 13 15:27:33.381412 kernel: BTRFS info (device vda6): using free space tree Feb 13 15:27:33.381426 kernel: BTRFS info (device vda6): auto enabling async discard Feb 13 15:27:33.370491 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. Feb 13 15:27:33.383073 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. Feb 13 15:27:33.386097 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Feb 13 15:27:33.389787 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... Feb 13 15:27:33.428337 initrd-setup-root[839]: cut: /sysroot/etc/passwd: No such file or directory Feb 13 15:27:33.432112 initrd-setup-root[846]: cut: /sysroot/etc/group: No such file or directory Feb 13 15:27:33.435936 initrd-setup-root[853]: cut: /sysroot/etc/shadow: No such file or directory Feb 13 15:27:33.439365 initrd-setup-root[860]: cut: /sysroot/etc/gshadow: No such file or directory Feb 13 15:27:33.525867 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. Feb 13 15:27:33.537349 systemd[1]: Starting ignition-mount.service - Ignition (mount)... Feb 13 15:27:33.538508 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... Feb 13 15:27:33.549276 kernel: BTRFS info (device vda6): last unmount of filesystem 83f602a1-06be-4b8b-b461-5e4f70db8da1 Feb 13 15:27:33.563470 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. 
Feb 13 15:27:33.577270 ignition[929]: INFO : Ignition 2.20.0 Feb 13 15:27:33.577270 ignition[929]: INFO : Stage: mount Feb 13 15:27:33.579082 ignition[929]: INFO : no configs at "/usr/lib/ignition/base.d" Feb 13 15:27:33.579082 ignition[929]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Feb 13 15:27:33.582474 ignition[929]: INFO : mount: mount passed Feb 13 15:27:33.583384 ignition[929]: INFO : Ignition finished successfully Feb 13 15:27:33.586617 systemd[1]: Finished ignition-mount.service - Ignition (mount). Feb 13 15:27:33.600361 systemd[1]: Starting ignition-files.service - Ignition (files)... Feb 13 15:27:33.917324 systemd[1]: sysroot-oem.mount: Deactivated successfully. Feb 13 15:27:33.934517 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Feb 13 15:27:33.941285 kernel: BTRFS: device label OEM devid 1 transid 16 /dev/vda6 scanned by mount (942) Feb 13 15:27:33.943389 kernel: BTRFS info (device vda6): first mount of filesystem 83f602a1-06be-4b8b-b461-5e4f70db8da1 Feb 13 15:27:33.943419 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Feb 13 15:27:33.943433 kernel: BTRFS info (device vda6): using free space tree Feb 13 15:27:33.947281 kernel: BTRFS info (device vda6): auto enabling async discard Feb 13 15:27:33.948562 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. 
Feb 13 15:27:33.983650 ignition[959]: INFO : Ignition 2.20.0
Feb 13 15:27:33.983650 ignition[959]: INFO : Stage: files
Feb 13 15:27:33.985560 ignition[959]: INFO : no configs at "/usr/lib/ignition/base.d"
Feb 13 15:27:33.985560 ignition[959]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Feb 13 15:27:33.988191 ignition[959]: DEBUG : files: compiled without relabeling support, skipping
Feb 13 15:27:33.989568 ignition[959]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Feb 13 15:27:33.989568 ignition[959]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Feb 13 15:27:33.993019 ignition[959]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Feb 13 15:27:33.994721 ignition[959]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Feb 13 15:27:33.994721 ignition[959]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Feb 13 15:27:33.993679 unknown[959]: wrote ssh authorized keys file for user: core
Feb 13 15:27:33.999734 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/home/core/install.sh"
Feb 13 15:27:33.999734 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/home/core/install.sh"
Feb 13 15:27:33.999734 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/etc/flatcar/update.conf"
Feb 13 15:27:33.999734 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Feb 13 15:27:33.999734 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.29.2-x86-64.raw"
Feb 13 15:27:33.999734 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.29.2-x86-64.raw"
Feb 13 15:27:33.999734 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.29.2-x86-64.raw"
Feb 13 15:27:33.999734 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(6): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.29.2-x86-64.raw: attempt #1
Feb 13 15:27:34.339883 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(6): GET result: OK
Feb 13 15:27:34.355446 systemd-networkd[785]: eth0: Gained IPv6LL
Feb 13 15:27:34.614374 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.29.2-x86-64.raw"
Feb 13 15:27:34.614374 ignition[959]: INFO : files: op(7): [started] processing unit "coreos-metadata.service"
Feb 13 15:27:34.618194 ignition[959]: INFO : files: op(7): op(8): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
Feb 13 15:27:34.618194 ignition[959]: INFO : files: op(7): op(8): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
Feb 13 15:27:34.618194 ignition[959]: INFO : files: op(7): [finished] processing unit "coreos-metadata.service"
Feb 13 15:27:34.618194 ignition[959]: INFO : files: op(9): [started] setting preset to disabled for "coreos-metadata.service"
Feb 13 15:27:34.637165 ignition[959]: INFO : files: op(9): op(a): [started] removing enablement symlink(s) for "coreos-metadata.service"
Feb 13 15:27:34.642215 ignition[959]: INFO : files: op(9): op(a): [finished] removing enablement symlink(s) for "coreos-metadata.service"
Feb 13 15:27:34.644301 ignition[959]: INFO : files: op(9): [finished] setting preset to disabled for "coreos-metadata.service"
Feb 13 15:27:34.644301 ignition[959]: INFO : files: createResultFile: createFiles: op(b): [started] writing file "/sysroot/etc/.ignition-result.json"
Feb 13 15:27:34.644301 ignition[959]: INFO : files: createResultFile: createFiles: op(b): [finished] writing file "/sysroot/etc/.ignition-result.json"
Feb 13 15:27:34.644301 ignition[959]: INFO : files: files passed
Feb 13 15:27:34.644301 ignition[959]: INFO : Ignition finished successfully
Feb 13 15:27:34.645380 systemd[1]: Finished ignition-files.service - Ignition (files).
Feb 13 15:27:34.654512 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
Feb 13 15:27:34.657394 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
Feb 13 15:27:34.659803 systemd[1]: ignition-quench.service: Deactivated successfully.
Feb 13 15:27:34.659958 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
Feb 13 15:27:34.666719 initrd-setup-root-after-ignition[988]: grep: /sysroot/oem/oem-release: No such file or directory
Feb 13 15:27:34.669888 initrd-setup-root-after-ignition[990]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Feb 13 15:27:34.669888 initrd-setup-root-after-ignition[990]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
Feb 13 15:27:34.675012 initrd-setup-root-after-ignition[994]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Feb 13 15:27:34.673295 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
Feb 13 15:27:34.675275 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
Feb 13 15:27:34.687443 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
Feb 13 15:27:34.711490 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Feb 13 15:27:34.711652 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
Feb 13 15:27:34.714379 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
Feb 13 15:27:34.717136 systemd[1]: Reached target initrd.target - Initrd Default Target.
Feb 13 15:27:34.717784 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
Feb 13 15:27:34.728590 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
Feb 13 15:27:34.745118 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Feb 13 15:27:34.759442 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
Feb 13 15:27:34.770844 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
Feb 13 15:27:34.772344 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
Feb 13 15:27:34.774665 systemd[1]: Stopped target timers.target - Timer Units.
Feb 13 15:27:34.777051 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Feb 13 15:27:34.777234 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Feb 13 15:27:34.779794 systemd[1]: Stopped target initrd.target - Initrd Default Target.
Feb 13 15:27:34.781869 systemd[1]: Stopped target basic.target - Basic System.
Feb 13 15:27:34.784349 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
Feb 13 15:27:34.786727 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup.
Feb 13 15:27:34.788883 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
Feb 13 15:27:34.791042 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
Feb 13 15:27:34.793147 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems.
Feb 13 15:27:34.795487 systemd[1]: Stopped target sysinit.target - System Initialization.
Feb 13 15:27:34.797537 systemd[1]: Stopped target local-fs.target - Local File Systems.
Feb 13 15:27:34.799809 systemd[1]: Stopped target swap.target - Swaps.
Feb 13 15:27:34.801649 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Feb 13 15:27:34.801842 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook.
Feb 13 15:27:34.804155 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes.
Feb 13 15:27:34.805617 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Feb 13 15:27:34.807700 systemd[1]: clevis-luks-askpass.path: Deactivated successfully.
Feb 13 15:27:34.807907 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Feb 13 15:27:34.809964 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Feb 13 15:27:34.810115 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook.
Feb 13 15:27:34.812297 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
Feb 13 15:27:34.812461 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline).
Feb 13 15:27:34.814480 systemd[1]: Stopped target paths.target - Path Units.
Feb 13 15:27:34.816197 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Feb 13 15:27:34.820349 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Feb 13 15:27:34.821959 systemd[1]: Stopped target slices.target - Slice Units.
Feb 13 15:27:34.824128 systemd[1]: Stopped target sockets.target - Socket Units.
Feb 13 15:27:34.826073 systemd[1]: iscsid.socket: Deactivated successfully.
Feb 13 15:27:34.826200 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket.
Feb 13 15:27:34.828389 systemd[1]: iscsiuio.socket: Deactivated successfully.
Feb 13 15:27:34.828503 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Feb 13 15:27:34.831093 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
Feb 13 15:27:34.831239 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion.
Feb 13 15:27:34.833454 systemd[1]: ignition-files.service: Deactivated successfully.
Feb 13 15:27:34.833593 systemd[1]: Stopped ignition-files.service - Ignition (files).
Feb 13 15:27:34.842447 systemd[1]: Stopping ignition-mount.service - Ignition (mount)...
Feb 13 15:27:34.844161 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Feb 13 15:27:34.844344 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes.
Feb 13 15:27:34.847313 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot...
Feb 13 15:27:34.848548 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Feb 13 15:27:34.848724 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices.
Feb 13 15:27:34.856259 ignition[1015]: INFO : Ignition 2.20.0
Feb 13 15:27:34.856259 ignition[1015]: INFO : Stage: umount
Feb 13 15:27:34.856259 ignition[1015]: INFO : no configs at "/usr/lib/ignition/base.d"
Feb 13 15:27:34.856259 ignition[1015]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Feb 13 15:27:34.851648 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Feb 13 15:27:34.864072 ignition[1015]: INFO : umount: umount passed
Feb 13 15:27:34.864072 ignition[1015]: INFO : Ignition finished successfully
Feb 13 15:27:34.851967 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook.
Feb 13 15:27:34.858194 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Feb 13 15:27:34.858366 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons.
Feb 13 15:27:34.860946 systemd[1]: ignition-mount.service: Deactivated successfully.
Feb 13 15:27:34.861084 systemd[1]: Stopped ignition-mount.service - Ignition (mount).
Feb 13 15:27:34.865321 systemd[1]: Stopped target network.target - Network.
Feb 13 15:27:34.866749 systemd[1]: ignition-disks.service: Deactivated successfully.
Feb 13 15:27:34.866826 systemd[1]: Stopped ignition-disks.service - Ignition (disks).
Feb 13 15:27:34.869010 systemd[1]: ignition-kargs.service: Deactivated successfully.
Feb 13 15:27:34.869067 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs).
Feb 13 15:27:34.871018 systemd[1]: ignition-setup.service: Deactivated successfully.
Feb 13 15:27:34.871073 systemd[1]: Stopped ignition-setup.service - Ignition (setup).
Feb 13 15:27:34.873169 systemd[1]: ignition-setup-pre.service: Deactivated successfully.
Feb 13 15:27:34.873218 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup.
Feb 13 15:27:34.875304 systemd[1]: Stopping systemd-networkd.service - Network Configuration...
Feb 13 15:27:34.877320 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution...
Feb 13 15:27:34.880066 systemd[1]: sysroot-boot.mount: Deactivated successfully.
Feb 13 15:27:34.882340 systemd-networkd[785]: eth0: DHCPv6 lease lost
Feb 13 15:27:34.884627 systemd[1]: systemd-networkd.service: Deactivated successfully.
Feb 13 15:27:34.884820 systemd[1]: Stopped systemd-networkd.service - Network Configuration.
Feb 13 15:27:34.887290 systemd[1]: systemd-resolved.service: Deactivated successfully.
Feb 13 15:27:34.887446 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution.
Feb 13 15:27:34.891108 systemd[1]: systemd-networkd.socket: Deactivated successfully.
Feb 13 15:27:34.891166 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket.
Feb 13 15:27:34.902372 systemd[1]: Stopping network-cleanup.service - Network Cleanup...
Feb 13 15:27:34.903345 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
Feb 13 15:27:34.903416 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Feb 13 15:27:34.905615 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Feb 13 15:27:34.905682 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Feb 13 15:27:34.907892 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Feb 13 15:27:34.907958 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules.
Feb 13 15:27:34.910536 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
Feb 13 15:27:34.910601 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories.
Feb 13 15:27:34.912989 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files...
Feb 13 15:27:34.933475 systemd[1]: systemd-udevd.service: Deactivated successfully.
Feb 13 15:27:34.933665 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files.
Feb 13 15:27:34.936007 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Feb 13 15:27:34.936055 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket.
Feb 13 15:27:34.937737 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Feb 13 15:27:34.937788 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket.
Feb 13 15:27:34.938004 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Feb 13 15:27:34.938050 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook.
Feb 13 15:27:34.938919 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Feb 13 15:27:34.938963 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook.
Feb 13 15:27:34.939440 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Feb 13 15:27:34.939485 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Feb 13 15:27:35.007493 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database...
Feb 13 15:27:35.008064 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Feb 13 15:27:35.008151 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Feb 13 15:27:35.008651 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Feb 13 15:27:35.008717 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Feb 13 15:27:35.009468 systemd[1]: network-cleanup.service: Deactivated successfully.
Feb 13 15:27:35.009622 systemd[1]: Stopped network-cleanup.service - Network Cleanup.
Feb 13 15:27:35.029507 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Feb 13 15:27:35.029698 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database.
Feb 13 15:27:35.252398 systemd[1]: sysroot-boot.service: Deactivated successfully.
Feb 13 15:27:35.252537 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot.
Feb 13 15:27:35.254731 systemd[1]: Reached target initrd-switch-root.target - Switch Root.
Feb 13 15:27:35.255077 systemd[1]: initrd-setup-root.service: Deactivated successfully.
Feb 13 15:27:35.255133 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup.
Feb 13 15:27:35.280539 systemd[1]: Starting initrd-switch-root.service - Switch Root...
Feb 13 15:27:35.288636 systemd[1]: Switching root.
Feb 13 15:27:35.321878 systemd-journald[193]: Journal stopped
Feb 13 15:27:36.290353 systemd-journald[193]: Received SIGTERM from PID 1 (systemd).
Feb 13 15:27:36.290422 kernel: SELinux: policy capability network_peer_controls=1
Feb 13 15:27:36.290436 kernel: SELinux: policy capability open_perms=1
Feb 13 15:27:36.290451 kernel: SELinux: policy capability extended_socket_class=1
Feb 13 15:27:36.290462 kernel: SELinux: policy capability always_check_network=0
Feb 13 15:27:36.290474 kernel: SELinux: policy capability cgroup_seclabel=1
Feb 13 15:27:36.290486 kernel: SELinux: policy capability nnp_nosuid_transition=1
Feb 13 15:27:36.290497 kernel: SELinux: policy capability genfs_seclabel_symlinks=0
Feb 13 15:27:36.290512 kernel: SELinux: policy capability ioctl_skip_cloexec=0
Feb 13 15:27:36.290523 kernel: audit: type=1403 audit(1739460455.568:2): auid=4294967295 ses=4294967295 lsm=selinux res=1
Feb 13 15:27:36.290535 systemd[1]: Successfully loaded SELinux policy in 38.465ms.
Feb 13 15:27:36.290563 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 12.765ms.
Feb 13 15:27:36.290581 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Feb 13 15:27:36.290593 systemd[1]: Detected virtualization kvm.
Feb 13 15:27:36.290605 systemd[1]: Detected architecture x86-64.
Feb 13 15:27:36.290616 systemd[1]: Detected first boot.
Feb 13 15:27:36.290628 systemd[1]: Initializing machine ID from VM UUID.
Feb 13 15:27:36.290640 zram_generator::config[1059]: No configuration found.
Feb 13 15:27:36.290658 systemd[1]: Populated /etc with preset unit settings.
Feb 13 15:27:36.290670 systemd[1]: initrd-switch-root.service: Deactivated successfully.
Feb 13 15:27:36.290684 systemd[1]: Stopped initrd-switch-root.service - Switch Root.
Feb 13 15:27:36.290699 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1.
Feb 13 15:27:36.290711 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config.
Feb 13 15:27:36.290727 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run.
Feb 13 15:27:36.290750 systemd[1]: Created slice system-getty.slice - Slice /system/getty.
Feb 13 15:27:36.290765 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe.
Feb 13 15:27:36.290779 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty.
Feb 13 15:27:36.290791 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit.
Feb 13 15:27:36.290803 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck.
Feb 13 15:27:36.290815 systemd[1]: Created slice user.slice - User and Session Slice.
Feb 13 15:27:36.290827 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Feb 13 15:27:36.290840 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Feb 13 15:27:36.290852 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch.
Feb 13 15:27:36.290864 systemd[1]: Set up automount boot.automount - Boot partition Automount Point.
Feb 13 15:27:36.290878 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point.
Feb 13 15:27:36.290890 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Feb 13 15:27:36.290903 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0...
Feb 13 15:27:36.290915 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Feb 13 15:27:36.290926 systemd[1]: Stopped target initrd-switch-root.target - Switch Root.
Feb 13 15:27:36.290938 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems.
Feb 13 15:27:36.290950 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System.
Feb 13 15:27:36.290964 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes.
Feb 13 15:27:36.290976 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Feb 13 15:27:36.290988 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Feb 13 15:27:36.291000 systemd[1]: Reached target slices.target - Slice Units.
Feb 13 15:27:36.291012 systemd[1]: Reached target swap.target - Swaps.
Feb 13 15:27:36.291025 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes.
Feb 13 15:27:36.291039 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket.
Feb 13 15:27:36.291050 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Feb 13 15:27:36.291062 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Feb 13 15:27:36.291074 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Feb 13 15:27:36.291089 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket.
Feb 13 15:27:36.291101 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System...
Feb 13 15:27:36.291112 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System...
Feb 13 15:27:36.291124 systemd[1]: Mounting media.mount - External Media Directory...
Feb 13 15:27:36.291136 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Feb 13 15:27:36.291148 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System...
Feb 13 15:27:36.291160 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System...
Feb 13 15:27:36.291171 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp...
Feb 13 15:27:36.291186 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
Feb 13 15:27:36.291198 systemd[1]: Reached target machines.target - Containers.
Feb 13 15:27:36.291210 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files...
Feb 13 15:27:36.291222 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Feb 13 15:27:36.291234 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Feb 13 15:27:36.291257 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs...
Feb 13 15:27:36.291270 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Feb 13 15:27:36.291282 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Feb 13 15:27:36.291294 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Feb 13 15:27:36.291309 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse...
Feb 13 15:27:36.291322 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Feb 13 15:27:36.291334 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
Feb 13 15:27:36.291346 systemd[1]: systemd-fsck-root.service: Deactivated successfully.
Feb 13 15:27:36.291358 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device.
Feb 13 15:27:36.291370 systemd[1]: systemd-fsck-usr.service: Deactivated successfully.
Feb 13 15:27:36.291382 systemd[1]: Stopped systemd-fsck-usr.service.
Feb 13 15:27:36.291393 kernel: loop: module loaded
Feb 13 15:27:36.291407 systemd[1]: Starting systemd-journald.service - Journal Service...
Feb 13 15:27:36.291418 kernel: fuse: init (API version 7.39)
Feb 13 15:27:36.291430 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Feb 13 15:27:36.291442 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Feb 13 15:27:36.291454 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems...
Feb 13 15:27:36.291465 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Feb 13 15:27:36.291477 systemd[1]: verity-setup.service: Deactivated successfully.
Feb 13 15:27:36.291488 systemd[1]: Stopped verity-setup.service.
Feb 13 15:27:36.291501 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Feb 13 15:27:36.291515 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System.
Feb 13 15:27:36.291545 systemd-journald[1129]: Collecting audit messages is disabled.
Feb 13 15:27:36.291566 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System.
Feb 13 15:27:36.291579 systemd-journald[1129]: Journal started
Feb 13 15:27:36.291601 systemd-journald[1129]: Runtime Journal (/run/log/journal/9d578316547f4125a57f3bedfa8c2a86) is 6.0M, max 48.3M, 42.2M free.
Feb 13 15:27:36.066342 systemd[1]: Queued start job for default target multi-user.target.
Feb 13 15:27:36.082176 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6.
Feb 13 15:27:36.082615 systemd[1]: systemd-journald.service: Deactivated successfully.
Feb 13 15:27:36.293856 systemd[1]: Started systemd-journald.service - Journal Service.
Feb 13 15:27:36.294718 systemd[1]: Mounted media.mount - External Media Directory.
Feb 13 15:27:36.296041 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System.
Feb 13 15:27:36.297324 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System.
Feb 13 15:27:36.298579 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp.
Feb 13 15:27:36.299840 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files.
Feb 13 15:27:36.301482 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Feb 13 15:27:36.303152 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Feb 13 15:27:36.303337 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs.
Feb 13 15:27:36.305007 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Feb 13 15:27:36.305177 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Feb 13 15:27:36.306688 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Feb 13 15:27:36.306972 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Feb 13 15:27:36.308665 systemd[1]: modprobe@fuse.service: Deactivated successfully.
Feb 13 15:27:36.308875 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse.
Feb 13 15:27:36.310276 kernel: ACPI: bus type drm_connector registered
Feb 13 15:27:36.310872 systemd[1]: modprobe@loop.service: Deactivated successfully.
Feb 13 15:27:36.311043 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Feb 13 15:27:36.312968 systemd[1]: modprobe@drm.service: Deactivated successfully.
Feb 13 15:27:36.313138 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Feb 13 15:27:36.314653 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Feb 13 15:27:36.316276 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Feb 13 15:27:36.317964 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems.
Feb 13 15:27:36.334926 systemd[1]: Reached target network-pre.target - Preparation for Network.
Feb 13 15:27:36.346352 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System...
Feb 13 15:27:36.348684 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System...
Feb 13 15:27:36.349989 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
Feb 13 15:27:36.350090 systemd[1]: Reached target local-fs.target - Local File Systems.
Feb 13 15:27:36.352202 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink).
Feb 13 15:27:36.354671 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown...
Feb 13 15:27:36.356947 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache...
Feb 13 15:27:36.358227 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Feb 13 15:27:36.360409 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database...
Feb 13 15:27:36.364526 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage...
Feb 13 15:27:36.365863 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Feb 13 15:27:36.367549 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed...
Feb 13 15:27:36.368801 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Feb 13 15:27:36.370402 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Feb 13 15:27:36.376595 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/...
Feb 13 15:27:36.381446 systemd[1]: Starting systemd-sysusers.service - Create System Users...
Feb 13 15:27:36.385692 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System.
Feb 13 15:27:36.387149 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System.
Feb 13 15:27:36.389664 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown.
Feb 13 15:27:36.396750 systemd-journald[1129]: Time spent on flushing to /var/log/journal/9d578316547f4125a57f3bedfa8c2a86 is 13.898ms for 1026 entries.
Feb 13 15:27:36.396750 systemd-journald[1129]: System Journal (/var/log/journal/9d578316547f4125a57f3bedfa8c2a86) is 8.0M, max 195.6M, 187.6M free.
Feb 13 15:27:36.520145 systemd-journald[1129]: Received client request to flush runtime journal.
Feb 13 15:27:36.520406 kernel: loop0: detected capacity change from 0 to 211296
Feb 13 15:27:36.520496 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher
Feb 13 15:27:36.409517 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed.
Feb 13 15:27:36.411653 systemd[1]: Reached target first-boot-complete.target - First Boot Complete.
Feb 13 15:27:36.415443 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk...
Feb 13 15:27:36.416962 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Feb 13 15:27:36.421371 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization...
Feb 13 15:27:36.431517 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Feb 13 15:27:36.512622 udevadm[1184]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation-early.service, lvm2-activation.service not to pull it in.
Feb 13 15:27:36.524922 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage.
Feb 13 15:27:36.536845 systemd[1]: etc-machine\x2did.mount: Deactivated successfully.
Feb 13 15:27:36.538649 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk.
Feb 13 15:27:36.540717 systemd[1]: Finished systemd-sysusers.service - Create System Users.
Feb 13 15:27:36.550281 kernel: loop1: detected capacity change from 0 to 138184 Feb 13 15:27:36.553418 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Feb 13 15:27:36.577818 systemd-tmpfiles[1192]: ACLs are not supported, ignoring. Feb 13 15:27:36.577839 systemd-tmpfiles[1192]: ACLs are not supported, ignoring. Feb 13 15:27:36.584963 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Feb 13 15:27:36.585278 kernel: loop2: detected capacity change from 0 to 140992 Feb 13 15:27:36.632286 kernel: loop3: detected capacity change from 0 to 211296 Feb 13 15:27:36.641277 kernel: loop4: detected capacity change from 0 to 138184 Feb 13 15:27:36.651291 kernel: loop5: detected capacity change from 0 to 140992 Feb 13 15:27:36.660341 (sd-merge)[1200]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes'. Feb 13 15:27:36.660956 (sd-merge)[1200]: Merged extensions into '/usr'. Feb 13 15:27:36.667007 systemd[1]: Reloading requested from client PID 1173 ('systemd-sysext') (unit systemd-sysext.service)... Feb 13 15:27:36.667025 systemd[1]: Reloading... Feb 13 15:27:36.748275 zram_generator::config[1233]: No configuration found. Feb 13 15:27:36.857384 ldconfig[1168]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Feb 13 15:27:36.882587 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Feb 13 15:27:36.936941 systemd[1]: Reloading finished in 269 ms. Feb 13 15:27:36.986112 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. Feb 13 15:27:36.987834 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. Feb 13 15:27:37.001409 systemd[1]: Starting ensure-sysext.service... 
Feb 13 15:27:37.003563 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Feb 13 15:27:37.009114 systemd[1]: Reloading requested from client PID 1263 ('systemctl') (unit ensure-sysext.service)... Feb 13 15:27:37.009134 systemd[1]: Reloading... Feb 13 15:27:37.046201 systemd-tmpfiles[1264]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Feb 13 15:27:37.046762 systemd-tmpfiles[1264]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. Feb 13 15:27:37.048597 systemd-tmpfiles[1264]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Feb 13 15:27:37.051798 systemd-tmpfiles[1264]: ACLs are not supported, ignoring. Feb 13 15:27:37.051983 systemd-tmpfiles[1264]: ACLs are not supported, ignoring. Feb 13 15:27:37.057047 systemd-tmpfiles[1264]: Detected autofs mount point /boot during canonicalization of boot. Feb 13 15:27:37.057204 systemd-tmpfiles[1264]: Skipping /boot Feb 13 15:27:37.078988 systemd-tmpfiles[1264]: Detected autofs mount point /boot during canonicalization of boot. Feb 13 15:27:37.079132 systemd-tmpfiles[1264]: Skipping /boot Feb 13 15:27:37.132229 zram_generator::config[1293]: No configuration found. Feb 13 15:27:37.249523 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Feb 13 15:27:37.304697 systemd[1]: Reloading finished in 295 ms. Feb 13 15:27:37.331278 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. Feb 13 15:27:37.342863 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Feb 13 15:27:37.352448 systemd[1]: Starting audit-rules.service - Load Audit Rules... 
Feb 13 15:27:37.355243 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... Feb 13 15:27:37.357861 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... Feb 13 15:27:37.362563 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Feb 13 15:27:37.366547 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Feb 13 15:27:37.373333 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... Feb 13 15:27:37.376996 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Feb 13 15:27:37.377165 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Feb 13 15:27:37.378361 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Feb 13 15:27:37.381986 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Feb 13 15:27:37.386506 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Feb 13 15:27:37.387661 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Feb 13 15:27:37.394172 systemd[1]: Starting systemd-userdbd.service - User Database Manager... Feb 13 15:27:37.395600 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Feb 13 15:27:37.396941 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Feb 13 15:27:37.397726 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Feb 13 15:27:37.399895 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Feb 13 15:27:37.400834 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. 
Feb 13 15:27:37.402861 systemd[1]: modprobe@loop.service: Deactivated successfully. Feb 13 15:27:37.403558 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Feb 13 15:27:37.409082 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Feb 13 15:27:37.415488 systemd-udevd[1334]: Using default interface naming scheme 'v255'. Feb 13 15:27:37.419491 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. Feb 13 15:27:37.423611 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Feb 13 15:27:37.423844 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Feb 13 15:27:37.431740 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Feb 13 15:27:37.433496 augenrules[1364]: No rules Feb 13 15:27:37.436118 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Feb 13 15:27:37.439544 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Feb 13 15:27:37.441111 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Feb 13 15:27:37.445755 systemd[1]: Starting systemd-update-done.service - Update is Completed... Feb 13 15:27:37.447402 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Feb 13 15:27:37.448319 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Feb 13 15:27:37.451874 systemd[1]: audit-rules.service: Deactivated successfully. Feb 13 15:27:37.452193 systemd[1]: Finished audit-rules.service - Load Audit Rules. Feb 13 15:27:37.454555 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. 
Feb 13 15:27:37.454897 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Feb 13 15:27:37.458931 systemd[1]: Started systemd-userdbd.service - User Database Manager. Feb 13 15:27:37.461374 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Feb 13 15:27:37.461627 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Feb 13 15:27:37.476532 systemd[1]: modprobe@loop.service: Deactivated successfully. Feb 13 15:27:37.476845 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Feb 13 15:27:37.479165 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Feb 13 15:27:37.481751 systemd[1]: Finished systemd-update-done.service - Update is Completed. Feb 13 15:27:37.490962 systemd[1]: Finished ensure-sysext.service. Feb 13 15:27:37.497632 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Feb 13 15:27:37.503425 systemd[1]: Starting audit-rules.service - Load Audit Rules... Feb 13 15:27:37.504774 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Feb 13 15:27:37.506216 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Feb 13 15:27:37.510978 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Feb 13 15:27:37.516519 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Feb 13 15:27:37.518519 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Feb 13 15:27:37.530348 augenrules[1403]: /sbin/augenrules: No change Feb 13 15:27:37.540069 augenrules[1425]: No rules Feb 13 15:27:37.539505 systemd[1]: Starting systemd-networkd.service - Network Configuration... Feb 13 15:27:37.545847 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization... 
Feb 13 15:27:37.547628 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Feb 13 15:27:37.547659 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Feb 13 15:27:37.548309 systemd[1]: audit-rules.service: Deactivated successfully. Feb 13 15:27:37.548518 systemd[1]: Finished audit-rules.service - Load Audit Rules. Feb 13 15:27:37.550644 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Feb 13 15:27:37.550896 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Feb 13 15:27:37.553656 systemd[1]: modprobe@drm.service: Deactivated successfully. Feb 13 15:27:37.553901 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Feb 13 15:27:37.556676 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Feb 13 15:27:37.556879 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Feb 13 15:27:37.568985 systemd-resolved[1332]: Positive Trust Anchors: Feb 13 15:27:37.569234 systemd-resolved[1332]: . 
IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Feb 13 15:27:37.569509 systemd-resolved[1332]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Feb 13 15:27:37.569634 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped. Feb 13 15:27:37.571363 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Feb 13 15:27:37.571436 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Feb 13 15:27:37.576274 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 37 scanned by (udev-worker) (1376) Feb 13 15:27:37.646815 systemd-resolved[1332]: Defaulting to hostname 'linux'. Feb 13 15:27:37.655234 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Feb 13 15:27:37.656649 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Feb 13 15:27:37.712441 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. Feb 13 15:27:37.712939 systemd-networkd[1413]: lo: Link UP Feb 13 15:27:37.713223 systemd-networkd[1413]: lo: Gained carrier Feb 13 15:27:37.716013 systemd-networkd[1413]: Enumeration completed Feb 13 15:27:37.716615 systemd-networkd[1413]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. 
Feb 13 15:27:37.716619 systemd-networkd[1413]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Feb 13 15:27:37.718322 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input2 Feb 13 15:27:37.718418 systemd-networkd[1413]: eth0: Link UP Feb 13 15:27:37.718551 systemd-networkd[1413]: eth0: Gained carrier Feb 13 15:27:37.718613 systemd-networkd[1413]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Feb 13 15:27:37.720514 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... Feb 13 15:27:37.722450 systemd[1]: Started systemd-networkd.service - Network Configuration. Feb 13 15:27:37.724676 systemd[1]: Reached target network.target - Network. Feb 13 15:27:37.729431 kernel: i801_smbus 0000:00:1f.3: Enabling SMBus device Feb 13 15:27:37.731280 kernel: i801_smbus 0000:00:1f.3: SMBus using PCI interrupt Feb 13 15:27:37.731464 kernel: i2c i2c-0: 1/1 memory slots populated (from DMI) Feb 13 15:27:37.731653 kernel: i2c i2c-0: Memory type 0x07 not supported yet, not instantiating SPD Feb 13 15:27:37.734552 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Feb 13 15:27:37.739257 systemd-networkd[1413]: eth0: DHCPv4 address 10.0.0.78/16, gateway 10.0.0.1 acquired from 10.0.0.1 Feb 13 15:27:37.744386 kernel: ACPI: button: Power Button [PWRF] Feb 13 15:27:37.743869 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization. Feb 13 15:27:38.662778 systemd-timesyncd[1430]: Contacted time server 10.0.0.1:123 (10.0.0.1). Feb 13 15:27:38.662829 systemd-timesyncd[1430]: Initial clock synchronization to Thu 2025-02-13 15:27:38.662678 UTC. Feb 13 15:27:38.662877 systemd-resolved[1332]: Clock change detected. Flushing caches. Feb 13 15:27:38.663788 systemd[1]: Reached target time-set.target - System Time Set. 
Feb 13 15:27:38.673772 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input3 Feb 13 15:27:38.676508 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Feb 13 15:27:38.709217 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Feb 13 15:27:38.721971 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Feb 13 15:27:38.722734 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Feb 13 15:27:38.779853 kernel: mousedev: PS/2 mouse device common for all mice Feb 13 15:27:38.783056 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Feb 13 15:27:38.792097 kernel: kvm_amd: TSC scaling supported Feb 13 15:27:38.792175 kernel: kvm_amd: Nested Virtualization enabled Feb 13 15:27:38.792197 kernel: kvm_amd: Nested Paging enabled Feb 13 15:27:38.793090 kernel: kvm_amd: LBR virtualization supported Feb 13 15:27:38.793106 kernel: kvm_amd: Virtual VMLOAD VMSAVE supported Feb 13 15:27:38.794118 kernel: kvm_amd: Virtual GIF supported Feb 13 15:27:38.814768 kernel: EDAC MC: Ver: 3.0.0 Feb 13 15:27:38.849522 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization. Feb 13 15:27:38.867017 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes... Feb 13 15:27:38.868707 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Feb 13 15:27:38.875934 lvm[1460]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Feb 13 15:27:38.917251 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes. Feb 13 15:27:38.918886 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Feb 13 15:27:38.920078 systemd[1]: Reached target sysinit.target - System Initialization. 
Feb 13 15:27:38.921277 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Feb 13 15:27:38.922734 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Feb 13 15:27:38.924432 systemd[1]: Started logrotate.timer - Daily rotation of log files. Feb 13 15:27:38.925639 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Feb 13 15:27:38.926910 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Feb 13 15:27:38.928212 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Feb 13 15:27:38.928235 systemd[1]: Reached target paths.target - Path Units. Feb 13 15:27:38.929142 systemd[1]: Reached target timers.target - Timer Units. Feb 13 15:27:38.930986 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Feb 13 15:27:38.934153 systemd[1]: Starting docker.socket - Docker Socket for the API... Feb 13 15:27:38.946175 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Feb 13 15:27:38.949462 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes... Feb 13 15:27:38.951503 systemd[1]: Listening on docker.socket - Docker Socket for the API. Feb 13 15:27:38.952940 systemd[1]: Reached target sockets.target - Socket Units. Feb 13 15:27:38.954132 systemd[1]: Reached target basic.target - Basic System. Feb 13 15:27:38.955384 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Feb 13 15:27:38.955421 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Feb 13 15:27:38.956647 systemd[1]: Starting containerd.service - containerd container runtime... Feb 13 15:27:38.959339 systemd[1]: Starting dbus.service - D-Bus System Message Bus... 
Feb 13 15:27:38.961828 lvm[1465]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Feb 13 15:27:38.963101 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Feb 13 15:27:38.966421 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Feb 13 15:27:38.967710 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Feb 13 15:27:38.973368 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Feb 13 15:27:38.976690 jq[1468]: false Feb 13 15:27:38.976986 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Feb 13 15:27:38.980894 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Feb 13 15:27:38.988709 extend-filesystems[1469]: Found loop3 Feb 13 15:27:38.990077 extend-filesystems[1469]: Found loop4 Feb 13 15:27:38.990077 extend-filesystems[1469]: Found loop5 Feb 13 15:27:38.990077 extend-filesystems[1469]: Found sr0 Feb 13 15:27:38.990077 extend-filesystems[1469]: Found vda Feb 13 15:27:38.990077 extend-filesystems[1469]: Found vda1 Feb 13 15:27:38.990077 extend-filesystems[1469]: Found vda2 Feb 13 15:27:38.990077 extend-filesystems[1469]: Found vda3 Feb 13 15:27:38.990077 extend-filesystems[1469]: Found usr Feb 13 15:27:38.990077 extend-filesystems[1469]: Found vda4 Feb 13 15:27:38.990077 extend-filesystems[1469]: Found vda6 Feb 13 15:27:38.990077 extend-filesystems[1469]: Found vda7 Feb 13 15:27:38.990077 extend-filesystems[1469]: Found vda9 Feb 13 15:27:38.990077 extend-filesystems[1469]: Checking size of /dev/vda9 Feb 13 15:27:39.043508 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 37 scanned by (udev-worker) (1384) Feb 13 15:27:39.043544 kernel: EXT4-fs (vda9): resizing filesystem from 553472 to 1864699 blocks Feb 13 15:27:39.043559 extend-filesystems[1469]: Resized partition 
/dev/vda9 Feb 13 15:27:38.991326 dbus-daemon[1467]: [system] SELinux support is enabled Feb 13 15:27:38.992996 systemd[1]: Starting systemd-logind.service - User Login Management... Feb 13 15:27:39.045333 extend-filesystems[1488]: resize2fs 1.47.1 (20-May-2024) Feb 13 15:27:38.996605 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Feb 13 15:27:38.997151 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Feb 13 15:27:39.051473 update_engine[1483]: I20250213 15:27:39.018940 1483 main.cc:92] Flatcar Update Engine starting Feb 13 15:27:39.051473 update_engine[1483]: I20250213 15:27:39.022120 1483 update_check_scheduler.cc:74] Next update check in 5m8s Feb 13 15:27:39.002542 systemd[1]: Starting update-engine.service - Update Engine... Feb 13 15:27:39.051877 jq[1486]: true Feb 13 15:27:39.006848 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Feb 13 15:27:39.013244 systemd[1]: Started dbus.service - D-Bus System Message Bus. Feb 13 15:27:39.052244 jq[1491]: true Feb 13 15:27:39.019275 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes. Feb 13 15:27:39.029942 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Feb 13 15:27:39.030213 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Feb 13 15:27:39.030613 systemd[1]: motdgen.service: Deactivated successfully. Feb 13 15:27:39.030902 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Feb 13 15:27:39.034312 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Feb 13 15:27:39.034571 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. 
Feb 13 15:27:39.057852 kernel: EXT4-fs (vda9): resized filesystem to 1864699 Feb 13 15:27:39.055660 (ntainerd)[1495]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Feb 13 15:27:39.080732 systemd-logind[1476]: Watching system buttons on /dev/input/event1 (Power Button) Feb 13 15:27:39.171413 extend-filesystems[1488]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required Feb 13 15:27:39.171413 extend-filesystems[1488]: old_desc_blocks = 1, new_desc_blocks = 1 Feb 13 15:27:39.171413 extend-filesystems[1488]: The filesystem on /dev/vda9 is now 1864699 (4k) blocks long. Feb 13 15:27:39.180606 bash[1517]: Updated "/home/core/.ssh/authorized_keys" Feb 13 15:27:39.080804 systemd-logind[1476]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard) Feb 13 15:27:39.180791 extend-filesystems[1469]: Resized filesystem in /dev/vda9 Feb 13 15:27:39.082412 systemd[1]: Started update-engine.service - Update Engine. Feb 13 15:27:39.083839 systemd-logind[1476]: New seat seat0. Feb 13 15:27:39.085460 systemd[1]: extend-filesystems.service: Deactivated successfully. Feb 13 15:27:39.085760 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Feb 13 15:27:39.090766 systemd[1]: Started systemd-logind.service - User Login Management. Feb 13 15:27:39.094511 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Feb 13 15:27:39.094791 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Feb 13 15:27:39.096651 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). 
Feb 13 15:27:39.096808 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Feb 13 15:27:39.171430 systemd[1]: Started locksmithd.service - Cluster reboot manager. Feb 13 15:27:39.183799 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Feb 13 15:27:39.202007 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Feb 13 15:27:39.205908 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met. Feb 13 15:27:39.210230 locksmithd[1519]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Feb 13 15:27:39.434519 sshd_keygen[1489]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Feb 13 15:27:39.515897 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Feb 13 15:27:39.526159 systemd[1]: Starting issuegen.service - Generate /run/issue... Feb 13 15:27:39.528952 systemd[1]: Started sshd@0-10.0.0.78:22-10.0.0.1:51316.service - OpenSSH per-connection server daemon (10.0.0.1:51316). Feb 13 15:27:39.533484 systemd[1]: issuegen.service: Deactivated successfully. Feb 13 15:27:39.533722 systemd[1]: Finished issuegen.service - Generate /run/issue. Feb 13 15:27:39.540101 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Feb 13 15:27:39.564191 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Feb 13 15:27:39.599312 systemd[1]: Started getty@tty1.service - Getty on tty1. Feb 13 15:27:39.602004 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0. Feb 13 15:27:39.603467 systemd[1]: Reached target getty.target - Login Prompts. 
Feb 13 15:27:39.620039 containerd[1495]: time="2025-02-13T15:27:39.619932024Z" level=info msg="starting containerd" revision=9b2ad7760328148397346d10c7b2004271249db4 version=v1.7.23 Feb 13 15:27:39.629428 sshd[1541]: Accepted publickey for core from 10.0.0.1 port 51316 ssh2: RSA SHA256:LuUo/H8l6HTiusWzlGV41X7Ei0zLr8Ig/b+bY+ekGqE Feb 13 15:27:39.631635 sshd-session[1541]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 15:27:39.641295 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Feb 13 15:27:39.646172 containerd[1495]: time="2025-02-13T15:27:39.646100239Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 Feb 13 15:27:39.648754 containerd[1495]: time="2025-02-13T15:27:39.647732520Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.71-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Feb 13 15:27:39.648754 containerd[1495]: time="2025-02-13T15:27:39.647789868Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 Feb 13 15:27:39.648754 containerd[1495]: time="2025-02-13T15:27:39.647809875Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 Feb 13 15:27:39.648754 containerd[1495]: time="2025-02-13T15:27:39.647990834Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1 Feb 13 15:27:39.648754 containerd[1495]: time="2025-02-13T15:27:39.648012465Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1 Feb 13 15:27:39.648754 containerd[1495]: time="2025-02-13T15:27:39.648085542Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." 
error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1 Feb 13 15:27:39.648754 containerd[1495]: time="2025-02-13T15:27:39.648097675Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 Feb 13 15:27:39.648754 containerd[1495]: time="2025-02-13T15:27:39.648303160Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Feb 13 15:27:39.648754 containerd[1495]: time="2025-02-13T15:27:39.648318289Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 Feb 13 15:27:39.648754 containerd[1495]: time="2025-02-13T15:27:39.648331644Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1 Feb 13 15:27:39.648754 containerd[1495]: time="2025-02-13T15:27:39.648340891Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 Feb 13 15:27:39.648967 containerd[1495]: time="2025-02-13T15:27:39.648434346Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 Feb 13 15:27:39.648967 containerd[1495]: time="2025-02-13T15:27:39.648693162Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 Feb 13 15:27:39.649114 containerd[1495]: time="2025-02-13T15:27:39.649096428Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." 
error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Feb 13 15:27:39.649181 containerd[1495]: time="2025-02-13T15:27:39.649157132Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 Feb 13 15:27:39.649328 containerd[1495]: time="2025-02-13T15:27:39.649312633Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1 Feb 13 15:27:39.649433 containerd[1495]: time="2025-02-13T15:27:39.649416769Z" level=info msg="metadata content store policy set" policy=shared Feb 13 15:27:39.650996 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Feb 13 15:27:39.653966 systemd-logind[1476]: New session 1 of user core. Feb 13 15:27:39.695607 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Feb 13 15:27:39.712004 systemd[1]: Starting user@500.service - User Manager for UID 500... Feb 13 15:27:39.716152 (systemd)[1554]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Feb 13 15:27:39.748637 containerd[1495]: time="2025-02-13T15:27:39.748532666Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 Feb 13 15:27:39.748637 containerd[1495]: time="2025-02-13T15:27:39.748604090Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 Feb 13 15:27:39.748637 containerd[1495]: time="2025-02-13T15:27:39.748623086Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1 Feb 13 15:27:39.748815 containerd[1495]: time="2025-02-13T15:27:39.748650297Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." 
type=io.containerd.streaming.v1 Feb 13 15:27:39.748815 containerd[1495]: time="2025-02-13T15:27:39.748670986Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 Feb 13 15:27:39.748912 containerd[1495]: time="2025-02-13T15:27:39.748884155Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 Feb 13 15:27:39.749484 containerd[1495]: time="2025-02-13T15:27:39.749442773Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 Feb 13 15:27:39.749630 containerd[1495]: time="2025-02-13T15:27:39.749598124Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2 Feb 13 15:27:39.749670 containerd[1495]: time="2025-02-13T15:27:39.749627500Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1 Feb 13 15:27:39.749670 containerd[1495]: time="2025-02-13T15:27:39.749655051Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1 Feb 13 15:27:39.749732 containerd[1495]: time="2025-02-13T15:27:39.749680058Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 Feb 13 15:27:39.749732 containerd[1495]: time="2025-02-13T15:27:39.749703953Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 Feb 13 15:27:39.749732 containerd[1495]: time="2025-02-13T15:27:39.749727607Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 Feb 13 15:27:39.749825 containerd[1495]: time="2025-02-13T15:27:39.749765268Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." 
type=io.containerd.service.v1 Feb 13 15:27:39.749825 containerd[1495]: time="2025-02-13T15:27:39.749791467Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 Feb 13 15:27:39.749825 containerd[1495]: time="2025-02-13T15:27:39.749815402Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 Feb 13 15:27:39.749910 containerd[1495]: time="2025-02-13T15:27:39.749835960Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 Feb 13 15:27:39.749910 containerd[1495]: time="2025-02-13T15:27:39.749863492Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 Feb 13 15:27:39.749910 containerd[1495]: time="2025-02-13T15:27:39.749899950Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 Feb 13 15:27:39.749984 containerd[1495]: time="2025-02-13T15:27:39.749927662Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 Feb 13 15:27:39.749984 containerd[1495]: time="2025-02-13T15:27:39.749950135Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 Feb 13 15:27:39.749984 containerd[1495]: time="2025-02-13T15:27:39.749972126Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 Feb 13 15:27:39.750075 containerd[1495]: time="2025-02-13T15:27:39.749993776Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 Feb 13 15:27:39.750075 containerd[1495]: time="2025-02-13T15:27:39.750012512Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 Feb 13 15:27:39.750075 containerd[1495]: time="2025-02-13T15:27:39.750033761Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." 
type=io.containerd.grpc.v1 Feb 13 15:27:39.750075 containerd[1495]: time="2025-02-13T15:27:39.750055352Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 Feb 13 15:27:39.750194 containerd[1495]: time="2025-02-13T15:27:39.750084647Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1 Feb 13 15:27:39.750194 containerd[1495]: time="2025-02-13T15:27:39.750110415Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1 Feb 13 15:27:39.750194 containerd[1495]: time="2025-02-13T15:27:39.750132216Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 Feb 13 15:27:39.750194 containerd[1495]: time="2025-02-13T15:27:39.750152995Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1 Feb 13 15:27:39.750194 containerd[1495]: time="2025-02-13T15:27:39.750185225Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 Feb 13 15:27:39.750316 containerd[1495]: time="2025-02-13T15:27:39.750214370Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1 Feb 13 15:27:39.750316 containerd[1495]: time="2025-02-13T15:27:39.750248033Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1 Feb 13 15:27:39.750316 containerd[1495]: time="2025-02-13T15:27:39.750272960Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 Feb 13 15:27:39.750316 containerd[1495]: time="2025-02-13T15:27:39.750294040Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Feb 13 15:27:39.750539 containerd[1495]: time="2025-02-13T15:27:39.750500617Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." 
type=io.containerd.tracing.processor.v1 Feb 13 15:27:39.750570 containerd[1495]: time="2025-02-13T15:27:39.750548587Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1 Feb 13 15:27:39.750607 containerd[1495]: time="2025-02-13T15:27:39.750567112Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Feb 13 15:27:39.750607 containerd[1495]: time="2025-02-13T15:27:39.750589924Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1 Feb 13 15:27:39.750662 containerd[1495]: time="2025-02-13T15:27:39.750610353Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 Feb 13 15:27:39.750662 containerd[1495]: time="2025-02-13T15:27:39.750641792Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1 Feb 13 15:27:39.750710 containerd[1495]: time="2025-02-13T15:27:39.750665556Z" level=info msg="NRI interface is disabled by configuration." Feb 13 15:27:39.750710 containerd[1495]: time="2025-02-13T15:27:39.750680905Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." 
type=io.containerd.grpc.v1 Feb 13 15:27:39.751176 containerd[1495]: time="2025-02-13T15:27:39.751106203Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false 
UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Feb 13 15:27:39.751308 containerd[1495]: time="2025-02-13T15:27:39.751190000Z" level=info msg="Connect containerd service" Feb 13 15:27:39.751308 containerd[1495]: time="2025-02-13T15:27:39.751229814Z" level=info msg="using legacy CRI server" Feb 13 15:27:39.751308 containerd[1495]: time="2025-02-13T15:27:39.751239362Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Feb 13 15:27:39.751460 containerd[1495]: time="2025-02-13T15:27:39.751433176Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Feb 13 15:27:39.752348 containerd[1495]: time="2025-02-13T15:27:39.752314418Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Feb 13 15:27:39.752681 containerd[1495]: time="2025-02-13T15:27:39.752637805Z" level=info msg="Start subscribing containerd event" Feb 13 15:27:39.752725 containerd[1495]: time="2025-02-13T15:27:39.752692658Z" level=info msg="Start recovering state" Feb 13 15:27:39.752771 containerd[1495]: time="2025-02-13T15:27:39.752719618Z" level=info msg=serving... 
address=/run/containerd/containerd.sock.ttrpc Feb 13 15:27:39.752848 containerd[1495]: time="2025-02-13T15:27:39.752809637Z" level=info msg="Start event monitor" Feb 13 15:27:39.752903 containerd[1495]: time="2025-02-13T15:27:39.752860122Z" level=info msg="Start snapshots syncer" Feb 13 15:27:39.752903 containerd[1495]: time="2025-02-13T15:27:39.752895588Z" level=info msg="Start cni network conf syncer for default" Feb 13 15:27:39.752903 containerd[1495]: time="2025-02-13T15:27:39.752906799Z" level=info msg="Start streaming server" Feb 13 15:27:39.753182 containerd[1495]: time="2025-02-13T15:27:39.753146198Z" level=info msg=serving... address=/run/containerd/containerd.sock Feb 13 15:27:39.753383 systemd[1]: Started containerd.service - containerd container runtime. Feb 13 15:27:39.755044 containerd[1495]: time="2025-02-13T15:27:39.754173124Z" level=info msg="containerd successfully booted in 0.136628s" Feb 13 15:27:39.847026 systemd[1554]: Queued start job for default target default.target. Feb 13 15:27:39.857085 systemd[1554]: Created slice app.slice - User Application Slice. Feb 13 15:27:39.857112 systemd[1554]: Reached target paths.target - Paths. Feb 13 15:27:39.857126 systemd[1554]: Reached target timers.target - Timers. Feb 13 15:27:39.858784 systemd[1554]: Starting dbus.socket - D-Bus User Message Bus Socket... Feb 13 15:27:39.873605 systemd[1554]: Listening on dbus.socket - D-Bus User Message Bus Socket. Feb 13 15:27:39.873761 systemd[1554]: Reached target sockets.target - Sockets. Feb 13 15:27:39.873777 systemd[1554]: Reached target basic.target - Basic System. Feb 13 15:27:39.873818 systemd[1554]: Reached target default.target - Main User Target. Feb 13 15:27:39.873854 systemd[1554]: Startup finished in 149ms. Feb 13 15:27:39.874399 systemd[1]: Started user@500.service - User Manager for UID 500. Feb 13 15:27:39.877134 systemd[1]: Started session-1.scope - Session 1 of User core. 
Feb 13 15:27:39.940247 systemd[1]: Started sshd@1-10.0.0.78:22-10.0.0.1:51332.service - OpenSSH per-connection server daemon (10.0.0.1:51332). Feb 13 15:27:39.990401 sshd[1567]: Accepted publickey for core from 10.0.0.1 port 51332 ssh2: RSA SHA256:LuUo/H8l6HTiusWzlGV41X7Ei0zLr8Ig/b+bY+ekGqE Feb 13 15:27:39.992176 sshd-session[1567]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 15:27:39.996199 systemd-logind[1476]: New session 2 of user core. Feb 13 15:27:40.017868 systemd[1]: Started session-2.scope - Session 2 of User core. Feb 13 15:27:40.073446 sshd[1569]: Connection closed by 10.0.0.1 port 51332 Feb 13 15:27:40.073859 sshd-session[1567]: pam_unix(sshd:session): session closed for user core Feb 13 15:27:40.087892 systemd[1]: sshd@1-10.0.0.78:22-10.0.0.1:51332.service: Deactivated successfully. Feb 13 15:27:40.089940 systemd[1]: session-2.scope: Deactivated successfully. Feb 13 15:27:40.093211 systemd-logind[1476]: Session 2 logged out. Waiting for processes to exit. Feb 13 15:27:40.106186 systemd[1]: Started sshd@2-10.0.0.78:22-10.0.0.1:51348.service - OpenSSH per-connection server daemon (10.0.0.1:51348). Feb 13 15:27:40.108993 systemd-logind[1476]: Removed session 2. Feb 13 15:27:40.149452 sshd[1574]: Accepted publickey for core from 10.0.0.1 port 51348 ssh2: RSA SHA256:LuUo/H8l6HTiusWzlGV41X7Ei0zLr8Ig/b+bY+ekGqE Feb 13 15:27:40.151294 sshd-session[1574]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 15:27:40.155877 systemd-logind[1476]: New session 3 of user core. Feb 13 15:27:40.171984 systemd[1]: Started session-3.scope - Session 3 of User core. Feb 13 15:27:40.228565 sshd[1576]: Connection closed by 10.0.0.1 port 51348 Feb 13 15:27:40.228881 sshd-session[1574]: pam_unix(sshd:session): session closed for user core Feb 13 15:27:40.232636 systemd[1]: sshd@2-10.0.0.78:22-10.0.0.1:51348.service: Deactivated successfully. 
Feb 13 15:27:40.234639 systemd[1]: session-3.scope: Deactivated successfully. Feb 13 15:27:40.235282 systemd-logind[1476]: Session 3 logged out. Waiting for processes to exit. Feb 13 15:27:40.236145 systemd-logind[1476]: Removed session 3. Feb 13 15:27:40.328914 systemd-networkd[1413]: eth0: Gained IPv6LL Feb 13 15:27:40.333079 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Feb 13 15:27:40.334969 systemd[1]: Reached target network-online.target - Network is Online. Feb 13 15:27:40.346969 systemd[1]: Starting coreos-metadata.service - QEMU metadata agent... Feb 13 15:27:40.349604 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Feb 13 15:27:40.351863 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Feb 13 15:27:40.372530 systemd[1]: coreos-metadata.service: Deactivated successfully. Feb 13 15:27:40.372819 systemd[1]: Finished coreos-metadata.service - QEMU metadata agent. Feb 13 15:27:40.374452 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Feb 13 15:27:40.376681 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Feb 13 15:27:41.117758 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Feb 13 15:27:41.119536 systemd[1]: Reached target multi-user.target - Multi-User System. Feb 13 15:27:41.122056 systemd[1]: Startup finished in 718ms (kernel) + 4.866s (initrd) + 4.673s (userspace) = 10.258s. 
Feb 13 15:27:41.124054 (kubelet)[1602]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Feb 13 15:27:41.648763 kubelet[1602]: E0213 15:27:41.648662 1602 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Feb 13 15:27:41.653873 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Feb 13 15:27:41.654084 systemd[1]: kubelet.service: Failed with result 'exit-code'. Feb 13 15:27:41.654431 systemd[1]: kubelet.service: Consumed 1.165s CPU time. Feb 13 15:27:50.239590 systemd[1]: Started sshd@3-10.0.0.78:22-10.0.0.1:34970.service - OpenSSH per-connection server daemon (10.0.0.1:34970). Feb 13 15:27:50.283752 sshd[1617]: Accepted publickey for core from 10.0.0.1 port 34970 ssh2: RSA SHA256:LuUo/H8l6HTiusWzlGV41X7Ei0zLr8Ig/b+bY+ekGqE Feb 13 15:27:50.285461 sshd-session[1617]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 15:27:50.289701 systemd-logind[1476]: New session 4 of user core. Feb 13 15:27:50.299894 systemd[1]: Started session-4.scope - Session 4 of User core. Feb 13 15:27:50.355307 sshd[1619]: Connection closed by 10.0.0.1 port 34970 Feb 13 15:27:50.355734 sshd-session[1617]: pam_unix(sshd:session): session closed for user core Feb 13 15:27:50.372875 systemd[1]: sshd@3-10.0.0.78:22-10.0.0.1:34970.service: Deactivated successfully. Feb 13 15:27:50.374838 systemd[1]: session-4.scope: Deactivated successfully. Feb 13 15:27:50.376367 systemd-logind[1476]: Session 4 logged out. Waiting for processes to exit. Feb 13 15:27:50.377857 systemd[1]: Started sshd@4-10.0.0.78:22-10.0.0.1:34974.service - OpenSSH per-connection server daemon (10.0.0.1:34974). 
Feb 13 15:27:50.378815 systemd-logind[1476]: Removed session 4. Feb 13 15:27:50.422037 sshd[1624]: Accepted publickey for core from 10.0.0.1 port 34974 ssh2: RSA SHA256:LuUo/H8l6HTiusWzlGV41X7Ei0zLr8Ig/b+bY+ekGqE Feb 13 15:27:50.423636 sshd-session[1624]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 15:27:50.428325 systemd-logind[1476]: New session 5 of user core. Feb 13 15:27:50.446020 systemd[1]: Started session-5.scope - Session 5 of User core. Feb 13 15:27:50.497468 sshd[1626]: Connection closed by 10.0.0.1 port 34974 Feb 13 15:27:50.498240 sshd-session[1624]: pam_unix(sshd:session): session closed for user core Feb 13 15:27:50.514192 systemd[1]: sshd@4-10.0.0.78:22-10.0.0.1:34974.service: Deactivated successfully. Feb 13 15:27:50.516062 systemd[1]: session-5.scope: Deactivated successfully. Feb 13 15:27:50.517559 systemd-logind[1476]: Session 5 logged out. Waiting for processes to exit. Feb 13 15:27:50.519073 systemd[1]: Started sshd@5-10.0.0.78:22-10.0.0.1:34988.service - OpenSSH per-connection server daemon (10.0.0.1:34988). Feb 13 15:27:50.519875 systemd-logind[1476]: Removed session 5. Feb 13 15:27:50.564123 sshd[1631]: Accepted publickey for core from 10.0.0.1 port 34988 ssh2: RSA SHA256:LuUo/H8l6HTiusWzlGV41X7Ei0zLr8Ig/b+bY+ekGqE Feb 13 15:27:50.565730 sshd-session[1631]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 15:27:50.570290 systemd-logind[1476]: New session 6 of user core. Feb 13 15:27:50.579970 systemd[1]: Started session-6.scope - Session 6 of User core. Feb 13 15:27:50.636703 sshd[1633]: Connection closed by 10.0.0.1 port 34988 Feb 13 15:27:50.637163 sshd-session[1631]: pam_unix(sshd:session): session closed for user core Feb 13 15:27:50.644386 systemd[1]: sshd@5-10.0.0.78:22-10.0.0.1:34988.service: Deactivated successfully. Feb 13 15:27:50.645956 systemd[1]: session-6.scope: Deactivated successfully. Feb 13 15:27:50.647556 systemd-logind[1476]: Session 6 logged out. 
Waiting for processes to exit. Feb 13 15:27:50.657041 systemd[1]: Started sshd@6-10.0.0.78:22-10.0.0.1:34996.service - OpenSSH per-connection server daemon (10.0.0.1:34996). Feb 13 15:27:50.658406 systemd-logind[1476]: Removed session 6. Feb 13 15:27:50.696173 sshd[1638]: Accepted publickey for core from 10.0.0.1 port 34996 ssh2: RSA SHA256:LuUo/H8l6HTiusWzlGV41X7Ei0zLr8Ig/b+bY+ekGqE Feb 13 15:27:50.697457 sshd-session[1638]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 15:27:50.701366 systemd-logind[1476]: New session 7 of user core. Feb 13 15:27:50.709865 systemd[1]: Started session-7.scope - Session 7 of User core. Feb 13 15:27:50.768155 sudo[1641]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Feb 13 15:27:50.768568 sudo[1641]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Feb 13 15:27:50.787097 sudo[1641]: pam_unix(sudo:session): session closed for user root Feb 13 15:27:50.788866 sshd[1640]: Connection closed by 10.0.0.1 port 34996 Feb 13 15:27:50.789616 sshd-session[1638]: pam_unix(sshd:session): session closed for user core Feb 13 15:27:50.798933 systemd[1]: sshd@6-10.0.0.78:22-10.0.0.1:34996.service: Deactivated successfully. Feb 13 15:27:50.801438 systemd[1]: session-7.scope: Deactivated successfully. Feb 13 15:27:50.803686 systemd-logind[1476]: Session 7 logged out. Waiting for processes to exit. Feb 13 15:27:50.816252 systemd[1]: Started sshd@7-10.0.0.78:22-10.0.0.1:35006.service - OpenSSH per-connection server daemon (10.0.0.1:35006). Feb 13 15:27:50.817378 systemd-logind[1476]: Removed session 7. Feb 13 15:27:50.855368 sshd[1646]: Accepted publickey for core from 10.0.0.1 port 35006 ssh2: RSA SHA256:LuUo/H8l6HTiusWzlGV41X7Ei0zLr8Ig/b+bY+ekGqE Feb 13 15:27:50.856816 sshd-session[1646]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 15:27:50.861485 systemd-logind[1476]: New session 8 of user core. 
Feb 13 15:27:50.867869 systemd[1]: Started session-8.scope - Session 8 of User core. Feb 13 15:27:50.923186 sudo[1650]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Feb 13 15:27:50.923645 sudo[1650]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Feb 13 15:27:50.927981 sudo[1650]: pam_unix(sudo:session): session closed for user root Feb 13 15:27:50.935236 sudo[1649]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules Feb 13 15:27:50.935661 sudo[1649]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Feb 13 15:27:50.960126 systemd[1]: Starting audit-rules.service - Load Audit Rules... Feb 13 15:27:50.993092 augenrules[1672]: No rules Feb 13 15:27:50.994969 systemd[1]: audit-rules.service: Deactivated successfully. Feb 13 15:27:50.995240 systemd[1]: Finished audit-rules.service - Load Audit Rules. Feb 13 15:27:50.996773 sudo[1649]: pam_unix(sudo:session): session closed for user root Feb 13 15:27:50.998254 sshd[1648]: Connection closed by 10.0.0.1 port 35006 Feb 13 15:27:50.998589 sshd-session[1646]: pam_unix(sshd:session): session closed for user core Feb 13 15:27:51.011002 systemd[1]: sshd@7-10.0.0.78:22-10.0.0.1:35006.service: Deactivated successfully. Feb 13 15:27:51.013156 systemd[1]: session-8.scope: Deactivated successfully. Feb 13 15:27:51.015088 systemd-logind[1476]: Session 8 logged out. Waiting for processes to exit. Feb 13 15:27:51.026080 systemd[1]: Started sshd@8-10.0.0.78:22-10.0.0.1:35020.service - OpenSSH per-connection server daemon (10.0.0.1:35020). Feb 13 15:27:51.027273 systemd-logind[1476]: Removed session 8. 
Feb 13 15:27:51.068614 sshd[1680]: Accepted publickey for core from 10.0.0.1 port 35020 ssh2: RSA SHA256:LuUo/H8l6HTiusWzlGV41X7Ei0zLr8Ig/b+bY+ekGqE Feb 13 15:27:51.070413 sshd-session[1680]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 15:27:51.075089 systemd-logind[1476]: New session 9 of user core. Feb 13 15:27:51.085077 systemd[1]: Started session-9.scope - Session 9 of User core. Feb 13 15:27:51.139075 sudo[1683]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Feb 13 15:27:51.139402 sudo[1683]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Feb 13 15:27:51.165341 systemd[1]: Starting coreos-metadata.service - QEMU metadata agent... Feb 13 15:27:51.188147 systemd[1]: coreos-metadata.service: Deactivated successfully. Feb 13 15:27:51.188427 systemd[1]: Finished coreos-metadata.service - QEMU metadata agent. Feb 13 15:27:51.677370 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Feb 13 15:27:51.689047 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Feb 13 15:27:51.710789 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Feb 13 15:27:51.710901 systemd[1]: kubelet.service: Failed with result 'signal'. Feb 13 15:27:51.711189 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Feb 13 15:27:51.728066 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Feb 13 15:27:51.747757 systemd[1]: Reloading requested from client PID 1734 ('systemctl') (unit session-9.scope)... Feb 13 15:27:51.747778 systemd[1]: Reloading... Feb 13 15:27:51.842764 zram_generator::config[1775]: No configuration found. Feb 13 15:27:52.231808 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. 
Feb 13 15:27:52.313047 systemd[1]: Reloading finished in 564 ms. Feb 13 15:27:52.368483 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Feb 13 15:27:52.368578 systemd[1]: kubelet.service: Failed with result 'signal'. Feb 13 15:27:52.368926 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Feb 13 15:27:52.371796 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Feb 13 15:27:52.528314 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Feb 13 15:27:52.535039 (kubelet)[1821]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Feb 13 15:27:52.588211 kubelet[1821]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Feb 13 15:27:52.588211 kubelet[1821]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Feb 13 15:27:52.588211 kubelet[1821]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Feb 13 15:27:52.588674 kubelet[1821]: I0213 15:27:52.588267 1821 server.go:204] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Feb 13 15:27:52.878249 kubelet[1821]: I0213 15:27:52.878121 1821 server.go:487] "Kubelet version" kubeletVersion="v1.29.2" Feb 13 15:27:52.878249 kubelet[1821]: I0213 15:27:52.878164 1821 server.go:489] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Feb 13 15:27:52.878441 kubelet[1821]: I0213 15:27:52.878401 1821 server.go:919] "Client rotation is on, will bootstrap in background" Feb 13 15:27:52.894905 kubelet[1821]: I0213 15:27:52.894834 1821 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Feb 13 15:27:52.905474 kubelet[1821]: I0213 15:27:52.905438 1821 server.go:745] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" Feb 13 15:27:52.905805 kubelet[1821]: I0213 15:27:52.905777 1821 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Feb 13 15:27:52.905983 kubelet[1821]: I0213 15:27:52.905958 1821 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" 
nodeConfig={"RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null} Feb 13 15:27:52.906451 kubelet[1821]: I0213 15:27:52.906421 1821 topology_manager.go:138] "Creating topology manager with none policy" Feb 13 15:27:52.906451 kubelet[1821]: I0213 15:27:52.906441 1821 container_manager_linux.go:301] "Creating device plugin manager" Feb 13 15:27:52.907411 kubelet[1821]: I0213 15:27:52.907359 1821 state_mem.go:36] "Initialized new in-memory state store" Feb 13 15:27:52.907520 kubelet[1821]: I0213 15:27:52.907490 1821 kubelet.go:396] "Attempting to sync node with API server" Feb 13 15:27:52.907520 kubelet[1821]: I0213 15:27:52.907514 1821 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests" Feb 
13 15:27:52.907580 kubelet[1821]: I0213 15:27:52.907557 1821 kubelet.go:312] "Adding apiserver pod source" Feb 13 15:27:52.907580 kubelet[1821]: I0213 15:27:52.907577 1821 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Feb 13 15:27:52.907789 kubelet[1821]: E0213 15:27:52.907715 1821 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 15:27:52.907870 kubelet[1821]: E0213 15:27:52.907777 1821 file.go:98] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 15:27:52.912165 kubelet[1821]: I0213 15:27:52.912131 1821 kuberuntime_manager.go:258] "Container runtime initialized" containerRuntime="containerd" version="v1.7.23" apiVersion="v1" Feb 13 15:27:52.914621 kubelet[1821]: I0213 15:27:52.914603 1821 kubelet.go:809] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Feb 13 15:27:52.914683 kubelet[1821]: W0213 15:27:52.914673 1821 probe.go:268] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. 
Feb 13 15:27:52.915387 kubelet[1821]: I0213 15:27:52.915359 1821 server.go:1256] "Started kubelet"
Feb 13 15:27:52.917004 kubelet[1821]: I0213 15:27:52.915420 1821 server.go:162] "Starting to listen" address="0.0.0.0" port=10250
Feb 13 15:27:52.917004 kubelet[1821]: I0213 15:27:52.915493 1821 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
Feb 13 15:27:52.917004 kubelet[1821]: I0213 15:27:52.915872 1821 server.go:233] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Feb 13 15:27:52.917004 kubelet[1821]: I0213 15:27:52.916419 1821 server.go:461] "Adding debug handlers to kubelet server"
Feb 13 15:27:52.917004 kubelet[1821]: I0213 15:27:52.916820 1821 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Feb 13 15:27:52.919013 kubelet[1821]: W0213 15:27:52.918995 1821 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Service: services is forbidden: User "system:anonymous" cannot list resource "services" in API group "" at the cluster scope
Feb 13 15:27:52.919082 kubelet[1821]: E0213 15:27:52.919072 1821 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:anonymous" cannot list resource "services" in API group "" at the cluster scope
Feb 13 15:27:52.919362 kubelet[1821]: W0213 15:27:52.919328 1821 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Node: nodes "10.0.0.78" is forbidden: User "system:anonymous" cannot list resource "nodes" in API group "" at the cluster scope
Feb 13 15:27:52.919404 kubelet[1821]: E0213 15:27:52.919371 1821 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Node: failed to list *v1.Node: nodes "10.0.0.78" is forbidden: User "system:anonymous" cannot list resource "nodes" in API group "" at the cluster scope
Feb 13 15:27:52.919786 kubelet[1821]: E0213 15:27:52.919728 1821 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"10.0.0.78\" not found"
Feb 13 15:27:52.919786 kubelet[1821]: I0213 15:27:52.919776 1821 volume_manager.go:291] "Starting Kubelet Volume Manager"
Feb 13 15:27:52.919878 kubelet[1821]: I0213 15:27:52.919859 1821 desired_state_of_world_populator.go:151] "Desired state populator starts to run"
Feb 13 15:27:52.920086 kubelet[1821]: I0213 15:27:52.920062 1821 reconciler_new.go:29] "Reconciler: start to sync state"
Feb 13 15:27:52.921216 kubelet[1821]: E0213 15:27:52.921199 1821 kubelet.go:1462] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem"
Feb 13 15:27:52.921799 kubelet[1821]: I0213 15:27:52.921772 1821 factory.go:221] Registration of the systemd container factory successfully
Feb 13 15:27:52.921955 kubelet[1821]: I0213 15:27:52.921854 1821 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory
Feb 13 15:27:52.922830 kubelet[1821]: I0213 15:27:52.922811 1821 factory.go:221] Registration of the containerd container factory successfully
Feb 13 15:27:52.930160 kubelet[1821]: E0213 15:27:52.930104 1821 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"10.0.0.78\" not found" node="10.0.0.78"
Feb 13 15:27:52.932236 kubelet[1821]: I0213 15:27:52.932215 1821 cpu_manager.go:214] "Starting CPU manager" policy="none"
Feb 13 15:27:52.932236 kubelet[1821]: I0213 15:27:52.932234 1821 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s"
Feb 13 15:27:52.932304 kubelet[1821]: I0213 15:27:52.932276 1821 state_mem.go:36] "Initialized new in-memory state store"
Feb 13 15:27:53.020985 kubelet[1821]: I0213 15:27:53.020957 1821 kubelet_node_status.go:73] "Attempting to register node" node="10.0.0.78"
Feb 13 15:27:53.130809 kubelet[1821]: I0213 15:27:53.130637 1821 kubelet_node_status.go:76] "Successfully registered node" node="10.0.0.78"
Feb 13 15:27:53.159702 kubelet[1821]: I0213 15:27:53.159649 1821 policy_none.go:49] "None policy: Start"
Feb 13 15:27:53.160724 kubelet[1821]: I0213 15:27:53.160701 1821 memory_manager.go:170] "Starting memorymanager" policy="None"
Feb 13 15:27:53.160825 kubelet[1821]: I0213 15:27:53.160730 1821 state_mem.go:35] "Initializing new in-memory state store"
Feb 13 15:27:53.161424 kubelet[1821]: E0213 15:27:53.161369 1821 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"10.0.0.78\" not found"
Feb 13 15:27:53.168377 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice.
Feb 13 15:27:53.178180 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice.
Feb 13 15:27:53.181263 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice.
Feb 13 15:27:53.188721 kubelet[1821]: I0213 15:27:53.188695 1821 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found"
Feb 13 15:27:53.189176 kubelet[1821]: I0213 15:27:53.189067 1821 plugin_manager.go:118] "Starting Kubelet Plugin Manager"
Feb 13 15:27:53.190685 kubelet[1821]: E0213 15:27:53.190650 1821 eviction_manager.go:282] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"10.0.0.78\" not found"
Feb 13 15:27:53.192374 kubelet[1821]: I0213 15:27:53.192356 1821 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4"
Feb 13 15:27:53.193914 kubelet[1821]: I0213 15:27:53.193895 1821 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6"
Feb 13 15:27:53.193970 kubelet[1821]: I0213 15:27:53.193925 1821 status_manager.go:217] "Starting to sync pod status with apiserver"
Feb 13 15:27:53.193970 kubelet[1821]: I0213 15:27:53.193941 1821 kubelet.go:2329] "Starting kubelet main sync loop"
Feb 13 15:27:53.194080 kubelet[1821]: E0213 15:27:53.194065 1821 kubelet.go:2353] "Skipping pod synchronization" err="PLEG is not healthy: pleg has yet to be successful"
Feb 13 15:27:53.262077 kubelet[1821]: E0213 15:27:53.262011 1821 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"10.0.0.78\" not found"
Feb 13 15:27:53.362769 kubelet[1821]: E0213 15:27:53.362688 1821 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"10.0.0.78\" not found"
Feb 13 15:27:53.463529 kubelet[1821]: E0213 15:27:53.463386 1821 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"10.0.0.78\" not found"
Feb 13 15:27:53.564257 kubelet[1821]: E0213 15:27:53.564180 1821 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"10.0.0.78\" not found"
Feb 13 15:27:53.664808 kubelet[1821]: E0213 15:27:53.664757 1821 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"10.0.0.78\" not found"
Feb 13 15:27:53.765435 kubelet[1821]: E0213 15:27:53.765333 1821 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"10.0.0.78\" not found"
Feb 13 15:27:53.866097 kubelet[1821]: E0213 15:27:53.866032 1821 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"10.0.0.78\" not found"
Feb 13 15:27:53.880231 kubelet[1821]: I0213 15:27:53.880195 1821 transport.go:147] "Certificate rotation detected, shutting down client connections to start using new credentials"
Feb 13 15:27:53.880413 kubelet[1821]: W0213 15:27:53.880377 1821 reflector.go:462] vendor/k8s.io/client-go/informers/factory.go:159: watch of *v1.RuntimeClass ended with: very short watch: vendor/k8s.io/client-go/informers/factory.go:159: Unexpected watch close - watch lasted less than a second and no items received
Feb 13 15:27:53.880457 kubelet[1821]: W0213 15:27:53.880411 1821 reflector.go:462] vendor/k8s.io/client-go/informers/factory.go:159: watch of *v1.CSIDriver ended with: very short watch: vendor/k8s.io/client-go/informers/factory.go:159: Unexpected watch close - watch lasted less than a second and no items received
Feb 13 15:27:53.908642 kubelet[1821]: E0213 15:27:53.908603 1821 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 15:27:53.967192 kubelet[1821]: E0213 15:27:53.967121 1821 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"10.0.0.78\" not found"
Feb 13 15:27:54.046051 sudo[1683]: pam_unix(sudo:session): session closed for user root
Feb 13 15:27:54.047381 sshd[1682]: Connection closed by 10.0.0.1 port 35020
Feb 13 15:27:54.047769 sshd-session[1680]: pam_unix(sshd:session): session closed for user core
Feb 13 15:27:54.051900 systemd[1]: sshd@8-10.0.0.78:22-10.0.0.1:35020.service: Deactivated successfully.
Feb 13 15:27:54.053596 systemd[1]: session-9.scope: Deactivated successfully.
Feb 13 15:27:54.054364 systemd-logind[1476]: Session 9 logged out. Waiting for processes to exit.
Feb 13 15:27:54.055573 systemd-logind[1476]: Removed session 9.
Feb 13 15:27:54.067809 kubelet[1821]: E0213 15:27:54.067749 1821 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"10.0.0.78\" not found"
Feb 13 15:27:54.168172 kubelet[1821]: E0213 15:27:54.168121 1821 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"10.0.0.78\" not found"
Feb 13 15:27:54.269284 kubelet[1821]: E0213 15:27:54.269244 1821 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"10.0.0.78\" not found"
Feb 13 15:27:54.371140 kubelet[1821]: I0213 15:27:54.371006 1821 kuberuntime_manager.go:1529] "Updating runtime config through cri with podcidr" CIDR="192.168.1.0/24"
Feb 13 15:27:54.371904 containerd[1495]: time="2025-02-13T15:27:54.371856017Z" level=info msg="No cni config template is specified, wait for other system components to drop the config."
Feb 13 15:27:54.372330 kubelet[1821]: I0213 15:27:54.372097 1821 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.1.0/24"
Feb 13 15:27:54.909041 kubelet[1821]: E0213 15:27:54.908986 1821 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 15:27:54.911338 kubelet[1821]: I0213 15:27:54.911312 1821 apiserver.go:52] "Watching apiserver"
Feb 13 15:27:54.920260 kubelet[1821]: I0213 15:27:54.920198 1821 topology_manager.go:215] "Topology Admit Handler" podUID="2e48be59-e232-44eb-bddc-76ba1c625a80" podNamespace="calico-system" podName="calico-node-sxsmq"
Feb 13 15:27:54.920341 kubelet[1821]: I0213 15:27:54.920319 1821 topology_manager.go:215] "Topology Admit Handler" podUID="6ec94504-4c12-4803-811c-f4f9cd9226c0" podNamespace="calico-system" podName="csi-node-driver-5c8kb"
Feb 13 15:27:54.920386 kubelet[1821]: I0213 15:27:54.920368 1821 topology_manager.go:215] "Topology Admit Handler" podUID="e9dcd0cf-579b-40b0-8d50-870148304e71" podNamespace="kube-system" podName="kube-proxy-4cjsj"
Feb 13 15:27:54.920708 kubelet[1821]: E0213 15:27:54.920667 1821 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-5c8kb" podUID="6ec94504-4c12-4803-811c-f4f9cd9226c0"
Feb 13 15:27:54.926975 systemd[1]: Created slice kubepods-besteffort-pode9dcd0cf_579b_40b0_8d50_870148304e71.slice - libcontainer container kubepods-besteffort-pode9dcd0cf_579b_40b0_8d50_870148304e71.slice.
Feb 13 15:27:54.939466 systemd[1]: Created slice kubepods-besteffort-pod2e48be59_e232_44eb_bddc_76ba1c625a80.slice - libcontainer container kubepods-besteffort-pod2e48be59_e232_44eb_bddc_76ba1c625a80.slice.
Feb 13 15:27:55.021024 kubelet[1821]: I0213 15:27:55.020992 1821 desired_state_of_world_populator.go:159] "Finished populating initial desired state of world"
Feb 13 15:27:55.030921 kubelet[1821]: I0213 15:27:55.030890 1821 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lvrcl\" (UniqueName: \"kubernetes.io/projected/2e48be59-e232-44eb-bddc-76ba1c625a80-kube-api-access-lvrcl\") pod \"calico-node-sxsmq\" (UID: \"2e48be59-e232-44eb-bddc-76ba1c625a80\") " pod="calico-system/calico-node-sxsmq"
Feb 13 15:27:55.030985 kubelet[1821]: I0213 15:27:55.030931 1821 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/6ec94504-4c12-4803-811c-f4f9cd9226c0-registration-dir\") pod \"csi-node-driver-5c8kb\" (UID: \"6ec94504-4c12-4803-811c-f4f9cd9226c0\") " pod="calico-system/csi-node-driver-5c8kb"
Feb 13 15:27:55.030985 kubelet[1821]: I0213 15:27:55.030959 1821 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/e9dcd0cf-579b-40b0-8d50-870148304e71-kube-proxy\") pod \"kube-proxy-4cjsj\" (UID: \"e9dcd0cf-579b-40b0-8d50-870148304e71\") " pod="kube-system/kube-proxy-4cjsj"
Feb 13 15:27:55.030985 kubelet[1821]: I0213 15:27:55.030977 1821 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/2e48be59-e232-44eb-bddc-76ba1c625a80-lib-modules\") pod \"calico-node-sxsmq\" (UID: \"2e48be59-e232-44eb-bddc-76ba1c625a80\") " pod="calico-system/calico-node-sxsmq"
Feb 13 15:27:55.031043 kubelet[1821]: I0213 15:27:55.031021 1821 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/2e48be59-e232-44eb-bddc-76ba1c625a80-tigera-ca-bundle\") pod \"calico-node-sxsmq\" (UID: \"2e48be59-e232-44eb-bddc-76ba1c625a80\") " pod="calico-system/calico-node-sxsmq"
Feb 13 15:27:55.031064 kubelet[1821]: I0213 15:27:55.031051 1821 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/2e48be59-e232-44eb-bddc-76ba1c625a80-node-certs\") pod \"calico-node-sxsmq\" (UID: \"2e48be59-e232-44eb-bddc-76ba1c625a80\") " pod="calico-system/calico-node-sxsmq"
Feb 13 15:27:55.031113 kubelet[1821]: I0213 15:27:55.031089 1821 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/2e48be59-e232-44eb-bddc-76ba1c625a80-var-lib-calico\") pod \"calico-node-sxsmq\" (UID: \"2e48be59-e232-44eb-bddc-76ba1c625a80\") " pod="calico-system/calico-node-sxsmq"
Feb 13 15:27:55.031136 kubelet[1821]: I0213 15:27:55.031127 1821 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/2e48be59-e232-44eb-bddc-76ba1c625a80-cni-bin-dir\") pod \"calico-node-sxsmq\" (UID: \"2e48be59-e232-44eb-bddc-76ba1c625a80\") " pod="calico-system/calico-node-sxsmq"
Feb 13 15:27:55.031160 kubelet[1821]: I0213 15:27:55.031153 1821 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/2e48be59-e232-44eb-bddc-76ba1c625a80-xtables-lock\") pod \"calico-node-sxsmq\" (UID: \"2e48be59-e232-44eb-bddc-76ba1c625a80\") " pod="calico-system/calico-node-sxsmq"
Feb 13 15:27:55.031216 kubelet[1821]: I0213 15:27:55.031185 1821 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/6ec94504-4c12-4803-811c-f4f9cd9226c0-kubelet-dir\") pod \"csi-node-driver-5c8kb\" (UID: \"6ec94504-4c12-4803-811c-f4f9cd9226c0\") " pod="calico-system/csi-node-driver-5c8kb"
Feb 13 15:27:55.031263 kubelet[1821]: I0213 15:27:55.031246 1821 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/e9dcd0cf-579b-40b0-8d50-870148304e71-xtables-lock\") pod \"kube-proxy-4cjsj\" (UID: \"e9dcd0cf-579b-40b0-8d50-870148304e71\") " pod="kube-system/kube-proxy-4cjsj"
Feb 13 15:27:55.031290 kubelet[1821]: I0213 15:27:55.031279 1821 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z4zwd\" (UniqueName: \"kubernetes.io/projected/e9dcd0cf-579b-40b0-8d50-870148304e71-kube-api-access-z4zwd\") pod \"kube-proxy-4cjsj\" (UID: \"e9dcd0cf-579b-40b0-8d50-870148304e71\") " pod="kube-system/kube-proxy-4cjsj"
Feb 13 15:27:55.031314 kubelet[1821]: I0213 15:27:55.031303 1821 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/6ec94504-4c12-4803-811c-f4f9cd9226c0-varrun\") pod \"csi-node-driver-5c8kb\" (UID: \"6ec94504-4c12-4803-811c-f4f9cd9226c0\") " pod="calico-system/csi-node-driver-5c8kb"
Feb 13 15:27:55.031351 kubelet[1821]: I0213 15:27:55.031336 1821 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/6ec94504-4c12-4803-811c-f4f9cd9226c0-socket-dir\") pod \"csi-node-driver-5c8kb\" (UID: \"6ec94504-4c12-4803-811c-f4f9cd9226c0\") " pod="calico-system/csi-node-driver-5c8kb"
Feb 13 15:27:55.031372 kubelet[1821]: I0213 15:27:55.031366 1821 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2wt77\" (UniqueName: \"kubernetes.io/projected/6ec94504-4c12-4803-811c-f4f9cd9226c0-kube-api-access-2wt77\") pod \"csi-node-driver-5c8kb\" (UID: \"6ec94504-4c12-4803-811c-f4f9cd9226c0\") " pod="calico-system/csi-node-driver-5c8kb"
Feb 13 15:27:55.031400 kubelet[1821]: I0213 15:27:55.031390 1821 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/2e48be59-e232-44eb-bddc-76ba1c625a80-policysync\") pod \"calico-node-sxsmq\" (UID: \"2e48be59-e232-44eb-bddc-76ba1c625a80\") " pod="calico-system/calico-node-sxsmq"
Feb 13 15:27:55.031423 kubelet[1821]: I0213 15:27:55.031414 1821 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/2e48be59-e232-44eb-bddc-76ba1c625a80-var-run-calico\") pod \"calico-node-sxsmq\" (UID: \"2e48be59-e232-44eb-bddc-76ba1c625a80\") " pod="calico-system/calico-node-sxsmq"
Feb 13 15:27:55.031464 kubelet[1821]: I0213 15:27:55.031445 1821 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/2e48be59-e232-44eb-bddc-76ba1c625a80-cni-net-dir\") pod \"calico-node-sxsmq\" (UID: \"2e48be59-e232-44eb-bddc-76ba1c625a80\") " pod="calico-system/calico-node-sxsmq"
Feb 13 15:27:55.031485 kubelet[1821]: I0213 15:27:55.031480 1821 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/2e48be59-e232-44eb-bddc-76ba1c625a80-cni-log-dir\") pod \"calico-node-sxsmq\" (UID: \"2e48be59-e232-44eb-bddc-76ba1c625a80\") " pod="calico-system/calico-node-sxsmq"
Feb 13 15:27:55.031535 kubelet[1821]: I0213 15:27:55.031522 1821 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/2e48be59-e232-44eb-bddc-76ba1c625a80-flexvol-driver-host\") pod \"calico-node-sxsmq\" (UID: \"2e48be59-e232-44eb-bddc-76ba1c625a80\") " pod="calico-system/calico-node-sxsmq"
Feb 13 15:27:55.031577 kubelet[1821]: I0213 15:27:55.031552 1821 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/e9dcd0cf-579b-40b0-8d50-870148304e71-lib-modules\") pod \"kube-proxy-4cjsj\" (UID: \"e9dcd0cf-579b-40b0-8d50-870148304e71\") " pod="kube-system/kube-proxy-4cjsj"
Feb 13 15:27:55.133451 kubelet[1821]: E0213 15:27:55.133411 1821 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Feb 13 15:27:55.133451 kubelet[1821]: W0213 15:27:55.133436 1821 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Feb 13 15:27:55.133623 kubelet[1821]: E0213 15:27:55.133466 1821 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping.
Error: unexpected end of JSON input"
Feb 13 15:27:55.133686 kubelet[1821]: E0213 15:27:55.133670 1821 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Feb 13 15:27:55.133686 kubelet[1821]: W0213 15:27:55.133682 1821 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Feb 13 15:27:55.133796 kubelet[1821]: E0213 15:27:55.133715 1821 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Feb 13 15:27:55.133913 kubelet[1821]: E0213 15:27:55.133897 1821 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Feb 13 15:27:55.133913 kubelet[1821]: W0213 15:27:55.133909 1821 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Feb 13 15:27:55.133969 kubelet[1821]: E0213 15:27:55.133949 1821 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping.
Error: unexpected end of JSON input"
Feb 13 15:27:55.134166 kubelet[1821]: E0213 15:27:55.134153 1821 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Feb 13 15:27:55.134192 kubelet[1821]: W0213 15:27:55.134165 1821 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Feb 13 15:27:55.134236 kubelet[1821]: E0213 15:27:55.134195 1821 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Feb 13 15:27:55.134403 kubelet[1821]: E0213 15:27:55.134387 1821 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Feb 13 15:27:55.134403 kubelet[1821]: W0213 15:27:55.134399 1821 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Feb 13 15:27:55.134495 kubelet[1821]: E0213 15:27:55.134432 1821 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping.
Error: unexpected end of JSON input"
Feb 13 15:27:55.134609 kubelet[1821]: E0213 15:27:55.134595 1821 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Feb 13 15:27:55.134609 kubelet[1821]: W0213 15:27:55.134606 1821 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Feb 13 15:27:55.134701 kubelet[1821]: E0213 15:27:55.134634 1821 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Feb 13 15:27:55.134836 kubelet[1821]: E0213 15:27:55.134809 1821 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Feb 13 15:27:55.134836 kubelet[1821]: W0213 15:27:55.134820 1821 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Feb 13 15:27:55.134836 kubelet[1821]: E0213 15:27:55.134837 1821 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping.
Error: unexpected end of JSON input"
Feb 13 15:27:55.135678 kubelet[1821]: E0213 15:27:55.135659 1821 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Feb 13 15:27:55.135678 kubelet[1821]: W0213 15:27:55.135674 1821 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Feb 13 15:27:55.135764 kubelet[1821]: E0213 15:27:55.135694 1821 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Feb 13 15:27:55.135937 kubelet[1821]: E0213 15:27:55.135924 1821 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Feb 13 15:27:55.135937 kubelet[1821]: W0213 15:27:55.135937 1821 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Feb 13 15:27:55.136002 kubelet[1821]: E0213 15:27:55.135956 1821 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping.
Error: unexpected end of JSON input"
Feb 13 15:27:55.136188 kubelet[1821]: E0213 15:27:55.136169 1821 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Feb 13 15:27:55.136188 kubelet[1821]: W0213 15:27:55.136181 1821 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Feb 13 15:27:55.136340 kubelet[1821]: E0213 15:27:55.136278 1821 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Feb 13 15:27:55.136492 kubelet[1821]: E0213 15:27:55.136478 1821 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Feb 13 15:27:55.137089 kubelet[1821]: W0213 15:27:55.136559 1821 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Feb 13 15:27:55.137089 kubelet[1821]: E0213 15:27:55.136952 1821 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Feb 13 15:27:55.137089 kubelet[1821]: W0213 15:27:55.136961 1821 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Feb 13 15:27:55.137169 kubelet[1821]: E0213 15:27:55.137151 1821 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping.
Error: unexpected end of JSON input"
Feb 13 15:27:55.137169 kubelet[1821]: E0213 15:27:55.137168 1821 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Feb 13 15:27:55.139478 kubelet[1821]: E0213 15:27:55.139459 1821 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Feb 13 15:27:55.139478 kubelet[1821]: W0213 15:27:55.139475 1821 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Feb 13 15:27:55.139561 kubelet[1821]: E0213 15:27:55.139490 1821 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Feb 13 15:27:55.139950 kubelet[1821]: E0213 15:27:55.139935 1821 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Feb 13 15:27:55.140006 kubelet[1821]: W0213 15:27:55.139950 1821 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Feb 13 15:27:55.140006 kubelet[1821]: E0213 15:27:55.139965 1821 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping.
Error: unexpected end of JSON input"
Feb 13 15:27:55.140194 kubelet[1821]: E0213 15:27:55.140182 1821 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Feb 13 15:27:55.140194 kubelet[1821]: W0213 15:27:55.140192 1821 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Feb 13 15:27:55.140241 kubelet[1821]: E0213 15:27:55.140204 1821 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Feb 13 15:27:55.140915 kubelet[1821]: E0213 15:27:55.140903 1821 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Feb 13 15:27:55.140915 kubelet[1821]: W0213 15:27:55.140913 1821 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Feb 13 15:27:55.140971 kubelet[1821]: E0213 15:27:55.140924 1821 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping.
Error: unexpected end of JSON input"
Feb 13 15:27:55.204478 kubelet[1821]: E0213 15:27:55.204398 1821 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Feb 13 15:27:55.204478 kubelet[1821]: W0213 15:27:55.204413 1821 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Feb 13 15:27:55.204478 kubelet[1821]: E0213 15:27:55.204436 1821 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Feb 13 15:27:55.205226 kubelet[1821]: E0213 15:27:55.204630 1821 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Feb 13 15:27:55.205226 kubelet[1821]: W0213 15:27:55.204640 1821 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Feb 13 15:27:55.205226 kubelet[1821]: E0213 15:27:55.204650 1821 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping.
Error: unexpected end of JSON input"
Feb 13 15:27:55.205226 kubelet[1821]: E0213 15:27:55.204834 1821 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Feb 13 15:27:55.205226 kubelet[1821]: W0213 15:27:55.204841 1821 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Feb 13 15:27:55.205226 kubelet[1821]: E0213 15:27:55.204850 1821 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Feb 13 15:27:55.237871 kubelet[1821]: E0213 15:27:55.237856 1821 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 13 15:27:55.238586 containerd[1495]: time="2025-02-13T15:27:55.238534148Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-4cjsj,Uid:e9dcd0cf-579b-40b0-8d50-870148304e71,Namespace:kube-system,Attempt:0,}"
Feb 13 15:27:55.241592 kubelet[1821]: E0213 15:27:55.241574 1821 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 13 15:27:55.242049 containerd[1495]: time="2025-02-13T15:27:55.241851389Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-sxsmq,Uid:2e48be59-e232-44eb-bddc-76ba1c625a80,Namespace:calico-system,Attempt:0,}"
Feb 13 15:27:55.909879 kubelet[1821]: E0213 15:27:55.909832 1821 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 15:27:56.441023 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2316621272.mount: Deactivated successfully.
Feb 13 15:27:56.449675 containerd[1495]: time="2025-02-13T15:27:56.449643300Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Feb 13 15:27:56.451589 containerd[1495]: time="2025-02-13T15:27:56.451527724Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=312056"
Feb 13 15:27:56.452750 containerd[1495]: time="2025-02-13T15:27:56.452704261Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Feb 13 15:27:56.453841 containerd[1495]: time="2025-02-13T15:27:56.453806087Z" level=info msg="ImageCreate event name:\"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Feb 13 15:27:56.456582 containerd[1495]: time="2025-02-13T15:27:56.456511771Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0"
Feb 13 15:27:56.458708 containerd[1495]: time="2025-02-13T15:27:56.458677923Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Feb 13 15:27:56.460239 containerd[1495]: time="2025-02-13T15:27:56.460212942Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 1.2182584s"
Feb 13 15:27:56.461001 containerd[1495]: time="2025-02-13T15:27:56.460977526Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 1.222307302s"
Feb 13 15:27:56.624726 containerd[1495]: time="2025-02-13T15:27:56.624625534Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Feb 13 15:27:56.624726 containerd[1495]: time="2025-02-13T15:27:56.624671641Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Feb 13 15:27:56.624726 containerd[1495]: time="2025-02-13T15:27:56.624684896Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Feb 13 15:27:56.625005 containerd[1495]: time="2025-02-13T15:27:56.624782719Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Feb 13 15:27:56.632115 containerd[1495]: time="2025-02-13T15:27:56.631795721Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Feb 13 15:27:56.632115 containerd[1495]: time="2025-02-13T15:27:56.631853630Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Feb 13 15:27:56.632115 containerd[1495]: time="2025-02-13T15:27:56.631872425Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Feb 13 15:27:56.632115 containerd[1495]: time="2025-02-13T15:27:56.631942797Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Feb 13 15:27:56.754938 systemd[1]: Started cri-containerd-8bcfe825753c39d0a82076615094afb1a89f6863af7cead385cd5bab2086b1ce.scope - libcontainer container 8bcfe825753c39d0a82076615094afb1a89f6863af7cead385cd5bab2086b1ce.
Feb 13 15:27:56.759051 systemd[1]: Started cri-containerd-3898ecb4448aafee58daeda57ea88bcb0e602c079d2ee8eddccc6a6fec1c9a33.scope - libcontainer container 3898ecb4448aafee58daeda57ea88bcb0e602c079d2ee8eddccc6a6fec1c9a33.
Feb 13 15:27:56.785475 containerd[1495]: time="2025-02-13T15:27:56.785429058Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-sxsmq,Uid:2e48be59-e232-44eb-bddc-76ba1c625a80,Namespace:calico-system,Attempt:0,} returns sandbox id \"8bcfe825753c39d0a82076615094afb1a89f6863af7cead385cd5bab2086b1ce\""
Feb 13 15:27:56.786403 kubelet[1821]: E0213 15:27:56.786347 1821 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 13 15:27:56.788812 containerd[1495]: time="2025-02-13T15:27:56.788378770Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\""
Feb 13 15:27:56.792819 containerd[1495]: time="2025-02-13T15:27:56.792774925Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-4cjsj,Uid:e9dcd0cf-579b-40b0-8d50-870148304e71,Namespace:kube-system,Attempt:0,} returns sandbox id \"3898ecb4448aafee58daeda57ea88bcb0e602c079d2ee8eddccc6a6fec1c9a33\""
Feb 13 15:27:56.793625 kubelet[1821]: E0213 15:27:56.793586 1821 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 13 15:27:56.910365 kubelet[1821]: E0213 15:27:56.910287 1821 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 15:27:57.195189 kubelet[1821]: E0213 15:27:57.194999 1821 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-5c8kb" podUID="6ec94504-4c12-4803-811c-f4f9cd9226c0"
Feb 13 15:27:57.910510 kubelet[1821]: E0213 15:27:57.910463 1821 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 15:27:58.428411 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1991100046.mount: Deactivated successfully.
Feb 13 15:27:58.512151 containerd[1495]: time="2025-02-13T15:27:58.512066310Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Feb 13 15:27:58.512937 containerd[1495]: time="2025-02-13T15:27:58.512868995Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1: active requests=0, bytes read=6855343"
Feb 13 15:27:58.514356 containerd[1495]: time="2025-02-13T15:27:58.514323232Z" level=info msg="ImageCreate event name:\"sha256:2b7452b763ec8833ca0386ada5fd066e552a9b3b02b8538a5e34cc3d6d3840a6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Feb 13 15:27:58.516786 containerd[1495]: time="2025-02-13T15:27:58.516733693Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:a63f8b4ff531912d12d143664eb263fdbc6cd7b3ff4aa777dfb6e318a090462c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Feb 13 15:27:58.517309 containerd[1495]: time="2025-02-13T15:27:58.517284516Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\" with image id \"sha256:2b7452b763ec8833ca0386ada5fd066e552a9b3b02b8538a5e34cc3d6d3840a6\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:a63f8b4ff531912d12d143664eb263fdbc6cd7b3ff4aa777dfb6e318a090462c\", size \"6855165\" in 1.728868104s"
Feb 13 15:27:58.517347 containerd[1495]: time="2025-02-13T15:27:58.517313039Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\" returns image reference \"sha256:2b7452b763ec8833ca0386ada5fd066e552a9b3b02b8538a5e34cc3d6d3840a6\""
Feb 13 15:27:58.518090 containerd[1495]: time="2025-02-13T15:27:58.518047376Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.29.14\""
Feb 13 15:27:58.519389 containerd[1495]: time="2025-02-13T15:27:58.519361030Z" level=info msg="CreateContainer within sandbox \"8bcfe825753c39d0a82076615094afb1a89f6863af7cead385cd5bab2086b1ce\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}"
Feb 13 15:27:58.537632 containerd[1495]: time="2025-02-13T15:27:58.537563255Z" level=info msg="CreateContainer within sandbox \"8bcfe825753c39d0a82076615094afb1a89f6863af7cead385cd5bab2086b1ce\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"5cf6933221a63bab32e80f5dc49f2d39d4acba2272129b0029eaa80e902905ea\""
Feb 13 15:27:58.538345 containerd[1495]: time="2025-02-13T15:27:58.538306539Z" level=info msg="StartContainer for \"5cf6933221a63bab32e80f5dc49f2d39d4acba2272129b0029eaa80e902905ea\""
Feb 13 15:27:58.575941 systemd[1]: Started cri-containerd-5cf6933221a63bab32e80f5dc49f2d39d4acba2272129b0029eaa80e902905ea.scope - libcontainer container 5cf6933221a63bab32e80f5dc49f2d39d4acba2272129b0029eaa80e902905ea.
Feb 13 15:27:58.614665 containerd[1495]: time="2025-02-13T15:27:58.614621016Z" level=info msg="StartContainer for \"5cf6933221a63bab32e80f5dc49f2d39d4acba2272129b0029eaa80e902905ea\" returns successfully"
Feb 13 15:27:58.639996 systemd[1]: cri-containerd-5cf6933221a63bab32e80f5dc49f2d39d4acba2272129b0029eaa80e902905ea.scope: Deactivated successfully.
Feb 13 15:27:58.678518 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-5cf6933221a63bab32e80f5dc49f2d39d4acba2272129b0029eaa80e902905ea-rootfs.mount: Deactivated successfully.
Feb 13 15:27:58.759339 containerd[1495]: time="2025-02-13T15:27:58.759271077Z" level=info msg="shim disconnected" id=5cf6933221a63bab32e80f5dc49f2d39d4acba2272129b0029eaa80e902905ea namespace=k8s.io
Feb 13 15:27:58.759339 containerd[1495]: time="2025-02-13T15:27:58.759323595Z" level=warning msg="cleaning up after shim disconnected" id=5cf6933221a63bab32e80f5dc49f2d39d4acba2272129b0029eaa80e902905ea namespace=k8s.io
Feb 13 15:27:58.759339 containerd[1495]: time="2025-02-13T15:27:58.759331630Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Feb 13 15:27:58.911448 kubelet[1821]: E0213 15:27:58.911378 1821 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 15:27:59.195235 kubelet[1821]: E0213 15:27:59.195162 1821 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-5c8kb" podUID="6ec94504-4c12-4803-811c-f4f9cd9226c0"
Feb 13 15:27:59.206568 kubelet[1821]: E0213 15:27:59.206549 1821 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 13 15:27:59.912272 kubelet[1821]: E0213 15:27:59.912230 1821 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 15:27:59.997118 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount161812139.mount: Deactivated successfully.
Feb 13 15:28:00.417857 containerd[1495]: time="2025-02-13T15:28:00.417808867Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.29.14\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Feb 13 15:28:00.420775 containerd[1495]: time="2025-02-13T15:28:00.420729855Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.29.14: active requests=0, bytes read=28620592"
Feb 13 15:28:00.422316 containerd[1495]: time="2025-02-13T15:28:00.422284300Z" level=info msg="ImageCreate event name:\"sha256:609f2866f1e52a5f0d2651e1206db6aeb38e8c3f91175abcfaf7e87381e5cce2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Feb 13 15:28:00.424981 containerd[1495]: time="2025-02-13T15:28:00.424942185Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:197988595a902751e4e570a5e4d74182f12d83c1d175c1e79aa020f358f6535b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Feb 13 15:28:00.425680 containerd[1495]: time="2025-02-13T15:28:00.425639011Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.29.14\" with image id \"sha256:609f2866f1e52a5f0d2651e1206db6aeb38e8c3f91175abcfaf7e87381e5cce2\", repo tag \"registry.k8s.io/kube-proxy:v1.29.14\", repo digest \"registry.k8s.io/kube-proxy@sha256:197988595a902751e4e570a5e4d74182f12d83c1d175c1e79aa020f358f6535b\", size \"28619611\" in 1.907548004s"
Feb 13 15:28:00.425680 containerd[1495]: time="2025-02-13T15:28:00.425676412Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.29.14\" returns image reference \"sha256:609f2866f1e52a5f0d2651e1206db6aeb38e8c3f91175abcfaf7e87381e5cce2\""
Feb 13 15:28:00.426145 containerd[1495]: time="2025-02-13T15:28:00.426123991Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.29.1\""
Feb 13 15:28:00.427374 containerd[1495]: time="2025-02-13T15:28:00.427337386Z" level=info msg="CreateContainer within sandbox \"3898ecb4448aafee58daeda57ea88bcb0e602c079d2ee8eddccc6a6fec1c9a33\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}"
Feb 13 15:28:00.442754 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount456533727.mount: Deactivated successfully.
Feb 13 15:28:00.449554 containerd[1495]: time="2025-02-13T15:28:00.449499538Z" level=info msg="CreateContainer within sandbox \"3898ecb4448aafee58daeda57ea88bcb0e602c079d2ee8eddccc6a6fec1c9a33\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"f7e2775e171d9cc81a7a790712bec13d47eb9514525f484ab35d9e9a7d279575\""
Feb 13 15:28:00.450088 containerd[1495]: time="2025-02-13T15:28:00.450049941Z" level=info msg="StartContainer for \"f7e2775e171d9cc81a7a790712bec13d47eb9514525f484ab35d9e9a7d279575\""
Feb 13 15:28:00.490909 systemd[1]: Started cri-containerd-f7e2775e171d9cc81a7a790712bec13d47eb9514525f484ab35d9e9a7d279575.scope - libcontainer container f7e2775e171d9cc81a7a790712bec13d47eb9514525f484ab35d9e9a7d279575.
Feb 13 15:28:00.533237 containerd[1495]: time="2025-02-13T15:28:00.533194597Z" level=info msg="StartContainer for \"f7e2775e171d9cc81a7a790712bec13d47eb9514525f484ab35d9e9a7d279575\" returns successfully"
Feb 13 15:28:00.913361 kubelet[1821]: E0213 15:28:00.913245 1821 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 15:28:01.194374 kubelet[1821]: E0213 15:28:01.194234 1821 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-5c8kb" podUID="6ec94504-4c12-4803-811c-f4f9cd9226c0"
Feb 13 15:28:01.210072 kubelet[1821]: E0213 15:28:01.210041 1821 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 13 15:28:01.229989 kubelet[1821]: I0213 15:28:01.229934 1821 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-proxy-4cjsj" podStartSLOduration=4.597930358 podStartE2EDuration="8.229884849s" podCreationTimestamp="2025-02-13 15:27:53 +0000 UTC" firstStartedPulling="2025-02-13 15:27:56.794014579 +0000 UTC m=+4.254136094" lastFinishedPulling="2025-02-13 15:28:00.42596907 +0000 UTC m=+7.886090585" observedRunningTime="2025-02-13 15:28:01.229578285 +0000 UTC m=+8.689699800" watchObservedRunningTime="2025-02-13 15:28:01.229884849 +0000 UTC m=+8.690006375"
Feb 13 15:28:01.913945 kubelet[1821]: E0213 15:28:01.913898 1821 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 15:28:02.211228 kubelet[1821]: E0213 15:28:02.211141 1821 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 13 15:28:02.914854 kubelet[1821]: E0213 15:28:02.914781 1821 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 15:28:03.194387 kubelet[1821]: E0213 15:28:03.194258 1821 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-5c8kb" podUID="6ec94504-4c12-4803-811c-f4f9cd9226c0"
Feb 13 15:28:03.914957 kubelet[1821]: E0213 15:28:03.914910 1821 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 15:28:04.915850 kubelet[1821]: E0213 15:28:04.915760 1821 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 15:28:05.194753 kubelet[1821]: E0213 15:28:05.194615 1821 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-5c8kb" podUID="6ec94504-4c12-4803-811c-f4f9cd9226c0"
Feb 13 15:28:05.262612 containerd[1495]: time="2025-02-13T15:28:05.262565255Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Feb 13 15:28:05.263406 containerd[1495]: time="2025-02-13T15:28:05.263360706Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.29.1: active requests=0, bytes read=96154154"
Feb 13 15:28:05.403376 containerd[1495]: time="2025-02-13T15:28:05.403307837Z" level=info msg="ImageCreate event name:\"sha256:7dd6ea186aba0d7a1791a79d426fe854527ca95192b26bbd19e8baf8373f7d0e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Feb 13 15:28:05.541309 containerd[1495]: time="2025-02-13T15:28:05.541139501Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:21e759d51c90dfb34fc1397dc180dd3a3fb564c2b0580d2f61ffe108f2a3c94b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Feb 13 15:28:05.541958 containerd[1495]: time="2025-02-13T15:28:05.541907120Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.29.1\" with image id \"sha256:7dd6ea186aba0d7a1791a79d426fe854527ca95192b26bbd19e8baf8373f7d0e\", repo tag \"ghcr.io/flatcar/calico/cni:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:21e759d51c90dfb34fc1397dc180dd3a3fb564c2b0580d2f61ffe108f2a3c94b\", size \"97647238\" in 5.115754966s"
Feb 13 15:28:05.541958 containerd[1495]: time="2025-02-13T15:28:05.541948809Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.29.1\" returns image reference \"sha256:7dd6ea186aba0d7a1791a79d426fe854527ca95192b26bbd19e8baf8373f7d0e\""
Feb 13 15:28:05.543869 containerd[1495]: time="2025-02-13T15:28:05.543843181Z" level=info msg="CreateContainer within sandbox \"8bcfe825753c39d0a82076615094afb1a89f6863af7cead385cd5bab2086b1ce\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}"
Feb 13 15:28:05.562825 containerd[1495]: time="2025-02-13T15:28:05.562781898Z" level=info msg="CreateContainer within sandbox \"8bcfe825753c39d0a82076615094afb1a89f6863af7cead385cd5bab2086b1ce\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"0c47553a00536a08b43d4a807221a38b91ea7ad11a732ff43518336e67609373\""
Feb 13 15:28:05.563341 containerd[1495]: time="2025-02-13T15:28:05.563246059Z" level=info msg="StartContainer for \"0c47553a00536a08b43d4a807221a38b91ea7ad11a732ff43518336e67609373\""
Feb 13 15:28:05.595871 systemd[1]: Started cri-containerd-0c47553a00536a08b43d4a807221a38b91ea7ad11a732ff43518336e67609373.scope - libcontainer container 0c47553a00536a08b43d4a807221a38b91ea7ad11a732ff43518336e67609373.
Feb 13 15:28:05.625867 containerd[1495]: time="2025-02-13T15:28:05.625815509Z" level=info msg="StartContainer for \"0c47553a00536a08b43d4a807221a38b91ea7ad11a732ff43518336e67609373\" returns successfully"
Feb 13 15:28:05.916657 kubelet[1821]: E0213 15:28:05.916505 1821 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 15:28:06.216303 kubelet[1821]: E0213 15:28:06.216188 1821 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 13 15:28:06.917416 kubelet[1821]: E0213 15:28:06.917359 1821 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 15:28:07.195351 kubelet[1821]: E0213 15:28:07.195209 1821 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-5c8kb" podUID="6ec94504-4c12-4803-811c-f4f9cd9226c0"
Feb 13 15:28:07.217811 kubelet[1821]: E0213 15:28:07.217723 1821 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 13 15:28:07.521385 systemd[1]: cri-containerd-0c47553a00536a08b43d4a807221a38b91ea7ad11a732ff43518336e67609373.scope: Deactivated successfully.
Feb 13 15:28:07.535988 kubelet[1821]: I0213 15:28:07.535964 1821 kubelet_node_status.go:497] "Fast updating node status as it just became ready"
Feb 13 15:28:07.544567 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-0c47553a00536a08b43d4a807221a38b91ea7ad11a732ff43518336e67609373-rootfs.mount: Deactivated successfully.
Feb 13 15:28:07.918362 kubelet[1821]: E0213 15:28:07.918227 1821 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 15:28:08.296638 containerd[1495]: time="2025-02-13T15:28:08.296460211Z" level=info msg="shim disconnected" id=0c47553a00536a08b43d4a807221a38b91ea7ad11a732ff43518336e67609373 namespace=k8s.io
Feb 13 15:28:08.296638 containerd[1495]: time="2025-02-13T15:28:08.296554948Z" level=warning msg="cleaning up after shim disconnected" id=0c47553a00536a08b43d4a807221a38b91ea7ad11a732ff43518336e67609373 namespace=k8s.io
Feb 13 15:28:08.296638 containerd[1495]: time="2025-02-13T15:28:08.296564877Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Feb 13 15:28:08.918630 kubelet[1821]: E0213 15:28:08.918551 1821 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 15:28:09.199977 systemd[1]: Created slice kubepods-besteffort-pod6ec94504_4c12_4803_811c_f4f9cd9226c0.slice - libcontainer container kubepods-besteffort-pod6ec94504_4c12_4803_811c_f4f9cd9226c0.slice.
Feb 13 15:28:09.202134 containerd[1495]: time="2025-02-13T15:28:09.202103044Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-5c8kb,Uid:6ec94504-4c12-4803-811c-f4f9cd9226c0,Namespace:calico-system,Attempt:0,}"
Feb 13 15:28:09.221952 kubelet[1821]: E0213 15:28:09.221906 1821 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 13 15:28:09.223028 containerd[1495]: time="2025-02-13T15:28:09.222936274Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.29.1\""
Feb 13 15:28:09.269437 containerd[1495]: time="2025-02-13T15:28:09.269373009Z" level=error msg="Failed to destroy network for sandbox \"88eff125ed329781ac6966b8bafc601b531d00e9c472f0d107fc725f239c99b9\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Feb 13 15:28:09.269849 containerd[1495]: time="2025-02-13T15:28:09.269827582Z" level=error msg="encountered an error cleaning up failed sandbox \"88eff125ed329781ac6966b8bafc601b531d00e9c472f0d107fc725f239c99b9\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Feb 13 15:28:09.269915 containerd[1495]: time="2025-02-13T15:28:09.269896301Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-5c8kb,Uid:6ec94504-4c12-4803-811c-f4f9cd9226c0,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"88eff125ed329781ac6966b8bafc601b531d00e9c472f0d107fc725f239c99b9\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Feb 13 15:28:09.270203 kubelet[1821]: E0213 15:28:09.270174 1821 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"88eff125ed329781ac6966b8bafc601b531d00e9c472f0d107fc725f239c99b9\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Feb 13 15:28:09.270271 kubelet[1821]: E0213 15:28:09.270237 1821 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"88eff125ed329781ac6966b8bafc601b531d00e9c472f0d107fc725f239c99b9\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-5c8kb"
Feb 13 15:28:09.270271 kubelet[1821]: E0213 15:28:09.270259 1821 kuberuntime_manager.go:1172] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"88eff125ed329781ac6966b8bafc601b531d00e9c472f0d107fc725f239c99b9\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-5c8kb"
Feb 13 15:28:09.270380 kubelet[1821]: E0213 15:28:09.270316 1821 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-5c8kb_calico-system(6ec94504-4c12-4803-811c-f4f9cd9226c0)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-5c8kb_calico-system(6ec94504-4c12-4803-811c-f4f9cd9226c0)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"88eff125ed329781ac6966b8bafc601b531d00e9c472f0d107fc725f239c99b9\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-5c8kb" podUID="6ec94504-4c12-4803-811c-f4f9cd9226c0"
Feb 13 15:28:09.271016 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-88eff125ed329781ac6966b8bafc601b531d00e9c472f0d107fc725f239c99b9-shm.mount: Deactivated successfully.
Feb 13 15:28:09.918982 kubelet[1821]: E0213 15:28:09.918910 1821 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 15:28:10.224725 kubelet[1821]: I0213 15:28:10.224591 1821 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="88eff125ed329781ac6966b8bafc601b531d00e9c472f0d107fc725f239c99b9"
Feb 13 15:28:10.225259 containerd[1495]: time="2025-02-13T15:28:10.225221707Z" level=info msg="StopPodSandbox for \"88eff125ed329781ac6966b8bafc601b531d00e9c472f0d107fc725f239c99b9\""
Feb 13 15:28:10.225604 containerd[1495]: time="2025-02-13T15:28:10.225482807Z" level=info msg="Ensure that sandbox 88eff125ed329781ac6966b8bafc601b531d00e9c472f0d107fc725f239c99b9 in task-service has been cleanup successfully"
Feb 13 15:28:10.226443 containerd[1495]: time="2025-02-13T15:28:10.226368979Z" level=info msg="TearDown network for sandbox \"88eff125ed329781ac6966b8bafc601b531d00e9c472f0d107fc725f239c99b9\" successfully"
Feb 13 15:28:10.226443 containerd[1495]: time="2025-02-13T15:28:10.226412230Z" level=info msg="StopPodSandbox for \"88eff125ed329781ac6966b8bafc601b531d00e9c472f0d107fc725f239c99b9\" returns successfully"
Feb 13 15:28:10.227037 containerd[1495]: time="2025-02-13T15:28:10.227005753Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-5c8kb,Uid:6ec94504-4c12-4803-811c-f4f9cd9226c0,Namespace:calico-system,Attempt:1,}"
Feb 13 15:28:10.227504 systemd[1]: run-netns-cni\x2d66b0e3db\x2d3557\x2dbc92\x2dbb0b\x2d56e8de75e89f.mount: Deactivated successfully.
Feb 13 15:28:10.423013 kubelet[1821]: I0213 15:28:10.422634 1821 topology_manager.go:215] "Topology Admit Handler" podUID="7bf5ff02-8e42-4454-9542-829060b4d158" podNamespace="default" podName="nginx-deployment-6d5f899847-qqssm"
Feb 13 15:28:10.436854 systemd[1]: Created slice kubepods-besteffort-pod7bf5ff02_8e42_4454_9542_829060b4d158.slice - libcontainer container kubepods-besteffort-pod7bf5ff02_8e42_4454_9542_829060b4d158.slice.
Feb 13 15:28:10.480053 containerd[1495]: time="2025-02-13T15:28:10.479891243Z" level=error msg="Failed to destroy network for sandbox \"783ab3cab4ee3eca858e0910259de5469727b1ebb84b9dfff10aa54dc395791e\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Feb 13 15:28:10.480377 containerd[1495]: time="2025-02-13T15:28:10.480342369Z" level=error msg="encountered an error cleaning up failed sandbox \"783ab3cab4ee3eca858e0910259de5469727b1ebb84b9dfff10aa54dc395791e\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Feb 13 15:28:10.480449 containerd[1495]: time="2025-02-13T15:28:10.480424273Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-5c8kb,Uid:6ec94504-4c12-4803-811c-f4f9cd9226c0,Namespace:calico-system,Attempt:1,} failed, error" error="failed to setup network for sandbox \"783ab3cab4ee3eca858e0910259de5469727b1ebb84b9dfff10aa54dc395791e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Feb 13 15:28:10.480853 kubelet[1821]: E0213 15:28:10.480708 1821 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"783ab3cab4ee3eca858e0910259de5469727b1ebb84b9dfff10aa54dc395791e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Feb 13 15:28:10.480853 kubelet[1821]: E0213 15:28:10.480791 1821 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"783ab3cab4ee3eca858e0910259de5469727b1ebb84b9dfff10aa54dc395791e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-5c8kb"
Feb 13 15:28:10.480853 kubelet[1821]: E0213 15:28:10.480824 1821 kuberuntime_manager.go:1172] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"783ab3cab4ee3eca858e0910259de5469727b1ebb84b9dfff10aa54dc395791e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-5c8kb"
Feb 13 15:28:10.480957 kubelet[1821]: E0213 15:28:10.480890 1821 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-5c8kb_calico-system(6ec94504-4c12-4803-811c-f4f9cd9226c0)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-5c8kb_calico-system(6ec94504-4c12-4803-811c-f4f9cd9226c0)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"783ab3cab4ee3eca858e0910259de5469727b1ebb84b9dfff10aa54dc395791e\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-5c8kb" podUID="6ec94504-4c12-4803-811c-f4f9cd9226c0"
Feb 13 15:28:10.481905 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-783ab3cab4ee3eca858e0910259de5469727b1ebb84b9dfff10aa54dc395791e-shm.mount: Deactivated successfully.
Feb 13 15:28:10.616953 kubelet[1821]: I0213 15:28:10.616874 1821 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v2d8r\" (UniqueName: \"kubernetes.io/projected/7bf5ff02-8e42-4454-9542-829060b4d158-kube-api-access-v2d8r\") pod \"nginx-deployment-6d5f899847-qqssm\" (UID: \"7bf5ff02-8e42-4454-9542-829060b4d158\") " pod="default/nginx-deployment-6d5f899847-qqssm"
Feb 13 15:28:10.919614 kubelet[1821]: E0213 15:28:10.919487 1821 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 15:28:11.040686 containerd[1495]: time="2025-02-13T15:28:11.040629289Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-6d5f899847-qqssm,Uid:7bf5ff02-8e42-4454-9542-829060b4d158,Namespace:default,Attempt:0,}"
Feb 13 15:28:11.111278 containerd[1495]: time="2025-02-13T15:28:11.111235033Z" level=error msg="Failed to destroy network for sandbox \"f5b3ada3e9fbec16b5777d5d0376c2bb4221c7ce1c622487c02097168a69bc3b\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Feb 13 15:28:11.111639 containerd[1495]: time="2025-02-13T15:28:11.111607238Z" level=error msg="encountered an error cleaning up failed sandbox \"f5b3ada3e9fbec16b5777d5d0376c2bb4221c7ce1c622487c02097168a69bc3b\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Feb 13 15:28:11.111697 containerd[1495]: time="2025-02-13T15:28:11.111667683Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-6d5f899847-qqssm,Uid:7bf5ff02-8e42-4454-9542-829060b4d158,Namespace:default,Attempt:0,} failed, error" error="failed to setup network for sandbox \"f5b3ada3e9fbec16b5777d5d0376c2bb4221c7ce1c622487c02097168a69bc3b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Feb 13 15:28:11.111959 kubelet[1821]: E0213 15:28:11.111922 1821 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f5b3ada3e9fbec16b5777d5d0376c2bb4221c7ce1c622487c02097168a69bc3b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Feb 13 15:28:11.111959 kubelet[1821]: E0213 15:28:11.111978 1821 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f5b3ada3e9fbec16b5777d5d0376c2bb4221c7ce1c622487c02097168a69bc3b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="default/nginx-deployment-6d5f899847-qqssm"
Feb 13 15:28:11.112170 kubelet[1821]: E0213 15:28:11.111998 1821 kuberuntime_manager.go:1172] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f5b3ada3e9fbec16b5777d5d0376c2bb4221c7ce1c622487c02097168a69bc3b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="default/nginx-deployment-6d5f899847-qqssm"
Feb 13 15:28:11.112170
kubelet[1821]: E0213 15:28:11.112054 1821 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"nginx-deployment-6d5f899847-qqssm_default(7bf5ff02-8e42-4454-9542-829060b4d158)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"nginx-deployment-6d5f899847-qqssm_default(7bf5ff02-8e42-4454-9542-829060b4d158)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"f5b3ada3e9fbec16b5777d5d0376c2bb4221c7ce1c622487c02097168a69bc3b\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="default/nginx-deployment-6d5f899847-qqssm" podUID="7bf5ff02-8e42-4454-9542-829060b4d158" Feb 13 15:28:11.228355 kubelet[1821]: I0213 15:28:11.227539 1821 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="783ab3cab4ee3eca858e0910259de5469727b1ebb84b9dfff10aa54dc395791e" Feb 13 15:28:11.228475 containerd[1495]: time="2025-02-13T15:28:11.228123257Z" level=info msg="StopPodSandbox for \"783ab3cab4ee3eca858e0910259de5469727b1ebb84b9dfff10aa54dc395791e\"" Feb 13 15:28:11.228475 containerd[1495]: time="2025-02-13T15:28:11.228385781Z" level=info msg="Ensure that sandbox 783ab3cab4ee3eca858e0910259de5469727b1ebb84b9dfff10aa54dc395791e in task-service has been cleanup successfully" Feb 13 15:28:11.229024 containerd[1495]: time="2025-02-13T15:28:11.228993118Z" level=info msg="TearDown network for sandbox \"783ab3cab4ee3eca858e0910259de5469727b1ebb84b9dfff10aa54dc395791e\" successfully" Feb 13 15:28:11.229024 containerd[1495]: time="2025-02-13T15:28:11.229018216Z" level=info msg="StopPodSandbox for \"783ab3cab4ee3eca858e0910259de5469727b1ebb84b9dfff10aa54dc395791e\" returns successfully" Feb 13 15:28:11.229532 kubelet[1821]: I0213 15:28:11.229511 1821 pod_container_deletor.go:80] "Container not found in pod's containers" 
containerID="f5b3ada3e9fbec16b5777d5d0376c2bb4221c7ce1c622487c02097168a69bc3b" Feb 13 15:28:11.230261 containerd[1495]: time="2025-02-13T15:28:11.229577361Z" level=info msg="StopPodSandbox for \"88eff125ed329781ac6966b8bafc601b531d00e9c472f0d107fc725f239c99b9\"" Feb 13 15:28:11.230261 containerd[1495]: time="2025-02-13T15:28:11.229875713Z" level=info msg="StopPodSandbox for \"f5b3ada3e9fbec16b5777d5d0376c2bb4221c7ce1c622487c02097168a69bc3b\"" Feb 13 15:28:11.230261 containerd[1495]: time="2025-02-13T15:28:11.229912284Z" level=info msg="TearDown network for sandbox \"88eff125ed329781ac6966b8bafc601b531d00e9c472f0d107fc725f239c99b9\" successfully" Feb 13 15:28:11.230261 containerd[1495]: time="2025-02-13T15:28:11.229927764Z" level=info msg="StopPodSandbox for \"88eff125ed329781ac6966b8bafc601b531d00e9c472f0d107fc725f239c99b9\" returns successfully" Feb 13 15:28:11.230261 containerd[1495]: time="2025-02-13T15:28:11.230110134Z" level=info msg="Ensure that sandbox f5b3ada3e9fbec16b5777d5d0376c2bb4221c7ce1c622487c02097168a69bc3b in task-service has been cleanup successfully" Feb 13 15:28:11.230565 containerd[1495]: time="2025-02-13T15:28:11.230508419Z" level=info msg="TearDown network for sandbox \"f5b3ada3e9fbec16b5777d5d0376c2bb4221c7ce1c622487c02097168a69bc3b\" successfully" Feb 13 15:28:11.230565 containerd[1495]: time="2025-02-13T15:28:11.230537214Z" level=info msg="StopPodSandbox for \"f5b3ada3e9fbec16b5777d5d0376c2bb4221c7ce1c622487c02097168a69bc3b\" returns successfully" Feb 13 15:28:11.230768 systemd[1]: run-netns-cni\x2d1db9ba1d\x2d13cb\x2dee88\x2dc326\x2d67b534991d5f.mount: Deactivated successfully. 
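[Editor's note] Every sandbox failure above traces back to the same stat on /var/lib/calico/nodename, and the error text itself names the remedy: check that the calico/node container is running and has mounted /var/lib/calico/. A minimal diagnostic sketch of that check is below; the `check_nodename` helper and the temporary-file demonstration are illustrative additions (on a real node you would pass the actual /var/lib/calico/nodename path), not part of this log.

```shell
#!/bin/sh
# Sketch: report whether the calico nodename file the CNI plugin stats is present.
# The path is a parameter so the sketch can be exercised anywhere.
check_nodename() {
  if [ -f "$1" ]; then
    echo "nodename present: $(cat "$1")"
  else
    echo "nodename missing: verify the calico/node container is running and has mounted /var/lib/calico/"
  fi
}

# Demonstration against a temporary stand-in for /var/lib/calico/nodename.
tmp=$(mktemp -d)
check_nodename "$tmp/nodename"      # prints the "missing" diagnosis
echo "my-node" > "$tmp/nodename"
check_nodename "$tmp/nodename"      # prints "nodename present: my-node"
rm -rf "$tmp"
```

On a live node the equivalent one-off check would be `check_nodename /var/lib/calico/nodename`; the file is written by the calico/node container on startup, which is why the plugin keeps failing here while that container is not yet running.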
Feb 13 15:28:11.232213 containerd[1495]: time="2025-02-13T15:28:11.232171103Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-6d5f899847-qqssm,Uid:7bf5ff02-8e42-4454-9542-829060b4d158,Namespace:default,Attempt:1,}" Feb 13 15:28:11.232279 containerd[1495]: time="2025-02-13T15:28:11.232210649Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-5c8kb,Uid:6ec94504-4c12-4803-811c-f4f9cd9226c0,Namespace:calico-system,Attempt:2,}" Feb 13 15:28:11.232441 systemd[1]: run-netns-cni\x2d859dbf4f\x2d80c0\x2dd301\x2df6a9\x2d75a1fce0d87d.mount: Deactivated successfully. Feb 13 15:28:11.327972 containerd[1495]: time="2025-02-13T15:28:11.327883372Z" level=error msg="Failed to destroy network for sandbox \"9c5513243ac88ec95ad476aab76d370a2d275403daac98aaddcc373286cd4510\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:28:11.331020 containerd[1495]: time="2025-02-13T15:28:11.330488426Z" level=error msg="encountered an error cleaning up failed sandbox \"9c5513243ac88ec95ad476aab76d370a2d275403daac98aaddcc373286cd4510\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:28:11.331020 containerd[1495]: time="2025-02-13T15:28:11.330564863Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-6d5f899847-qqssm,Uid:7bf5ff02-8e42-4454-9542-829060b4d158,Namespace:default,Attempt:1,} failed, error" error="failed to setup network for sandbox \"9c5513243ac88ec95ad476aab76d370a2d275403daac98aaddcc373286cd4510\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" 
Feb 13 15:28:11.332158 kubelet[1821]: E0213 15:28:11.332126 1821 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"9c5513243ac88ec95ad476aab76d370a2d275403daac98aaddcc373286cd4510\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:28:11.332437 kubelet[1821]: E0213 15:28:11.332397 1821 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"9c5513243ac88ec95ad476aab76d370a2d275403daac98aaddcc373286cd4510\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="default/nginx-deployment-6d5f899847-qqssm" Feb 13 15:28:11.332672 kubelet[1821]: E0213 15:28:11.332655 1821 kuberuntime_manager.go:1172] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"9c5513243ac88ec95ad476aab76d370a2d275403daac98aaddcc373286cd4510\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="default/nginx-deployment-6d5f899847-qqssm" Feb 13 15:28:11.334219 containerd[1495]: time="2025-02-13T15:28:11.334165389Z" level=error msg="Failed to destroy network for sandbox \"6ff165ccf2d77d03c0393a382a7a1981febb503cc6026c3c6723d975b507b23c\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:28:11.334572 kubelet[1821]: E0213 15:28:11.334544 1821 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for 
\"nginx-deployment-6d5f899847-qqssm_default(7bf5ff02-8e42-4454-9542-829060b4d158)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"nginx-deployment-6d5f899847-qqssm_default(7bf5ff02-8e42-4454-9542-829060b4d158)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"9c5513243ac88ec95ad476aab76d370a2d275403daac98aaddcc373286cd4510\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="default/nginx-deployment-6d5f899847-qqssm" podUID="7bf5ff02-8e42-4454-9542-829060b4d158" Feb 13 15:28:11.335103 containerd[1495]: time="2025-02-13T15:28:11.335070257Z" level=error msg="encountered an error cleaning up failed sandbox \"6ff165ccf2d77d03c0393a382a7a1981febb503cc6026c3c6723d975b507b23c\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:28:11.335173 containerd[1495]: time="2025-02-13T15:28:11.335143086Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-5c8kb,Uid:6ec94504-4c12-4803-811c-f4f9cd9226c0,Namespace:calico-system,Attempt:2,} failed, error" error="failed to setup network for sandbox \"6ff165ccf2d77d03c0393a382a7a1981febb503cc6026c3c6723d975b507b23c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:28:11.335354 kubelet[1821]: E0213 15:28:11.335316 1821 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6ff165ccf2d77d03c0393a382a7a1981febb503cc6026c3c6723d975b507b23c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file 
or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:28:11.335354 kubelet[1821]: E0213 15:28:11.335353 1821 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6ff165ccf2d77d03c0393a382a7a1981febb503cc6026c3c6723d975b507b23c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-5c8kb" Feb 13 15:28:11.335583 kubelet[1821]: E0213 15:28:11.335378 1821 kuberuntime_manager.go:1172] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6ff165ccf2d77d03c0393a382a7a1981febb503cc6026c3c6723d975b507b23c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-5c8kb" Feb 13 15:28:11.335583 kubelet[1821]: E0213 15:28:11.335447 1821 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-5c8kb_calico-system(6ec94504-4c12-4803-811c-f4f9cd9226c0)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-5c8kb_calico-system(6ec94504-4c12-4803-811c-f4f9cd9226c0)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"6ff165ccf2d77d03c0393a382a7a1981febb503cc6026c3c6723d975b507b23c\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-5c8kb" podUID="6ec94504-4c12-4803-811c-f4f9cd9226c0" Feb 13 15:28:11.920258 kubelet[1821]: E0213 15:28:11.920189 1821 file_linux.go:61] "Unable to read config path" err="path does 
not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 15:28:12.227755 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-9c5513243ac88ec95ad476aab76d370a2d275403daac98aaddcc373286cd4510-shm.mount: Deactivated successfully. Feb 13 15:28:12.232589 kubelet[1821]: I0213 15:28:12.232568 1821 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="6ff165ccf2d77d03c0393a382a7a1981febb503cc6026c3c6723d975b507b23c" Feb 13 15:28:12.233210 containerd[1495]: time="2025-02-13T15:28:12.233172628Z" level=info msg="StopPodSandbox for \"6ff165ccf2d77d03c0393a382a7a1981febb503cc6026c3c6723d975b507b23c\"" Feb 13 15:28:12.233510 containerd[1495]: time="2025-02-13T15:28:12.233379143Z" level=info msg="Ensure that sandbox 6ff165ccf2d77d03c0393a382a7a1981febb503cc6026c3c6723d975b507b23c in task-service has been cleanup successfully" Feb 13 15:28:12.233702 containerd[1495]: time="2025-02-13T15:28:12.233636658Z" level=info msg="TearDown network for sandbox \"6ff165ccf2d77d03c0393a382a7a1981febb503cc6026c3c6723d975b507b23c\" successfully" Feb 13 15:28:12.233702 containerd[1495]: time="2025-02-13T15:28:12.233653540Z" level=info msg="StopPodSandbox for \"6ff165ccf2d77d03c0393a382a7a1981febb503cc6026c3c6723d975b507b23c\" returns successfully" Feb 13 15:28:12.233973 containerd[1495]: time="2025-02-13T15:28:12.233939368Z" level=info msg="StopPodSandbox for \"783ab3cab4ee3eca858e0910259de5469727b1ebb84b9dfff10aa54dc395791e\"" Feb 13 15:28:12.234080 containerd[1495]: time="2025-02-13T15:28:12.234053727Z" level=info msg="TearDown network for sandbox \"783ab3cab4ee3eca858e0910259de5469727b1ebb84b9dfff10aa54dc395791e\" successfully" Feb 13 15:28:12.234080 containerd[1495]: time="2025-02-13T15:28:12.234074257Z" level=info msg="StopPodSandbox for \"783ab3cab4ee3eca858e0910259de5469727b1ebb84b9dfff10aa54dc395791e\" returns successfully" Feb 13 15:28:12.234280 kubelet[1821]: I0213 15:28:12.234246 1821 pod_container_deletor.go:80] "Container not found in pod's 
containers" containerID="9c5513243ac88ec95ad476aab76d370a2d275403daac98aaddcc373286cd4510" Feb 13 15:28:12.234567 containerd[1495]: time="2025-02-13T15:28:12.234549679Z" level=info msg="StopPodSandbox for \"88eff125ed329781ac6966b8bafc601b531d00e9c472f0d107fc725f239c99b9\"" Feb 13 15:28:12.234947 containerd[1495]: time="2025-02-13T15:28:12.234700638Z" level=info msg="TearDown network for sandbox \"88eff125ed329781ac6966b8bafc601b531d00e9c472f0d107fc725f239c99b9\" successfully" Feb 13 15:28:12.234947 containerd[1495]: time="2025-02-13T15:28:12.234713523Z" level=info msg="StopPodSandbox for \"88eff125ed329781ac6966b8bafc601b531d00e9c472f0d107fc725f239c99b9\" returns successfully" Feb 13 15:28:12.234947 containerd[1495]: time="2025-02-13T15:28:12.234763409Z" level=info msg="StopPodSandbox for \"9c5513243ac88ec95ad476aab76d370a2d275403daac98aaddcc373286cd4510\"" Feb 13 15:28:12.235043 systemd[1]: run-netns-cni\x2d1276c308\x2dbb3a\x2d8a3b\x2d2f81\x2d09b9e6643a22.mount: Deactivated successfully. 
Feb 13 15:28:12.235245 containerd[1495]: time="2025-02-13T15:28:12.235093622Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-5c8kb,Uid:6ec94504-4c12-4803-811c-f4f9cd9226c0,Namespace:calico-system,Attempt:3,}" Feb 13 15:28:12.235455 containerd[1495]: time="2025-02-13T15:28:12.235415530Z" level=info msg="Ensure that sandbox 9c5513243ac88ec95ad476aab76d370a2d275403daac98aaddcc373286cd4510 in task-service has been cleanup successfully" Feb 13 15:28:12.235606 containerd[1495]: time="2025-02-13T15:28:12.235579735Z" level=info msg="TearDown network for sandbox \"9c5513243ac88ec95ad476aab76d370a2d275403daac98aaddcc373286cd4510\" successfully" Feb 13 15:28:12.235606 containerd[1495]: time="2025-02-13T15:28:12.235597378Z" level=info msg="StopPodSandbox for \"9c5513243ac88ec95ad476aab76d370a2d275403daac98aaddcc373286cd4510\" returns successfully" Feb 13 15:28:12.235930 containerd[1495]: time="2025-02-13T15:28:12.235906922Z" level=info msg="StopPodSandbox for \"f5b3ada3e9fbec16b5777d5d0376c2bb4221c7ce1c622487c02097168a69bc3b\"" Feb 13 15:28:12.236018 containerd[1495]: time="2025-02-13T15:28:12.236001824Z" level=info msg="TearDown network for sandbox \"f5b3ada3e9fbec16b5777d5d0376c2bb4221c7ce1c622487c02097168a69bc3b\" successfully" Feb 13 15:28:12.236111 containerd[1495]: time="2025-02-13T15:28:12.236017243Z" level=info msg="StopPodSandbox for \"f5b3ada3e9fbec16b5777d5d0376c2bb4221c7ce1c622487c02097168a69bc3b\" returns successfully" Feb 13 15:28:12.236453 containerd[1495]: time="2025-02-13T15:28:12.236424034Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-6d5f899847-qqssm,Uid:7bf5ff02-8e42-4454-9542-829060b4d158,Namespace:default,Attempt:2,}" Feb 13 15:28:12.236979 systemd[1]: run-netns-cni\x2d370928d7\x2d2d7b\x2def2b\x2d464c\x2dc0f6fd983f20.mount: Deactivated successfully. 
Feb 13 15:28:12.672498 containerd[1495]: time="2025-02-13T15:28:12.671520453Z" level=error msg="Failed to destroy network for sandbox \"2b0373d796d71bf2d4e5abfdba53a8d0ab2c3554b8bcfb9af29d6ec95902123a\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:28:12.673212 containerd[1495]: time="2025-02-13T15:28:12.672899899Z" level=error msg="encountered an error cleaning up failed sandbox \"2b0373d796d71bf2d4e5abfdba53a8d0ab2c3554b8bcfb9af29d6ec95902123a\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:28:12.673212 containerd[1495]: time="2025-02-13T15:28:12.672969933Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-6d5f899847-qqssm,Uid:7bf5ff02-8e42-4454-9542-829060b4d158,Namespace:default,Attempt:2,} failed, error" error="failed to setup network for sandbox \"2b0373d796d71bf2d4e5abfdba53a8d0ab2c3554b8bcfb9af29d6ec95902123a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:28:12.673780 kubelet[1821]: E0213 15:28:12.673562 1821 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"2b0373d796d71bf2d4e5abfdba53a8d0ab2c3554b8bcfb9af29d6ec95902123a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:28:12.673780 kubelet[1821]: E0213 15:28:12.673627 1821 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc 
= failed to setup network for sandbox \"2b0373d796d71bf2d4e5abfdba53a8d0ab2c3554b8bcfb9af29d6ec95902123a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="default/nginx-deployment-6d5f899847-qqssm" Feb 13 15:28:12.673780 kubelet[1821]: E0213 15:28:12.673655 1821 kuberuntime_manager.go:1172] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"2b0373d796d71bf2d4e5abfdba53a8d0ab2c3554b8bcfb9af29d6ec95902123a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="default/nginx-deployment-6d5f899847-qqssm" Feb 13 15:28:12.673903 kubelet[1821]: E0213 15:28:12.673726 1821 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"nginx-deployment-6d5f899847-qqssm_default(7bf5ff02-8e42-4454-9542-829060b4d158)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"nginx-deployment-6d5f899847-qqssm_default(7bf5ff02-8e42-4454-9542-829060b4d158)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"2b0373d796d71bf2d4e5abfdba53a8d0ab2c3554b8bcfb9af29d6ec95902123a\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="default/nginx-deployment-6d5f899847-qqssm" podUID="7bf5ff02-8e42-4454-9542-829060b4d158" Feb 13 15:28:12.681336 containerd[1495]: time="2025-02-13T15:28:12.681305942Z" level=error msg="Failed to destroy network for sandbox \"7cba264e65e7e5f1536749acc9df7326036aa529ccc12668c4923791546ab529\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running 
and has mounted /var/lib/calico/" Feb 13 15:28:12.681956 containerd[1495]: time="2025-02-13T15:28:12.681895983Z" level=error msg="encountered an error cleaning up failed sandbox \"7cba264e65e7e5f1536749acc9df7326036aa529ccc12668c4923791546ab529\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:28:12.682098 containerd[1495]: time="2025-02-13T15:28:12.681965276Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-5c8kb,Uid:6ec94504-4c12-4803-811c-f4f9cd9226c0,Namespace:calico-system,Attempt:3,} failed, error" error="failed to setup network for sandbox \"7cba264e65e7e5f1536749acc9df7326036aa529ccc12668c4923791546ab529\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:28:12.682221 kubelet[1821]: E0213 15:28:12.682195 1821 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7cba264e65e7e5f1536749acc9df7326036aa529ccc12668c4923791546ab529\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:28:12.682269 kubelet[1821]: E0213 15:28:12.682254 1821 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7cba264e65e7e5f1536749acc9df7326036aa529ccc12668c4923791546ab529\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-5c8kb" Feb 13 15:28:12.682295 
kubelet[1821]: E0213 15:28:12.682280 1821 kuberuntime_manager.go:1172] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7cba264e65e7e5f1536749acc9df7326036aa529ccc12668c4923791546ab529\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-5c8kb" Feb 13 15:28:12.682351 kubelet[1821]: E0213 15:28:12.682338 1821 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-5c8kb_calico-system(6ec94504-4c12-4803-811c-f4f9cd9226c0)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-5c8kb_calico-system(6ec94504-4c12-4803-811c-f4f9cd9226c0)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"7cba264e65e7e5f1536749acc9df7326036aa529ccc12668c4923791546ab529\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-5c8kb" podUID="6ec94504-4c12-4803-811c-f4f9cd9226c0" Feb 13 15:28:12.907782 kubelet[1821]: E0213 15:28:12.907713 1821 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 15:28:12.921103 kubelet[1821]: E0213 15:28:12.920993 1821 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 15:28:13.229531 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-7cba264e65e7e5f1536749acc9df7326036aa529ccc12668c4923791546ab529-shm.mount: Deactivated successfully. 
Feb 13 15:28:13.239694 kubelet[1821]: I0213 15:28:13.237716 1821 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="7cba264e65e7e5f1536749acc9df7326036aa529ccc12668c4923791546ab529" Feb 13 15:28:13.239854 containerd[1495]: time="2025-02-13T15:28:13.238593232Z" level=info msg="StopPodSandbox for \"7cba264e65e7e5f1536749acc9df7326036aa529ccc12668c4923791546ab529\"" Feb 13 15:28:13.239854 containerd[1495]: time="2025-02-13T15:28:13.238894789Z" level=info msg="Ensure that sandbox 7cba264e65e7e5f1536749acc9df7326036aa529ccc12668c4923791546ab529 in task-service has been cleanup successfully" Feb 13 15:28:13.241409 systemd[1]: run-netns-cni\x2d94162193\x2d1dc4\x2db19e\x2d68a3\x2d3522cc51b2aa.mount: Deactivated successfully. Feb 13 15:28:13.242127 kubelet[1821]: I0213 15:28:13.241673 1821 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="2b0373d796d71bf2d4e5abfdba53a8d0ab2c3554b8bcfb9af29d6ec95902123a" Feb 13 15:28:13.242166 containerd[1495]: time="2025-02-13T15:28:13.242114857Z" level=info msg="StopPodSandbox for \"2b0373d796d71bf2d4e5abfdba53a8d0ab2c3554b8bcfb9af29d6ec95902123a\"" Feb 13 15:28:13.242395 containerd[1495]: time="2025-02-13T15:28:13.242373353Z" level=info msg="Ensure that sandbox 2b0373d796d71bf2d4e5abfdba53a8d0ab2c3554b8bcfb9af29d6ec95902123a in task-service has been cleanup successfully" Feb 13 15:28:13.245622 containerd[1495]: time="2025-02-13T15:28:13.245493369Z" level=info msg="TearDown network for sandbox \"7cba264e65e7e5f1536749acc9df7326036aa529ccc12668c4923791546ab529\" successfully" Feb 13 15:28:13.245622 containerd[1495]: time="2025-02-13T15:28:13.245527363Z" level=info msg="StopPodSandbox for \"7cba264e65e7e5f1536749acc9df7326036aa529ccc12668c4923791546ab529\" returns successfully" Feb 13 15:28:13.245731 containerd[1495]: time="2025-02-13T15:28:13.245683893Z" level=info msg="TearDown network for sandbox \"2b0373d796d71bf2d4e5abfdba53a8d0ab2c3554b8bcfb9af29d6ec95902123a\" 
successfully" Feb 13 15:28:13.245731 containerd[1495]: time="2025-02-13T15:28:13.245700265Z" level=info msg="StopPodSandbox for \"2b0373d796d71bf2d4e5abfdba53a8d0ab2c3554b8bcfb9af29d6ec95902123a\" returns successfully" Feb 13 15:28:13.246754 containerd[1495]: time="2025-02-13T15:28:13.246093037Z" level=info msg="StopPodSandbox for \"6ff165ccf2d77d03c0393a382a7a1981febb503cc6026c3c6723d975b507b23c\"" Feb 13 15:28:13.246754 containerd[1495]: time="2025-02-13T15:28:13.246178130Z" level=info msg="StopPodSandbox for \"9c5513243ac88ec95ad476aab76d370a2d275403daac98aaddcc373286cd4510\"" Feb 13 15:28:13.246754 containerd[1495]: time="2025-02-13T15:28:13.246197437Z" level=info msg="TearDown network for sandbox \"6ff165ccf2d77d03c0393a382a7a1981febb503cc6026c3c6723d975b507b23c\" successfully" Feb 13 15:28:13.246754 containerd[1495]: time="2025-02-13T15:28:13.246210531Z" level=info msg="StopPodSandbox for \"6ff165ccf2d77d03c0393a382a7a1981febb503cc6026c3c6723d975b507b23c\" returns successfully" Feb 13 15:28:13.246754 containerd[1495]: time="2025-02-13T15:28:13.246267992Z" level=info msg="TearDown network for sandbox \"9c5513243ac88ec95ad476aab76d370a2d275403daac98aaddcc373286cd4510\" successfully" Feb 13 15:28:13.246754 containerd[1495]: time="2025-02-13T15:28:13.246280736Z" level=info msg="StopPodSandbox for \"9c5513243ac88ec95ad476aab76d370a2d275403daac98aaddcc373286cd4510\" returns successfully" Feb 13 15:28:13.246754 containerd[1495]: time="2025-02-13T15:28:13.246590801Z" level=info msg="StopPodSandbox for \"f5b3ada3e9fbec16b5777d5d0376c2bb4221c7ce1c622487c02097168a69bc3b\"" Feb 13 15:28:13.246754 containerd[1495]: time="2025-02-13T15:28:13.246638191Z" level=info msg="StopPodSandbox for \"783ab3cab4ee3eca858e0910259de5469727b1ebb84b9dfff10aa54dc395791e\"" Feb 13 15:28:13.246754 containerd[1495]: time="2025-02-13T15:28:13.246698296Z" level=info msg="TearDown network for sandbox \"f5b3ada3e9fbec16b5777d5d0376c2bb4221c7ce1c622487c02097168a69bc3b\" successfully" Feb 13 
15:28:13.246754 containerd[1495]: time="2025-02-13T15:28:13.246723484Z" level=info msg="TearDown network for sandbox \"783ab3cab4ee3eca858e0910259de5469727b1ebb84b9dfff10aa54dc395791e\" successfully" Feb 13 15:28:13.246754 containerd[1495]: time="2025-02-13T15:28:13.246750967Z" level=info msg="StopPodSandbox for \"783ab3cab4ee3eca858e0910259de5469727b1ebb84b9dfff10aa54dc395791e\" returns successfully" Feb 13 15:28:13.246754 containerd[1495]: time="2025-02-13T15:28:13.246734685Z" level=info msg="StopPodSandbox for \"f5b3ada3e9fbec16b5777d5d0376c2bb4221c7ce1c622487c02097168a69bc3b\" returns successfully" Feb 13 15:28:13.248237 containerd[1495]: time="2025-02-13T15:28:13.247323714Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-6d5f899847-qqssm,Uid:7bf5ff02-8e42-4454-9542-829060b4d158,Namespace:default,Attempt:3,}" Feb 13 15:28:13.248237 containerd[1495]: time="2025-02-13T15:28:13.247432422Z" level=info msg="StopPodSandbox for \"88eff125ed329781ac6966b8bafc601b531d00e9c472f0d107fc725f239c99b9\"" Feb 13 15:28:13.248237 containerd[1495]: time="2025-02-13T15:28:13.247769999Z" level=info msg="TearDown network for sandbox \"88eff125ed329781ac6966b8bafc601b531d00e9c472f0d107fc725f239c99b9\" successfully" Feb 13 15:28:13.248237 containerd[1495]: time="2025-02-13T15:28:13.247788945Z" level=info msg="StopPodSandbox for \"88eff125ed329781ac6966b8bafc601b531d00e9c472f0d107fc725f239c99b9\" returns successfully" Feb 13 15:28:13.247519 systemd[1]: run-netns-cni\x2dffc739af\x2d4e8b\x2d6404\x2dfe81\x2d1425bc37fa34.mount: Deactivated successfully. 
Feb 13 15:28:13.248610 containerd[1495]: time="2025-02-13T15:28:13.248577756Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-5c8kb,Uid:6ec94504-4c12-4803-811c-f4f9cd9226c0,Namespace:calico-system,Attempt:4,}" Feb 13 15:28:13.499010 containerd[1495]: time="2025-02-13T15:28:13.498829384Z" level=error msg="Failed to destroy network for sandbox \"49f1196780b8e0339457d215d602d2dc53ee15be785d54d0ef9b507b9f6be0a3\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:28:13.499352 containerd[1495]: time="2025-02-13T15:28:13.499303222Z" level=error msg="encountered an error cleaning up failed sandbox \"49f1196780b8e0339457d215d602d2dc53ee15be785d54d0ef9b507b9f6be0a3\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:28:13.500846 containerd[1495]: time="2025-02-13T15:28:13.500802383Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-5c8kb,Uid:6ec94504-4c12-4803-811c-f4f9cd9226c0,Namespace:calico-system,Attempt:4,} failed, error" error="failed to setup network for sandbox \"49f1196780b8e0339457d215d602d2dc53ee15be785d54d0ef9b507b9f6be0a3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:28:13.501146 kubelet[1821]: E0213 15:28:13.501109 1821 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"49f1196780b8e0339457d215d602d2dc53ee15be785d54d0ef9b507b9f6be0a3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that 
the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:28:13.501285 kubelet[1821]: E0213 15:28:13.501184 1821 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"49f1196780b8e0339457d215d602d2dc53ee15be785d54d0ef9b507b9f6be0a3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-5c8kb" Feb 13 15:28:13.501285 kubelet[1821]: E0213 15:28:13.501212 1821 kuberuntime_manager.go:1172] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"49f1196780b8e0339457d215d602d2dc53ee15be785d54d0ef9b507b9f6be0a3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-5c8kb" Feb 13 15:28:13.501572 kubelet[1821]: E0213 15:28:13.501290 1821 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-5c8kb_calico-system(6ec94504-4c12-4803-811c-f4f9cd9226c0)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-5c8kb_calico-system(6ec94504-4c12-4803-811c-f4f9cd9226c0)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"49f1196780b8e0339457d215d602d2dc53ee15be785d54d0ef9b507b9f6be0a3\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-5c8kb" podUID="6ec94504-4c12-4803-811c-f4f9cd9226c0" Feb 13 15:28:13.522870 containerd[1495]: time="2025-02-13T15:28:13.522795327Z" level=error msg="Failed to destroy network for sandbox 
\"b96090a6063c85a80c45dad376e348a726b6203c0400de1d14f726ed08dfbcf8\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:28:13.526580 containerd[1495]: time="2025-02-13T15:28:13.526524760Z" level=error msg="encountered an error cleaning up failed sandbox \"b96090a6063c85a80c45dad376e348a726b6203c0400de1d14f726ed08dfbcf8\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:28:13.526657 containerd[1495]: time="2025-02-13T15:28:13.526609523Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-6d5f899847-qqssm,Uid:7bf5ff02-8e42-4454-9542-829060b4d158,Namespace:default,Attempt:3,} failed, error" error="failed to setup network for sandbox \"b96090a6063c85a80c45dad376e348a726b6203c0400de1d14f726ed08dfbcf8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:28:13.526955 kubelet[1821]: E0213 15:28:13.526920 1821 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b96090a6063c85a80c45dad376e348a726b6203c0400de1d14f726ed08dfbcf8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:28:13.527093 kubelet[1821]: E0213 15:28:13.526990 1821 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b96090a6063c85a80c45dad376e348a726b6203c0400de1d14f726ed08dfbcf8\": plugin type=\"calico\" 
failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="default/nginx-deployment-6d5f899847-qqssm" Feb 13 15:28:13.527093 kubelet[1821]: E0213 15:28:13.527019 1821 kuberuntime_manager.go:1172] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b96090a6063c85a80c45dad376e348a726b6203c0400de1d14f726ed08dfbcf8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="default/nginx-deployment-6d5f899847-qqssm" Feb 13 15:28:13.527093 kubelet[1821]: E0213 15:28:13.527085 1821 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"nginx-deployment-6d5f899847-qqssm_default(7bf5ff02-8e42-4454-9542-829060b4d158)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"nginx-deployment-6d5f899847-qqssm_default(7bf5ff02-8e42-4454-9542-829060b4d158)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"b96090a6063c85a80c45dad376e348a726b6203c0400de1d14f726ed08dfbcf8\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="default/nginx-deployment-6d5f899847-qqssm" podUID="7bf5ff02-8e42-4454-9542-829060b4d158" Feb 13 15:28:13.924718 kubelet[1821]: E0213 15:28:13.924575 1821 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 15:28:14.256204 kubelet[1821]: I0213 15:28:14.251690 1821 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b96090a6063c85a80c45dad376e348a726b6203c0400de1d14f726ed08dfbcf8" Feb 13 15:28:14.256337 containerd[1495]: time="2025-02-13T15:28:14.254205346Z" level=info 
msg="StopPodSandbox for \"b96090a6063c85a80c45dad376e348a726b6203c0400de1d14f726ed08dfbcf8\"" Feb 13 15:28:14.256337 containerd[1495]: time="2025-02-13T15:28:14.254445956Z" level=info msg="Ensure that sandbox b96090a6063c85a80c45dad376e348a726b6203c0400de1d14f726ed08dfbcf8 in task-service has been cleanup successfully" Feb 13 15:28:14.244949 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-49f1196780b8e0339457d215d602d2dc53ee15be785d54d0ef9b507b9f6be0a3-shm.mount: Deactivated successfully. Feb 13 15:28:14.261322 containerd[1495]: time="2025-02-13T15:28:14.256918716Z" level=info msg="TearDown network for sandbox \"b96090a6063c85a80c45dad376e348a726b6203c0400de1d14f726ed08dfbcf8\" successfully" Feb 13 15:28:14.261322 containerd[1495]: time="2025-02-13T15:28:14.256951829Z" level=info msg="StopPodSandbox for \"b96090a6063c85a80c45dad376e348a726b6203c0400de1d14f726ed08dfbcf8\" returns successfully" Feb 13 15:28:14.259748 systemd[1]: run-netns-cni\x2d83dfbd00\x2d27cf\x2d70ba\x2d56fa\x2d00d26fe1f295.mount: Deactivated successfully. 
Feb 13 15:28:14.261704 containerd[1495]: time="2025-02-13T15:28:14.261653373Z" level=info msg="StopPodSandbox for \"2b0373d796d71bf2d4e5abfdba53a8d0ab2c3554b8bcfb9af29d6ec95902123a\"" Feb 13 15:28:14.261876 containerd[1495]: time="2025-02-13T15:28:14.261789564Z" level=info msg="TearDown network for sandbox \"2b0373d796d71bf2d4e5abfdba53a8d0ab2c3554b8bcfb9af29d6ec95902123a\" successfully" Feb 13 15:28:14.261876 containerd[1495]: time="2025-02-13T15:28:14.261811977Z" level=info msg="StopPodSandbox for \"2b0373d796d71bf2d4e5abfdba53a8d0ab2c3554b8bcfb9af29d6ec95902123a\" returns successfully" Feb 13 15:28:14.262914 containerd[1495]: time="2025-02-13T15:28:14.262878246Z" level=info msg="StopPodSandbox for \"9c5513243ac88ec95ad476aab76d370a2d275403daac98aaddcc373286cd4510\"" Feb 13 15:28:14.263256 containerd[1495]: time="2025-02-13T15:28:14.263160846Z" level=info msg="TearDown network for sandbox \"9c5513243ac88ec95ad476aab76d370a2d275403daac98aaddcc373286cd4510\" successfully" Feb 13 15:28:14.263256 containerd[1495]: time="2025-02-13T15:28:14.263179372Z" level=info msg="StopPodSandbox for \"9c5513243ac88ec95ad476aab76d370a2d275403daac98aaddcc373286cd4510\" returns successfully" Feb 13 15:28:14.263622 containerd[1495]: time="2025-02-13T15:28:14.263580840Z" level=info msg="StopPodSandbox for \"f5b3ada3e9fbec16b5777d5d0376c2bb4221c7ce1c622487c02097168a69bc3b\"" Feb 13 15:28:14.263712 containerd[1495]: time="2025-02-13T15:28:14.263681533Z" level=info msg="TearDown network for sandbox \"f5b3ada3e9fbec16b5777d5d0376c2bb4221c7ce1c622487c02097168a69bc3b\" successfully" Feb 13 15:28:14.263712 containerd[1495]: time="2025-02-13T15:28:14.263700609Z" level=info msg="StopPodSandbox for \"f5b3ada3e9fbec16b5777d5d0376c2bb4221c7ce1c622487c02097168a69bc3b\" returns successfully" Feb 13 15:28:14.264709 containerd[1495]: time="2025-02-13T15:28:14.264304724Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:nginx-deployment-6d5f899847-qqssm,Uid:7bf5ff02-8e42-4454-9542-829060b4d158,Namespace:default,Attempt:4,}" Feb 13 15:28:14.264911 kubelet[1821]: I0213 15:28:14.264872 1821 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="49f1196780b8e0339457d215d602d2dc53ee15be785d54d0ef9b507b9f6be0a3" Feb 13 15:28:14.265609 containerd[1495]: time="2025-02-13T15:28:14.265542032Z" level=info msg="StopPodSandbox for \"49f1196780b8e0339457d215d602d2dc53ee15be785d54d0ef9b507b9f6be0a3\"" Feb 13 15:28:14.265832 containerd[1495]: time="2025-02-13T15:28:14.265800605Z" level=info msg="Ensure that sandbox 49f1196780b8e0339457d215d602d2dc53ee15be785d54d0ef9b507b9f6be0a3 in task-service has been cleanup successfully" Feb 13 15:28:14.266611 containerd[1495]: time="2025-02-13T15:28:14.266579516Z" level=info msg="TearDown network for sandbox \"49f1196780b8e0339457d215d602d2dc53ee15be785d54d0ef9b507b9f6be0a3\" successfully" Feb 13 15:28:14.266611 containerd[1495]: time="2025-02-13T15:28:14.266602730Z" level=info msg="StopPodSandbox for \"49f1196780b8e0339457d215d602d2dc53ee15be785d54d0ef9b507b9f6be0a3\" returns successfully" Feb 13 15:28:14.271557 containerd[1495]: time="2025-02-13T15:28:14.270221903Z" level=info msg="StopPodSandbox for \"7cba264e65e7e5f1536749acc9df7326036aa529ccc12668c4923791546ab529\"" Feb 13 15:28:14.271557 containerd[1495]: time="2025-02-13T15:28:14.270365939Z" level=info msg="TearDown network for sandbox \"7cba264e65e7e5f1536749acc9df7326036aa529ccc12668c4923791546ab529\" successfully" Feb 13 15:28:14.271557 containerd[1495]: time="2025-02-13T15:28:14.270432486Z" level=info msg="StopPodSandbox for \"7cba264e65e7e5f1536749acc9df7326036aa529ccc12668c4923791546ab529\" returns successfully" Feb 13 15:28:14.271557 containerd[1495]: time="2025-02-13T15:28:14.271318120Z" level=info msg="StopPodSandbox for \"6ff165ccf2d77d03c0393a382a7a1981febb503cc6026c3c6723d975b507b23c\"" Feb 13 15:28:14.271557 containerd[1495]: 
time="2025-02-13T15:28:14.271410537Z" level=info msg="TearDown network for sandbox \"6ff165ccf2d77d03c0393a382a7a1981febb503cc6026c3c6723d975b507b23c\" successfully" Feb 13 15:28:14.271557 containerd[1495]: time="2025-02-13T15:28:14.271423631Z" level=info msg="StopPodSandbox for \"6ff165ccf2d77d03c0393a382a7a1981febb503cc6026c3c6723d975b507b23c\" returns successfully" Feb 13 15:28:14.270509 systemd[1]: run-netns-cni\x2d34e4eb7a\x2d225b\x2d2b59\x2d6706\x2deaf57b51b7ab.mount: Deactivated successfully. Feb 13 15:28:14.273497 containerd[1495]: time="2025-02-13T15:28:14.273451391Z" level=info msg="StopPodSandbox for \"783ab3cab4ee3eca858e0910259de5469727b1ebb84b9dfff10aa54dc395791e\"" Feb 13 15:28:14.273615 containerd[1495]: time="2025-02-13T15:28:14.273580667Z" level=info msg="TearDown network for sandbox \"783ab3cab4ee3eca858e0910259de5469727b1ebb84b9dfff10aa54dc395791e\" successfully" Feb 13 15:28:14.273615 containerd[1495]: time="2025-02-13T15:28:14.273601768Z" level=info msg="StopPodSandbox for \"783ab3cab4ee3eca858e0910259de5469727b1ebb84b9dfff10aa54dc395791e\" returns successfully" Feb 13 15:28:14.273990 containerd[1495]: time="2025-02-13T15:28:14.273955926Z" level=info msg="StopPodSandbox for \"88eff125ed329781ac6966b8bafc601b531d00e9c472f0d107fc725f239c99b9\"" Feb 13 15:28:14.274610 containerd[1495]: time="2025-02-13T15:28:14.274051168Z" level=info msg="TearDown network for sandbox \"88eff125ed329781ac6966b8bafc601b531d00e9c472f0d107fc725f239c99b9\" successfully" Feb 13 15:28:14.274610 containerd[1495]: time="2025-02-13T15:28:14.274069182Z" level=info msg="StopPodSandbox for \"88eff125ed329781ac6966b8bafc601b531d00e9c472f0d107fc725f239c99b9\" returns successfully" Feb 13 15:28:14.275473 containerd[1495]: time="2025-02-13T15:28:14.275410317Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-5c8kb,Uid:6ec94504-4c12-4803-811c-f4f9cd9226c0,Namespace:calico-system,Attempt:5,}" Feb 13 15:28:14.495681 containerd[1495]: 
time="2025-02-13T15:28:14.495616096Z" level=error msg="Failed to destroy network for sandbox \"90d175e759c1cafafe015d36cec19db50c87999f6cbd087262070f416bcdfb62\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:28:14.496089 containerd[1495]: time="2025-02-13T15:28:14.496052210Z" level=error msg="encountered an error cleaning up failed sandbox \"90d175e759c1cafafe015d36cec19db50c87999f6cbd087262070f416bcdfb62\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:28:14.496158 containerd[1495]: time="2025-02-13T15:28:14.496129137Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-6d5f899847-qqssm,Uid:7bf5ff02-8e42-4454-9542-829060b4d158,Namespace:default,Attempt:4,} failed, error" error="failed to setup network for sandbox \"90d175e759c1cafafe015d36cec19db50c87999f6cbd087262070f416bcdfb62\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:28:14.496483 kubelet[1821]: E0213 15:28:14.496431 1821 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"90d175e759c1cafafe015d36cec19db50c87999f6cbd087262070f416bcdfb62\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:28:14.496553 kubelet[1821]: E0213 15:28:14.496512 1821 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox 
\"90d175e759c1cafafe015d36cec19db50c87999f6cbd087262070f416bcdfb62\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="default/nginx-deployment-6d5f899847-qqssm" Feb 13 15:28:14.496553 kubelet[1821]: E0213 15:28:14.496541 1821 kuberuntime_manager.go:1172] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"90d175e759c1cafafe015d36cec19db50c87999f6cbd087262070f416bcdfb62\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="default/nginx-deployment-6d5f899847-qqssm" Feb 13 15:28:14.496626 kubelet[1821]: E0213 15:28:14.496608 1821 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"nginx-deployment-6d5f899847-qqssm_default(7bf5ff02-8e42-4454-9542-829060b4d158)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"nginx-deployment-6d5f899847-qqssm_default(7bf5ff02-8e42-4454-9542-829060b4d158)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"90d175e759c1cafafe015d36cec19db50c87999f6cbd087262070f416bcdfb62\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="default/nginx-deployment-6d5f899847-qqssm" podUID="7bf5ff02-8e42-4454-9542-829060b4d158" Feb 13 15:28:14.553929 containerd[1495]: time="2025-02-13T15:28:14.552989964Z" level=error msg="Failed to destroy network for sandbox \"ea9766a98a29ed507fc9f0b17c3c0bb97f9e63eb6bdaeaf3ebb5fce0e56546d4\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 
13 15:28:14.553929 containerd[1495]: time="2025-02-13T15:28:14.553412622Z" level=error msg="encountered an error cleaning up failed sandbox \"ea9766a98a29ed507fc9f0b17c3c0bb97f9e63eb6bdaeaf3ebb5fce0e56546d4\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:28:14.553929 containerd[1495]: time="2025-02-13T15:28:14.553490311Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-5c8kb,Uid:6ec94504-4c12-4803-811c-f4f9cd9226c0,Namespace:calico-system,Attempt:5,} failed, error" error="failed to setup network for sandbox \"ea9766a98a29ed507fc9f0b17c3c0bb97f9e63eb6bdaeaf3ebb5fce0e56546d4\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:28:14.554119 kubelet[1821]: E0213 15:28:14.553881 1821 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ea9766a98a29ed507fc9f0b17c3c0bb97f9e63eb6bdaeaf3ebb5fce0e56546d4\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:28:14.554119 kubelet[1821]: E0213 15:28:14.553960 1821 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ea9766a98a29ed507fc9f0b17c3c0bb97f9e63eb6bdaeaf3ebb5fce0e56546d4\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-5c8kb" Feb 13 15:28:14.554119 kubelet[1821]: E0213 15:28:14.553986 1821 
kuberuntime_manager.go:1172] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ea9766a98a29ed507fc9f0b17c3c0bb97f9e63eb6bdaeaf3ebb5fce0e56546d4\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-5c8kb" Feb 13 15:28:14.554229 kubelet[1821]: E0213 15:28:14.554070 1821 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-5c8kb_calico-system(6ec94504-4c12-4803-811c-f4f9cd9226c0)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-5c8kb_calico-system(6ec94504-4c12-4803-811c-f4f9cd9226c0)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"ea9766a98a29ed507fc9f0b17c3c0bb97f9e63eb6bdaeaf3ebb5fce0e56546d4\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-5c8kb" podUID="6ec94504-4c12-4803-811c-f4f9cd9226c0" Feb 13 15:28:14.925444 kubelet[1821]: E0213 15:28:14.925287 1821 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 15:28:15.227907 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-ea9766a98a29ed507fc9f0b17c3c0bb97f9e63eb6bdaeaf3ebb5fce0e56546d4-shm.mount: Deactivated successfully. Feb 13 15:28:15.228034 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-90d175e759c1cafafe015d36cec19db50c87999f6cbd087262070f416bcdfb62-shm.mount: Deactivated successfully. 
Feb 13 15:28:15.269619 kubelet[1821]: I0213 15:28:15.269585 1821 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ea9766a98a29ed507fc9f0b17c3c0bb97f9e63eb6bdaeaf3ebb5fce0e56546d4" Feb 13 15:28:15.270184 containerd[1495]: time="2025-02-13T15:28:15.270149899Z" level=info msg="StopPodSandbox for \"ea9766a98a29ed507fc9f0b17c3c0bb97f9e63eb6bdaeaf3ebb5fce0e56546d4\"" Feb 13 15:28:15.270514 containerd[1495]: time="2025-02-13T15:28:15.270438922Z" level=info msg="Ensure that sandbox ea9766a98a29ed507fc9f0b17c3c0bb97f9e63eb6bdaeaf3ebb5fce0e56546d4 in task-service has been cleanup successfully" Feb 13 15:28:15.271695 kubelet[1821]: I0213 15:28:15.271666 1821 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="90d175e759c1cafafe015d36cec19db50c87999f6cbd087262070f416bcdfb62" Feb 13 15:28:15.272072 containerd[1495]: time="2025-02-13T15:28:15.272040181Z" level=info msg="StopPodSandbox for \"90d175e759c1cafafe015d36cec19db50c87999f6cbd087262070f416bcdfb62\"" Feb 13 15:28:15.272304 containerd[1495]: time="2025-02-13T15:28:15.272271383Z" level=info msg="Ensure that sandbox 90d175e759c1cafafe015d36cec19db50c87999f6cbd087262070f416bcdfb62 in task-service has been cleanup successfully" Feb 13 15:28:15.272504 containerd[1495]: time="2025-02-13T15:28:15.272465794Z" level=info msg="TearDown network for sandbox \"90d175e759c1cafafe015d36cec19db50c87999f6cbd087262070f416bcdfb62\" successfully" Feb 13 15:28:15.272540 containerd[1495]: time="2025-02-13T15:28:15.272502344Z" level=info msg="StopPodSandbox for \"90d175e759c1cafafe015d36cec19db50c87999f6cbd087262070f416bcdfb62\" returns successfully" Feb 13 15:28:15.272789 containerd[1495]: time="2025-02-13T15:28:15.272731031Z" level=info msg="StopPodSandbox for \"b96090a6063c85a80c45dad376e348a726b6203c0400de1d14f726ed08dfbcf8\"" Feb 13 15:28:15.272924 containerd[1495]: time="2025-02-13T15:28:15.272847283Z" level=info msg="TearDown network for sandbox 
\"b96090a6063c85a80c45dad376e348a726b6203c0400de1d14f726ed08dfbcf8\" successfully" Feb 13 15:28:15.272924 containerd[1495]: time="2025-02-13T15:28:15.272859026Z" level=info msg="StopPodSandbox for \"b96090a6063c85a80c45dad376e348a726b6203c0400de1d14f726ed08dfbcf8\" returns successfully" Feb 13 15:28:15.273123 containerd[1495]: time="2025-02-13T15:28:15.273088254Z" level=info msg="StopPodSandbox for \"2b0373d796d71bf2d4e5abfdba53a8d0ab2c3554b8bcfb9af29d6ec95902123a\"" Feb 13 15:28:15.273212 containerd[1495]: time="2025-02-13T15:28:15.273187112Z" level=info msg="TearDown network for sandbox \"2b0373d796d71bf2d4e5abfdba53a8d0ab2c3554b8bcfb9af29d6ec95902123a\" successfully" Feb 13 15:28:15.273212 containerd[1495]: time="2025-02-13T15:28:15.273206239Z" level=info msg="StopPodSandbox for \"2b0373d796d71bf2d4e5abfdba53a8d0ab2c3554b8bcfb9af29d6ec95902123a\" returns successfully" Feb 13 15:28:15.273426 containerd[1495]: time="2025-02-13T15:28:15.273405730Z" level=info msg="StopPodSandbox for \"9c5513243ac88ec95ad476aab76d370a2d275403daac98aaddcc373286cd4510\"" Feb 13 15:28:15.273511 containerd[1495]: time="2025-02-13T15:28:15.273495622Z" level=info msg="TearDown network for sandbox \"9c5513243ac88ec95ad476aab76d370a2d275403daac98aaddcc373286cd4510\" successfully" Feb 13 15:28:15.273511 containerd[1495]: time="2025-02-13T15:28:15.273508447Z" level=info msg="StopPodSandbox for \"9c5513243ac88ec95ad476aab76d370a2d275403daac98aaddcc373286cd4510\" returns successfully" Feb 13 15:28:15.273779 containerd[1495]: time="2025-02-13T15:28:15.273752312Z" level=info msg="StopPodSandbox for \"f5b3ada3e9fbec16b5777d5d0376c2bb4221c7ce1c622487c02097168a69bc3b\"" Feb 13 15:28:15.273861 containerd[1495]: time="2025-02-13T15:28:15.273839368Z" level=info msg="TearDown network for sandbox \"f5b3ada3e9fbec16b5777d5d0376c2bb4221c7ce1c622487c02097168a69bc3b\" successfully" Feb 13 15:28:15.273898 containerd[1495]: time="2025-02-13T15:28:15.273858525Z" level=info msg="StopPodSandbox for 
\"f5b3ada3e9fbec16b5777d5d0376c2bb4221c7ce1c622487c02097168a69bc3b\" returns successfully" Feb 13 15:28:15.275372 systemd[1]: run-netns-cni\x2db3a29ed7\x2d94ee\x2d468a\x2dc3cd\x2d49af23eed564.mount: Deactivated successfully. Feb 13 15:28:15.275469 systemd[1]: run-netns-cni\x2d5c8e3ffd\x2d3d1e\x2da105\x2d5ca5\x2d5699a496f413.mount: Deactivated successfully. Feb 13 15:28:15.276215 containerd[1495]: time="2025-02-13T15:28:15.276028771Z" level=info msg="TearDown network for sandbox \"ea9766a98a29ed507fc9f0b17c3c0bb97f9e63eb6bdaeaf3ebb5fce0e56546d4\" successfully" Feb 13 15:28:15.276215 containerd[1495]: time="2025-02-13T15:28:15.276054550Z" level=info msg="StopPodSandbox for \"ea9766a98a29ed507fc9f0b17c3c0bb97f9e63eb6bdaeaf3ebb5fce0e56546d4\" returns successfully" Feb 13 15:28:15.276751 containerd[1495]: time="2025-02-13T15:28:15.276351879Z" level=info msg="StopPodSandbox for \"49f1196780b8e0339457d215d602d2dc53ee15be785d54d0ef9b507b9f6be0a3\"" Feb 13 15:28:15.276751 containerd[1495]: time="2025-02-13T15:28:15.276428775Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-6d5f899847-qqssm,Uid:7bf5ff02-8e42-4454-9542-829060b4d158,Namespace:default,Attempt:5,}" Feb 13 15:28:15.276751 containerd[1495]: time="2025-02-13T15:28:15.276445998Z" level=info msg="TearDown network for sandbox \"49f1196780b8e0339457d215d602d2dc53ee15be785d54d0ef9b507b9f6be0a3\" successfully" Feb 13 15:28:15.276751 containerd[1495]: time="2025-02-13T15:28:15.276460006Z" level=info msg="StopPodSandbox for \"49f1196780b8e0339457d215d602d2dc53ee15be785d54d0ef9b507b9f6be0a3\" returns successfully" Feb 13 15:28:15.276850 containerd[1495]: time="2025-02-13T15:28:15.276792450Z" level=info msg="StopPodSandbox for \"7cba264e65e7e5f1536749acc9df7326036aa529ccc12668c4923791546ab529\"" Feb 13 15:28:15.276942 containerd[1495]: time="2025-02-13T15:28:15.276920806Z" level=info msg="TearDown network for sandbox \"7cba264e65e7e5f1536749acc9df7326036aa529ccc12668c4923791546ab529\" 
successfully" Feb 13 15:28:15.276973 containerd[1495]: time="2025-02-13T15:28:15.276940663Z" level=info msg="StopPodSandbox for \"7cba264e65e7e5f1536749acc9df7326036aa529ccc12668c4923791546ab529\" returns successfully" Feb 13 15:28:15.277196 containerd[1495]: time="2025-02-13T15:28:15.277168790Z" level=info msg="StopPodSandbox for \"6ff165ccf2d77d03c0393a382a7a1981febb503cc6026c3c6723d975b507b23c\"" Feb 13 15:28:15.277288 containerd[1495]: time="2025-02-13T15:28:15.277260886Z" level=info msg="TearDown network for sandbox \"6ff165ccf2d77d03c0393a382a7a1981febb503cc6026c3c6723d975b507b23c\" successfully" Feb 13 15:28:15.277288 containerd[1495]: time="2025-02-13T15:28:15.277284200Z" level=info msg="StopPodSandbox for \"6ff165ccf2d77d03c0393a382a7a1981febb503cc6026c3c6723d975b507b23c\" returns successfully" Feb 13 15:28:15.277593 containerd[1495]: time="2025-02-13T15:28:15.277556259Z" level=info msg="StopPodSandbox for \"783ab3cab4ee3eca858e0910259de5469727b1ebb84b9dfff10aa54dc395791e\"" Feb 13 15:28:15.277672 containerd[1495]: time="2025-02-13T15:28:15.277646642Z" level=info msg="TearDown network for sandbox \"783ab3cab4ee3eca858e0910259de5469727b1ebb84b9dfff10aa54dc395791e\" successfully" Feb 13 15:28:15.277672 containerd[1495]: time="2025-02-13T15:28:15.277666951Z" level=info msg="StopPodSandbox for \"783ab3cab4ee3eca858e0910259de5469727b1ebb84b9dfff10aa54dc395791e\" returns successfully" Feb 13 15:28:15.277946 containerd[1495]: time="2025-02-13T15:28:15.277914945Z" level=info msg="StopPodSandbox for \"88eff125ed329781ac6966b8bafc601b531d00e9c472f0d107fc725f239c99b9\"" Feb 13 15:28:15.278018 containerd[1495]: time="2025-02-13T15:28:15.277995469Z" level=info msg="TearDown network for sandbox \"88eff125ed329781ac6966b8bafc601b531d00e9c472f0d107fc725f239c99b9\" successfully" Feb 13 15:28:15.278018 containerd[1495]: time="2025-02-13T15:28:15.278004175Z" level=info msg="StopPodSandbox for \"88eff125ed329781ac6966b8bafc601b531d00e9c472f0d107fc725f239c99b9\" returns 
successfully" Feb 13 15:28:15.278276 containerd[1495]: time="2025-02-13T15:28:15.278253171Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-5c8kb,Uid:6ec94504-4c12-4803-811c-f4f9cd9226c0,Namespace:calico-system,Attempt:6,}" Feb 13 15:28:15.926692 kubelet[1821]: E0213 15:28:15.926639 1821 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 15:28:16.844261 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount995886224.mount: Deactivated successfully. Feb 13 15:28:16.927200 kubelet[1821]: E0213 15:28:16.927145 1821 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 15:28:17.646837 containerd[1495]: time="2025-02-13T15:28:17.646775788Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 15:28:17.757134 containerd[1495]: time="2025-02-13T15:28:17.757063103Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.29.1: active requests=0, bytes read=142742010" Feb 13 15:28:17.800525 containerd[1495]: time="2025-02-13T15:28:17.800452815Z" level=error msg="Failed to destroy network for sandbox \"f5633c80af761908a5b4b2b32e77b1b33bf9cacc3ec3fd420c765718f3355eee\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:28:17.800978 containerd[1495]: time="2025-02-13T15:28:17.800939893Z" level=error msg="encountered an error cleaning up failed sandbox \"f5633c80af761908a5b4b2b32e77b1b33bf9cacc3ec3fd420c765718f3355eee\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 
15:28:17.801033 containerd[1495]: time="2025-02-13T15:28:17.801011940Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-6d5f899847-qqssm,Uid:7bf5ff02-8e42-4454-9542-829060b4d158,Namespace:default,Attempt:5,} failed, error" error="failed to setup network for sandbox \"f5633c80af761908a5b4b2b32e77b1b33bf9cacc3ec3fd420c765718f3355eee\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:28:17.801266 kubelet[1821]: E0213 15:28:17.801241 1821 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f5633c80af761908a5b4b2b32e77b1b33bf9cacc3ec3fd420c765718f3355eee\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:28:17.801327 kubelet[1821]: E0213 15:28:17.801298 1821 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f5633c80af761908a5b4b2b32e77b1b33bf9cacc3ec3fd420c765718f3355eee\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="default/nginx-deployment-6d5f899847-qqssm" Feb 13 15:28:17.801327 kubelet[1821]: E0213 15:28:17.801324 1821 kuberuntime_manager.go:1172] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f5633c80af761908a5b4b2b32e77b1b33bf9cacc3ec3fd420c765718f3355eee\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="default/nginx-deployment-6d5f899847-qqssm" Feb 13 
15:28:17.801394 kubelet[1821]: E0213 15:28:17.801380 1821 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"nginx-deployment-6d5f899847-qqssm_default(7bf5ff02-8e42-4454-9542-829060b4d158)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"nginx-deployment-6d5f899847-qqssm_default(7bf5ff02-8e42-4454-9542-829060b4d158)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"f5633c80af761908a5b4b2b32e77b1b33bf9cacc3ec3fd420c765718f3355eee\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="default/nginx-deployment-6d5f899847-qqssm" podUID="7bf5ff02-8e42-4454-9542-829060b4d158" Feb 13 15:28:17.805313 containerd[1495]: time="2025-02-13T15:28:17.805275848Z" level=info msg="ImageCreate event name:\"sha256:feb26d4585d68e875d9bd9bd6c27ea9f2d5c9ed9ef70f8b8cb0ebb0559a1d664\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 15:28:17.816429 containerd[1495]: time="2025-02-13T15:28:17.816378934Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:99c3917516efe1f807a0cfdf2d14b628b7c5cc6bd8a9ee5a253154f31756bea1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 15:28:17.817024 containerd[1495]: time="2025-02-13T15:28:17.817001270Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.29.1\" with image id \"sha256:feb26d4585d68e875d9bd9bd6c27ea9f2d5c9ed9ef70f8b8cb0ebb0559a1d664\", repo tag \"ghcr.io/flatcar/calico/node:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/node@sha256:99c3917516efe1f807a0cfdf2d14b628b7c5cc6bd8a9ee5a253154f31756bea1\", size \"142741872\" in 8.594025643s" Feb 13 15:28:17.817097 containerd[1495]: time="2025-02-13T15:28:17.817029354Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.29.1\" returns image reference 
\"sha256:feb26d4585d68e875d9bd9bd6c27ea9f2d5c9ed9ef70f8b8cb0ebb0559a1d664\"" Feb 13 15:28:17.825460 containerd[1495]: time="2025-02-13T15:28:17.825424447Z" level=info msg="CreateContainer within sandbox \"8bcfe825753c39d0a82076615094afb1a89f6863af7cead385cd5bab2086b1ce\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Feb 13 15:28:17.828362 containerd[1495]: time="2025-02-13T15:28:17.828327601Z" level=error msg="Failed to destroy network for sandbox \"1f1b29e69b25874b131043ccdb4a51202320c60b2c3ba21b6a0dfeb1c12b2cf9\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:28:17.828756 containerd[1495]: time="2025-02-13T15:28:17.828709198Z" level=error msg="encountered an error cleaning up failed sandbox \"1f1b29e69b25874b131043ccdb4a51202320c60b2c3ba21b6a0dfeb1c12b2cf9\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:28:17.828819 containerd[1495]: time="2025-02-13T15:28:17.828791235Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-5c8kb,Uid:6ec94504-4c12-4803-811c-f4f9cd9226c0,Namespace:calico-system,Attempt:6,} failed, error" error="failed to setup network for sandbox \"1f1b29e69b25874b131043ccdb4a51202320c60b2c3ba21b6a0dfeb1c12b2cf9\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:28:17.829111 kubelet[1821]: E0213 15:28:17.829084 1821 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"1f1b29e69b25874b131043ccdb4a51202320c60b2c3ba21b6a0dfeb1c12b2cf9\": plugin 
type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 15:28:17.829200 kubelet[1821]: E0213 15:28:17.829139 1821 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"1f1b29e69b25874b131043ccdb4a51202320c60b2c3ba21b6a0dfeb1c12b2cf9\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-5c8kb" Feb 13 15:28:17.829200 kubelet[1821]: E0213 15:28:17.829160 1821 kuberuntime_manager.go:1172] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"1f1b29e69b25874b131043ccdb4a51202320c60b2c3ba21b6a0dfeb1c12b2cf9\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-5c8kb" Feb 13 15:28:17.829272 kubelet[1821]: E0213 15:28:17.829206 1821 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-5c8kb_calico-system(6ec94504-4c12-4803-811c-f4f9cd9226c0)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-5c8kb_calico-system(6ec94504-4c12-4803-811c-f4f9cd9226c0)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"1f1b29e69b25874b131043ccdb4a51202320c60b2c3ba21b6a0dfeb1c12b2cf9\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-5c8kb" podUID="6ec94504-4c12-4803-811c-f4f9cd9226c0" Feb 13 15:28:17.845831 systemd[1]: 
run-containerd-io.containerd.grpc.v1.cri-sandboxes-1f1b29e69b25874b131043ccdb4a51202320c60b2c3ba21b6a0dfeb1c12b2cf9-shm.mount: Deactivated successfully. Feb 13 15:28:17.845934 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-f5633c80af761908a5b4b2b32e77b1b33bf9cacc3ec3fd420c765718f3355eee-shm.mount: Deactivated successfully. Feb 13 15:28:17.927683 kubelet[1821]: E0213 15:28:17.927589 1821 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 15:28:18.084389 containerd[1495]: time="2025-02-13T15:28:18.084320598Z" level=info msg="CreateContainer within sandbox \"8bcfe825753c39d0a82076615094afb1a89f6863af7cead385cd5bab2086b1ce\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"77c50852341bf3c479e4fd50f9d0bcd78f025ee7c9fbee701e79b403f7804186\"" Feb 13 15:28:18.084984 containerd[1495]: time="2025-02-13T15:28:18.084936892Z" level=info msg="StartContainer for \"77c50852341bf3c479e4fd50f9d0bcd78f025ee7c9fbee701e79b403f7804186\"" Feb 13 15:28:18.117866 systemd[1]: Started cri-containerd-77c50852341bf3c479e4fd50f9d0bcd78f025ee7c9fbee701e79b403f7804186.scope - libcontainer container 77c50852341bf3c479e4fd50f9d0bcd78f025ee7c9fbee701e79b403f7804186. Feb 13 15:28:18.156827 containerd[1495]: time="2025-02-13T15:28:18.156768924Z" level=info msg="StartContainer for \"77c50852341bf3c479e4fd50f9d0bcd78f025ee7c9fbee701e79b403f7804186\" returns successfully" Feb 13 15:28:18.244222 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information. Feb 13 15:28:18.244362 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld . All Rights Reserved. 
Feb 13 15:28:18.282390 kubelet[1821]: E0213 15:28:18.282345 1821 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 15:28:18.286314 kubelet[1821]: I0213 15:28:18.286284 1821 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="1f1b29e69b25874b131043ccdb4a51202320c60b2c3ba21b6a0dfeb1c12b2cf9" Feb 13 15:28:18.287015 containerd[1495]: time="2025-02-13T15:28:18.286692428Z" level=info msg="StopPodSandbox for \"1f1b29e69b25874b131043ccdb4a51202320c60b2c3ba21b6a0dfeb1c12b2cf9\"" Feb 13 15:28:18.287015 containerd[1495]: time="2025-02-13T15:28:18.286877912Z" level=info msg="Ensure that sandbox 1f1b29e69b25874b131043ccdb4a51202320c60b2c3ba21b6a0dfeb1c12b2cf9 in task-service has been cleanup successfully" Feb 13 15:28:18.290283 kubelet[1821]: I0213 15:28:18.290231 1821 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f5633c80af761908a5b4b2b32e77b1b33bf9cacc3ec3fd420c765718f3355eee" Feb 13 15:28:18.290859 containerd[1495]: time="2025-02-13T15:28:18.290809509Z" level=info msg="StopPodSandbox for \"f5633c80af761908a5b4b2b32e77b1b33bf9cacc3ec3fd420c765718f3355eee\"" Feb 13 15:28:18.291102 containerd[1495]: time="2025-02-13T15:28:18.291068382Z" level=info msg="Ensure that sandbox f5633c80af761908a5b4b2b32e77b1b33bf9cacc3ec3fd420c765718f3355eee in task-service has been cleanup successfully" Feb 13 15:28:18.297685 kubelet[1821]: I0213 15:28:18.297646 1821 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="calico-system/calico-node-sxsmq" podStartSLOduration=4.2670971810000005 podStartE2EDuration="25.297577016s" podCreationTimestamp="2025-02-13 15:27:53 +0000 UTC" firstStartedPulling="2025-02-13 15:27:56.787244573 +0000 UTC m=+4.247366088" lastFinishedPulling="2025-02-13 15:28:17.817724398 +0000 UTC m=+25.277845923" observedRunningTime="2025-02-13 15:28:18.297376084 +0000 
UTC m=+25.757497599" watchObservedRunningTime="2025-02-13 15:28:18.297577016 +0000 UTC m=+25.757698531" Feb 13 15:28:18.326931 containerd[1495]: time="2025-02-13T15:28:18.326870320Z" level=info msg="TearDown network for sandbox \"1f1b29e69b25874b131043ccdb4a51202320c60b2c3ba21b6a0dfeb1c12b2cf9\" successfully" Feb 13 15:28:18.326931 containerd[1495]: time="2025-02-13T15:28:18.326923491Z" level=info msg="StopPodSandbox for \"1f1b29e69b25874b131043ccdb4a51202320c60b2c3ba21b6a0dfeb1c12b2cf9\" returns successfully" Feb 13 15:28:18.327267 containerd[1495]: time="2025-02-13T15:28:18.327008674Z" level=info msg="TearDown network for sandbox \"f5633c80af761908a5b4b2b32e77b1b33bf9cacc3ec3fd420c765718f3355eee\" successfully" Feb 13 15:28:18.327267 containerd[1495]: time="2025-02-13T15:28:18.327140655Z" level=info msg="StopPodSandbox for \"f5633c80af761908a5b4b2b32e77b1b33bf9cacc3ec3fd420c765718f3355eee\" returns successfully" Feb 13 15:28:18.327818 containerd[1495]: time="2025-02-13T15:28:18.327790733Z" level=info msg="StopPodSandbox for \"ea9766a98a29ed507fc9f0b17c3c0bb97f9e63eb6bdaeaf3ebb5fce0e56546d4\"" Feb 13 15:28:18.328058 containerd[1495]: time="2025-02-13T15:28:18.327946840Z" level=info msg="TearDown network for sandbox \"ea9766a98a29ed507fc9f0b17c3c0bb97f9e63eb6bdaeaf3ebb5fce0e56546d4\" successfully" Feb 13 15:28:18.328058 containerd[1495]: time="2025-02-13T15:28:18.327960396Z" level=info msg="StopPodSandbox for \"ea9766a98a29ed507fc9f0b17c3c0bb97f9e63eb6bdaeaf3ebb5fce0e56546d4\" returns successfully" Feb 13 15:28:18.328058 containerd[1495]: time="2025-02-13T15:28:18.328013707Z" level=info msg="StopPodSandbox for \"90d175e759c1cafafe015d36cec19db50c87999f6cbd087262070f416bcdfb62\"" Feb 13 15:28:18.328124 containerd[1495]: time="2025-02-13T15:28:18.328091806Z" level=info msg="TearDown network for sandbox \"90d175e759c1cafafe015d36cec19db50c87999f6cbd087262070f416bcdfb62\" successfully" Feb 13 15:28:18.328124 containerd[1495]: time="2025-02-13T15:28:18.328103879Z" 
level=info msg="StopPodSandbox for \"90d175e759c1cafafe015d36cec19db50c87999f6cbd087262070f416bcdfb62\" returns successfully" Feb 13 15:28:18.328684 containerd[1495]: time="2025-02-13T15:28:18.328653145Z" level=info msg="StopPodSandbox for \"b96090a6063c85a80c45dad376e348a726b6203c0400de1d14f726ed08dfbcf8\"" Feb 13 15:28:18.328781 containerd[1495]: time="2025-02-13T15:28:18.328762003Z" level=info msg="TearDown network for sandbox \"b96090a6063c85a80c45dad376e348a726b6203c0400de1d14f726ed08dfbcf8\" successfully" Feb 13 15:28:18.328814 containerd[1495]: time="2025-02-13T15:28:18.328779706Z" level=info msg="StopPodSandbox for \"b96090a6063c85a80c45dad376e348a726b6203c0400de1d14f726ed08dfbcf8\" returns successfully" Feb 13 15:28:18.328848 containerd[1495]: time="2025-02-13T15:28:18.328826956Z" level=info msg="StopPodSandbox for \"49f1196780b8e0339457d215d602d2dc53ee15be785d54d0ef9b507b9f6be0a3\"" Feb 13 15:28:18.328966 containerd[1495]: time="2025-02-13T15:28:18.328938699Z" level=info msg="TearDown network for sandbox \"49f1196780b8e0339457d215d602d2dc53ee15be785d54d0ef9b507b9f6be0a3\" successfully" Feb 13 15:28:18.328966 containerd[1495]: time="2025-02-13T15:28:18.328956021Z" level=info msg="StopPodSandbox for \"49f1196780b8e0339457d215d602d2dc53ee15be785d54d0ef9b507b9f6be0a3\" returns successfully" Feb 13 15:28:18.329394 containerd[1495]: time="2025-02-13T15:28:18.329368317Z" level=info msg="StopPodSandbox for \"2b0373d796d71bf2d4e5abfdba53a8d0ab2c3554b8bcfb9af29d6ec95902123a\"" Feb 13 15:28:18.329476 containerd[1495]: time="2025-02-13T15:28:18.329456265Z" level=info msg="TearDown network for sandbox \"2b0373d796d71bf2d4e5abfdba53a8d0ab2c3554b8bcfb9af29d6ec95902123a\" successfully" Feb 13 15:28:18.329476 containerd[1495]: time="2025-02-13T15:28:18.329471944Z" level=info msg="StopPodSandbox for \"2b0373d796d71bf2d4e5abfdba53a8d0ab2c3554b8bcfb9af29d6ec95902123a\" returns successfully" Feb 13 15:28:18.329564 containerd[1495]: time="2025-02-13T15:28:18.329525817Z" 
level=info msg="StopPodSandbox for \"7cba264e65e7e5f1536749acc9df7326036aa529ccc12668c4923791546ab529\"" Feb 13 15:28:18.329614 containerd[1495]: time="2025-02-13T15:28:18.329595149Z" level=info msg="TearDown network for sandbox \"7cba264e65e7e5f1536749acc9df7326036aa529ccc12668c4923791546ab529\" successfully" Feb 13 15:28:18.329614 containerd[1495]: time="2025-02-13T15:28:18.329611269Z" level=info msg="StopPodSandbox for \"7cba264e65e7e5f1536749acc9df7326036aa529ccc12668c4923791546ab529\" returns successfully" Feb 13 15:28:18.330221 containerd[1495]: time="2025-02-13T15:28:18.330165094Z" level=info msg="StopPodSandbox for \"6ff165ccf2d77d03c0393a382a7a1981febb503cc6026c3c6723d975b507b23c\"" Feb 13 15:28:18.330273 containerd[1495]: time="2025-02-13T15:28:18.330251148Z" level=info msg="TearDown network for sandbox \"6ff165ccf2d77d03c0393a382a7a1981febb503cc6026c3c6723d975b507b23c\" successfully" Feb 13 15:28:18.330273 containerd[1495]: time="2025-02-13T15:28:18.330268801Z" level=info msg="StopPodSandbox for \"6ff165ccf2d77d03c0393a382a7a1981febb503cc6026c3c6723d975b507b23c\" returns successfully" Feb 13 15:28:18.330371 containerd[1495]: time="2025-02-13T15:28:18.330346240Z" level=info msg="StopPodSandbox for \"9c5513243ac88ec95ad476aab76d370a2d275403daac98aaddcc373286cd4510\"" Feb 13 15:28:18.330611 containerd[1495]: time="2025-02-13T15:28:18.330587649Z" level=info msg="TearDown network for sandbox \"9c5513243ac88ec95ad476aab76d370a2d275403daac98aaddcc373286cd4510\" successfully" Feb 13 15:28:18.330611 containerd[1495]: time="2025-02-13T15:28:18.330606915Z" level=info msg="StopPodSandbox for \"9c5513243ac88ec95ad476aab76d370a2d275403daac98aaddcc373286cd4510\" returns successfully" Feb 13 15:28:18.330907 containerd[1495]: time="2025-02-13T15:28:18.330729058Z" level=info msg="StopPodSandbox for \"783ab3cab4ee3eca858e0910259de5469727b1ebb84b9dfff10aa54dc395791e\"" Feb 13 15:28:18.330907 containerd[1495]: time="2025-02-13T15:28:18.330840350Z" level=info msg="TearDown 
network for sandbox \"783ab3cab4ee3eca858e0910259de5469727b1ebb84b9dfff10aa54dc395791e\" successfully" Feb 13 15:28:18.330907 containerd[1495]: time="2025-02-13T15:28:18.330853595Z" level=info msg="StopPodSandbox for \"783ab3cab4ee3eca858e0910259de5469727b1ebb84b9dfff10aa54dc395791e\" returns successfully" Feb 13 15:28:18.331201 containerd[1495]: time="2025-02-13T15:28:18.331166522Z" level=info msg="StopPodSandbox for \"f5b3ada3e9fbec16b5777d5d0376c2bb4221c7ce1c622487c02097168a69bc3b\"" Feb 13 15:28:18.331274 containerd[1495]: time="2025-02-13T15:28:18.331255090Z" level=info msg="TearDown network for sandbox \"f5b3ada3e9fbec16b5777d5d0376c2bb4221c7ce1c622487c02097168a69bc3b\" successfully" Feb 13 15:28:18.331274 containerd[1495]: time="2025-02-13T15:28:18.331270269Z" level=info msg="StopPodSandbox for \"f5b3ada3e9fbec16b5777d5d0376c2bb4221c7ce1c622487c02097168a69bc3b\" returns successfully" Feb 13 15:28:18.331353 containerd[1495]: time="2025-02-13T15:28:18.331316938Z" level=info msg="StopPodSandbox for \"88eff125ed329781ac6966b8bafc601b531d00e9c472f0d107fc725f239c99b9\"" Feb 13 15:28:18.331422 containerd[1495]: time="2025-02-13T15:28:18.331401749Z" level=info msg="TearDown network for sandbox \"88eff125ed329781ac6966b8bafc601b531d00e9c472f0d107fc725f239c99b9\" successfully" Feb 13 15:28:18.331422 containerd[1495]: time="2025-02-13T15:28:18.331419072Z" level=info msg="StopPodSandbox for \"88eff125ed329781ac6966b8bafc601b531d00e9c472f0d107fc725f239c99b9\" returns successfully" Feb 13 15:28:18.332108 containerd[1495]: time="2025-02-13T15:28:18.332062637Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-5c8kb,Uid:6ec94504-4c12-4803-811c-f4f9cd9226c0,Namespace:calico-system,Attempt:7,}" Feb 13 15:28:18.332245 containerd[1495]: time="2025-02-13T15:28:18.332215880Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-6d5f899847-qqssm,Uid:7bf5ff02-8e42-4454-9542-829060b4d158,Namespace:default,Attempt:6,}" Feb 13 
15:28:18.591768 systemd-networkd[1413]: calie303a2ab05f: Link UP Feb 13 15:28:18.592022 systemd-networkd[1413]: calie303a2ab05f: Gained carrier Feb 13 15:28:18.638667 containerd[1495]: 2025-02-13 15:28:18.449 [INFO][2855] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Feb 13 15:28:18.638667 containerd[1495]: 2025-02-13 15:28:18.472 [INFO][2855] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {10.0.0.78-k8s-csi--node--driver--5c8kb-eth0 csi-node-driver- calico-system 6ec94504-4c12-4803-811c-f4f9cd9226c0 833 0 2025-02-13 15:27:53 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:55b695c467 k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s 10.0.0.78 csi-node-driver-5c8kb eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] calie303a2ab05f [] []}} ContainerID="d0344d3845772487d855b4758734b1d6bed207f54882e7398f6b11ca69fe1bcd" Namespace="calico-system" Pod="csi-node-driver-5c8kb" WorkloadEndpoint="10.0.0.78-k8s-csi--node--driver--5c8kb-" Feb 13 15:28:18.638667 containerd[1495]: 2025-02-13 15:28:18.472 [INFO][2855] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="d0344d3845772487d855b4758734b1d6bed207f54882e7398f6b11ca69fe1bcd" Namespace="calico-system" Pod="csi-node-driver-5c8kb" WorkloadEndpoint="10.0.0.78-k8s-csi--node--driver--5c8kb-eth0" Feb 13 15:28:18.638667 containerd[1495]: 2025-02-13 15:28:18.507 [INFO][2891] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="d0344d3845772487d855b4758734b1d6bed207f54882e7398f6b11ca69fe1bcd" HandleID="k8s-pod-network.d0344d3845772487d855b4758734b1d6bed207f54882e7398f6b11ca69fe1bcd" Workload="10.0.0.78-k8s-csi--node--driver--5c8kb-eth0" Feb 13 15:28:18.638667 containerd[1495]: 
2025-02-13 15:28:18.522 [INFO][2891] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="d0344d3845772487d855b4758734b1d6bed207f54882e7398f6b11ca69fe1bcd" HandleID="k8s-pod-network.d0344d3845772487d855b4758734b1d6bed207f54882e7398f6b11ca69fe1bcd" Workload="10.0.0.78-k8s-csi--node--driver--5c8kb-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002dd430), Attrs:map[string]string{"namespace":"calico-system", "node":"10.0.0.78", "pod":"csi-node-driver-5c8kb", "timestamp":"2025-02-13 15:28:18.507422799 +0000 UTC"}, Hostname:"10.0.0.78", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Feb 13 15:28:18.638667 containerd[1495]: 2025-02-13 15:28:18.522 [INFO][2891] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Feb 13 15:28:18.638667 containerd[1495]: 2025-02-13 15:28:18.522 [INFO][2891] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Feb 13 15:28:18.638667 containerd[1495]: 2025-02-13 15:28:18.522 [INFO][2891] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host '10.0.0.78' Feb 13 15:28:18.638667 containerd[1495]: 2025-02-13 15:28:18.524 [INFO][2891] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.d0344d3845772487d855b4758734b1d6bed207f54882e7398f6b11ca69fe1bcd" host="10.0.0.78" Feb 13 15:28:18.638667 containerd[1495]: 2025-02-13 15:28:18.531 [INFO][2891] ipam/ipam.go 372: Looking up existing affinities for host host="10.0.0.78" Feb 13 15:28:18.638667 containerd[1495]: 2025-02-13 15:28:18.535 [INFO][2891] ipam/ipam.go 521: Ran out of existing affine blocks for host host="10.0.0.78" Feb 13 15:28:18.638667 containerd[1495]: 2025-02-13 15:28:18.537 [INFO][2891] ipam/ipam.go 538: Tried all affine blocks. 
Looking for an affine block with space, or a new unclaimed block host="10.0.0.78" Feb 13 15:28:18.638667 containerd[1495]: 2025-02-13 15:28:18.540 [INFO][2891] ipam/ipam_block_reader_writer.go 154: Found free block: 192.168.18.64/26 Feb 13 15:28:18.638667 containerd[1495]: 2025-02-13 15:28:18.540 [INFO][2891] ipam/ipam.go 550: Found unclaimed block host="10.0.0.78" subnet=192.168.18.64/26 Feb 13 15:28:18.638667 containerd[1495]: 2025-02-13 15:28:18.540 [INFO][2891] ipam/ipam_block_reader_writer.go 171: Trying to create affinity in pending state host="10.0.0.78" subnet=192.168.18.64/26 Feb 13 15:28:18.638667 containerd[1495]: 2025-02-13 15:28:18.544 [INFO][2891] ipam/ipam_block_reader_writer.go 201: Successfully created pending affinity for block host="10.0.0.78" subnet=192.168.18.64/26 Feb 13 15:28:18.638667 containerd[1495]: 2025-02-13 15:28:18.544 [INFO][2891] ipam/ipam.go 155: Attempting to load block cidr=192.168.18.64/26 host="10.0.0.78" Feb 13 15:28:18.638667 containerd[1495]: 2025-02-13 15:28:18.545 [INFO][2891] ipam/ipam.go 160: The referenced block doesn't exist, trying to create it cidr=192.168.18.64/26 host="10.0.0.78" Feb 13 15:28:18.638667 containerd[1495]: 2025-02-13 15:28:18.548 [INFO][2891] ipam/ipam.go 167: Wrote affinity as pending cidr=192.168.18.64/26 host="10.0.0.78" Feb 13 15:28:18.638667 containerd[1495]: 2025-02-13 15:28:18.550 [INFO][2891] ipam/ipam.go 176: Attempting to claim the block cidr=192.168.18.64/26 host="10.0.0.78" Feb 13 15:28:18.638667 containerd[1495]: 2025-02-13 15:28:18.550 [INFO][2891] ipam/ipam_block_reader_writer.go 223: Attempting to create a new block host="10.0.0.78" subnet=192.168.18.64/26 Feb 13 15:28:18.638667 containerd[1495]: 2025-02-13 15:28:18.556 [INFO][2891] ipam/ipam_block_reader_writer.go 264: Successfully created block Feb 13 15:28:18.638667 containerd[1495]: 2025-02-13 15:28:18.556 [INFO][2891] ipam/ipam_block_reader_writer.go 275: Confirming affinity host="10.0.0.78" subnet=192.168.18.64/26 Feb 13 
15:28:18.638667 containerd[1495]: 2025-02-13 15:28:18.564 [INFO][2891] ipam/ipam_block_reader_writer.go 290: Successfully confirmed affinity host="10.0.0.78" subnet=192.168.18.64/26 Feb 13 15:28:18.638667 containerd[1495]: 2025-02-13 15:28:18.564 [INFO][2891] ipam/ipam.go 585: Block '192.168.18.64/26' has 64 free ips which is more than 1 ips required. host="10.0.0.78" subnet=192.168.18.64/26 Feb 13 15:28:18.638667 containerd[1495]: 2025-02-13 15:28:18.564 [INFO][2891] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.18.64/26 handle="k8s-pod-network.d0344d3845772487d855b4758734b1d6bed207f54882e7398f6b11ca69fe1bcd" host="10.0.0.78" Feb 13 15:28:18.638667 containerd[1495]: 2025-02-13 15:28:18.566 [INFO][2891] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.d0344d3845772487d855b4758734b1d6bed207f54882e7398f6b11ca69fe1bcd Feb 13 15:28:18.638667 containerd[1495]: 2025-02-13 15:28:18.573 [INFO][2891] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.18.64/26 handle="k8s-pod-network.d0344d3845772487d855b4758734b1d6bed207f54882e7398f6b11ca69fe1bcd" host="10.0.0.78" Feb 13 15:28:18.638667 containerd[1495]: 2025-02-13 15:28:18.579 [INFO][2891] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.18.64/26] block=192.168.18.64/26 handle="k8s-pod-network.d0344d3845772487d855b4758734b1d6bed207f54882e7398f6b11ca69fe1bcd" host="10.0.0.78" Feb 13 15:28:18.638667 containerd[1495]: 2025-02-13 15:28:18.579 [INFO][2891] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.18.64/26] handle="k8s-pod-network.d0344d3845772487d855b4758734b1d6bed207f54882e7398f6b11ca69fe1bcd" host="10.0.0.78" Feb 13 15:28:18.639480 containerd[1495]: 2025-02-13 15:28:18.579 [INFO][2891] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Feb 13 15:28:18.639480 containerd[1495]: 2025-02-13 15:28:18.579 [INFO][2891] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.18.64/26] IPv6=[] ContainerID="d0344d3845772487d855b4758734b1d6bed207f54882e7398f6b11ca69fe1bcd" HandleID="k8s-pod-network.d0344d3845772487d855b4758734b1d6bed207f54882e7398f6b11ca69fe1bcd" Workload="10.0.0.78-k8s-csi--node--driver--5c8kb-eth0" Feb 13 15:28:18.639480 containerd[1495]: 2025-02-13 15:28:18.583 [INFO][2855] cni-plugin/k8s.go 386: Populated endpoint ContainerID="d0344d3845772487d855b4758734b1d6bed207f54882e7398f6b11ca69fe1bcd" Namespace="calico-system" Pod="csi-node-driver-5c8kb" WorkloadEndpoint="10.0.0.78-k8s-csi--node--driver--5c8kb-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"10.0.0.78-k8s-csi--node--driver--5c8kb-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"6ec94504-4c12-4803-811c-f4f9cd9226c0", ResourceVersion:"833", Generation:0, CreationTimestamp:time.Date(2025, time.February, 13, 15, 27, 53, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"55b695c467", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"10.0.0.78", ContainerID:"", Pod:"csi-node-driver-5c8kb", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.18.64/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", 
Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"calie303a2ab05f", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Feb 13 15:28:18.639480 containerd[1495]: 2025-02-13 15:28:18.584 [INFO][2855] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.18.64/32] ContainerID="d0344d3845772487d855b4758734b1d6bed207f54882e7398f6b11ca69fe1bcd" Namespace="calico-system" Pod="csi-node-driver-5c8kb" WorkloadEndpoint="10.0.0.78-k8s-csi--node--driver--5c8kb-eth0" Feb 13 15:28:18.639480 containerd[1495]: 2025-02-13 15:28:18.584 [INFO][2855] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calie303a2ab05f ContainerID="d0344d3845772487d855b4758734b1d6bed207f54882e7398f6b11ca69fe1bcd" Namespace="calico-system" Pod="csi-node-driver-5c8kb" WorkloadEndpoint="10.0.0.78-k8s-csi--node--driver--5c8kb-eth0" Feb 13 15:28:18.639480 containerd[1495]: 2025-02-13 15:28:18.592 [INFO][2855] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="d0344d3845772487d855b4758734b1d6bed207f54882e7398f6b11ca69fe1bcd" Namespace="calico-system" Pod="csi-node-driver-5c8kb" WorkloadEndpoint="10.0.0.78-k8s-csi--node--driver--5c8kb-eth0" Feb 13 15:28:18.639480 containerd[1495]: 2025-02-13 15:28:18.592 [INFO][2855] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="d0344d3845772487d855b4758734b1d6bed207f54882e7398f6b11ca69fe1bcd" Namespace="calico-system" Pod="csi-node-driver-5c8kb" WorkloadEndpoint="10.0.0.78-k8s-csi--node--driver--5c8kb-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"10.0.0.78-k8s-csi--node--driver--5c8kb-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"6ec94504-4c12-4803-811c-f4f9cd9226c0", ResourceVersion:"833", Generation:0, CreationTimestamp:time.Date(2025, time.February, 13, 15, 27, 53, 
0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"55b695c467", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"10.0.0.78", ContainerID:"d0344d3845772487d855b4758734b1d6bed207f54882e7398f6b11ca69fe1bcd", Pod:"csi-node-driver-5c8kb", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.18.64/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"calie303a2ab05f", MAC:"ca:ef:b3:e9:bb:40", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Feb 13 15:28:18.639480 containerd[1495]: 2025-02-13 15:28:18.636 [INFO][2855] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="d0344d3845772487d855b4758734b1d6bed207f54882e7398f6b11ca69fe1bcd" Namespace="calico-system" Pod="csi-node-driver-5c8kb" WorkloadEndpoint="10.0.0.78-k8s-csi--node--driver--5c8kb-eth0" Feb 13 15:28:18.669596 systemd-networkd[1413]: califd10d3992e9: Link UP Feb 13 15:28:18.670138 systemd-networkd[1413]: califd10d3992e9: Gained carrier Feb 13 15:28:18.765016 containerd[1495]: time="2025-02-13T15:28:18.764881782Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 13 15:28:18.765016 containerd[1495]: time="2025-02-13T15:28:18.764940213Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 13 15:28:18.765016 containerd[1495]: time="2025-02-13T15:28:18.764950924Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 15:28:18.765646 containerd[1495]: time="2025-02-13T15:28:18.765034513Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 15:28:18.786030 systemd[1]: Started cri-containerd-d0344d3845772487d855b4758734b1d6bed207f54882e7398f6b11ca69fe1bcd.scope - libcontainer container d0344d3845772487d855b4758734b1d6bed207f54882e7398f6b11ca69fe1bcd. Feb 13 15:28:18.806208 systemd-resolved[1332]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Feb 13 15:28:18.817763 containerd[1495]: time="2025-02-13T15:28:18.817704032Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-5c8kb,Uid:6ec94504-4c12-4803-811c-f4f9cd9226c0,Namespace:calico-system,Attempt:7,} returns sandbox id \"d0344d3845772487d855b4758734b1d6bed207f54882e7398f6b11ca69fe1bcd\"" Feb 13 15:28:18.818947 containerd[1495]: time="2025-02-13T15:28:18.818929877Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.29.1\"" Feb 13 15:28:18.847380 systemd[1]: run-netns-cni\x2d1f902f50\x2d3e73\x2d2833\x2dd4a5\x2dec7e29aa86e8.mount: Deactivated successfully. Feb 13 15:28:18.847485 systemd[1]: run-netns-cni\x2dcaaa5064\x2d5643\x2df162\x2d1ca4\x2d17c41a6b2fc2.mount: Deactivated successfully. 
Feb 13 15:28:18.889282 containerd[1495]: 2025-02-13 15:28:18.469 [INFO][2867] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Feb 13 15:28:18.889282 containerd[1495]: 2025-02-13 15:28:18.479 [INFO][2867] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {10.0.0.78-k8s-nginx--deployment--6d5f899847--qqssm-eth0 nginx-deployment-6d5f899847- default 7bf5ff02-8e42-4454-9542-829060b4d158 1048 0 2025-02-13 15:28:10 +0000 UTC map[app:nginx pod-template-hash:6d5f899847 projectcalico.org/namespace:default projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:default] map[] [] [] []} {k8s 10.0.0.78 nginx-deployment-6d5f899847-qqssm eth0 default [] [] [kns.default ksa.default.default] califd10d3992e9 [] []}} ContainerID="e0f3d8de34de4a75e7f98cf7b029a34ae2d9b2274df9dd70ed10546b15b42701" Namespace="default" Pod="nginx-deployment-6d5f899847-qqssm" WorkloadEndpoint="10.0.0.78-k8s-nginx--deployment--6d5f899847--qqssm-" Feb 13 15:28:18.889282 containerd[1495]: 2025-02-13 15:28:18.480 [INFO][2867] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="e0f3d8de34de4a75e7f98cf7b029a34ae2d9b2274df9dd70ed10546b15b42701" Namespace="default" Pod="nginx-deployment-6d5f899847-qqssm" WorkloadEndpoint="10.0.0.78-k8s-nginx--deployment--6d5f899847--qqssm-eth0" Feb 13 15:28:18.889282 containerd[1495]: 2025-02-13 15:28:18.552 [INFO][2897] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="e0f3d8de34de4a75e7f98cf7b029a34ae2d9b2274df9dd70ed10546b15b42701" HandleID="k8s-pod-network.e0f3d8de34de4a75e7f98cf7b029a34ae2d9b2274df9dd70ed10546b15b42701" Workload="10.0.0.78-k8s-nginx--deployment--6d5f899847--qqssm-eth0" Feb 13 15:28:18.889282 containerd[1495]: 2025-02-13 15:28:18.563 [INFO][2897] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="e0f3d8de34de4a75e7f98cf7b029a34ae2d9b2274df9dd70ed10546b15b42701" 
HandleID="k8s-pod-network.e0f3d8de34de4a75e7f98cf7b029a34ae2d9b2274df9dd70ed10546b15b42701" Workload="10.0.0.78-k8s-nginx--deployment--6d5f899847--qqssm-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002ddda0), Attrs:map[string]string{"namespace":"default", "node":"10.0.0.78", "pod":"nginx-deployment-6d5f899847-qqssm", "timestamp":"2025-02-13 15:28:18.552377276 +0000 UTC"}, Hostname:"10.0.0.78", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Feb 13 15:28:18.889282 containerd[1495]: 2025-02-13 15:28:18.563 [INFO][2897] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Feb 13 15:28:18.889282 containerd[1495]: 2025-02-13 15:28:18.579 [INFO][2897] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Feb 13 15:28:18.889282 containerd[1495]: 2025-02-13 15:28:18.579 [INFO][2897] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host '10.0.0.78' Feb 13 15:28:18.889282 containerd[1495]: 2025-02-13 15:28:18.581 [INFO][2897] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.e0f3d8de34de4a75e7f98cf7b029a34ae2d9b2274df9dd70ed10546b15b42701" host="10.0.0.78" Feb 13 15:28:18.889282 containerd[1495]: 2025-02-13 15:28:18.587 [INFO][2897] ipam/ipam.go 372: Looking up existing affinities for host host="10.0.0.78" Feb 13 15:28:18.889282 containerd[1495]: 2025-02-13 15:28:18.591 [INFO][2897] ipam/ipam.go 489: Trying affinity for 192.168.18.64/26 host="10.0.0.78" Feb 13 15:28:18.889282 containerd[1495]: 2025-02-13 15:28:18.593 [INFO][2897] ipam/ipam.go 155: Attempting to load block cidr=192.168.18.64/26 host="10.0.0.78" Feb 13 15:28:18.889282 containerd[1495]: 2025-02-13 15:28:18.596 [INFO][2897] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.18.64/26 host="10.0.0.78" Feb 13 15:28:18.889282 containerd[1495]: 2025-02-13 
15:28:18.596 [INFO][2897] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.18.64/26 handle="k8s-pod-network.e0f3d8de34de4a75e7f98cf7b029a34ae2d9b2274df9dd70ed10546b15b42701" host="10.0.0.78" Feb 13 15:28:18.889282 containerd[1495]: 2025-02-13 15:28:18.636 [INFO][2897] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.e0f3d8de34de4a75e7f98cf7b029a34ae2d9b2274df9dd70ed10546b15b42701 Feb 13 15:28:18.889282 containerd[1495]: 2025-02-13 15:28:18.655 [INFO][2897] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.18.64/26 handle="k8s-pod-network.e0f3d8de34de4a75e7f98cf7b029a34ae2d9b2274df9dd70ed10546b15b42701" host="10.0.0.78" Feb 13 15:28:18.889282 containerd[1495]: 2025-02-13 15:28:18.663 [INFO][2897] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.18.65/26] block=192.168.18.64/26 handle="k8s-pod-network.e0f3d8de34de4a75e7f98cf7b029a34ae2d9b2274df9dd70ed10546b15b42701" host="10.0.0.78" Feb 13 15:28:18.889282 containerd[1495]: 2025-02-13 15:28:18.663 [INFO][2897] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.18.65/26] handle="k8s-pod-network.e0f3d8de34de4a75e7f98cf7b029a34ae2d9b2274df9dd70ed10546b15b42701" host="10.0.0.78" Feb 13 15:28:18.889282 containerd[1495]: 2025-02-13 15:28:18.663 [INFO][2897] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Feb 13 15:28:18.889282 containerd[1495]: 2025-02-13 15:28:18.663 [INFO][2897] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.18.65/26] IPv6=[] ContainerID="e0f3d8de34de4a75e7f98cf7b029a34ae2d9b2274df9dd70ed10546b15b42701" HandleID="k8s-pod-network.e0f3d8de34de4a75e7f98cf7b029a34ae2d9b2274df9dd70ed10546b15b42701" Workload="10.0.0.78-k8s-nginx--deployment--6d5f899847--qqssm-eth0" Feb 13 15:28:18.889945 containerd[1495]: 2025-02-13 15:28:18.667 [INFO][2867] cni-plugin/k8s.go 386: Populated endpoint ContainerID="e0f3d8de34de4a75e7f98cf7b029a34ae2d9b2274df9dd70ed10546b15b42701" Namespace="default" Pod="nginx-deployment-6d5f899847-qqssm" WorkloadEndpoint="10.0.0.78-k8s-nginx--deployment--6d5f899847--qqssm-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"10.0.0.78-k8s-nginx--deployment--6d5f899847--qqssm-eth0", GenerateName:"nginx-deployment-6d5f899847-", Namespace:"default", SelfLink:"", UID:"7bf5ff02-8e42-4454-9542-829060b4d158", ResourceVersion:"1048", Generation:0, CreationTimestamp:time.Date(2025, time.February, 13, 15, 28, 10, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app":"nginx", "pod-template-hash":"6d5f899847", "projectcalico.org/namespace":"default", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"default"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"10.0.0.78", ContainerID:"", Pod:"nginx-deployment-6d5f899847-qqssm", Endpoint:"eth0", ServiceAccountName:"default", IPNetworks:[]string{"192.168.18.65/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.default", "ksa.default.default"}, InterfaceName:"califd10d3992e9", MAC:"", 
Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Feb 13 15:28:18.889945 containerd[1495]: 2025-02-13 15:28:18.667 [INFO][2867] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.18.65/32] ContainerID="e0f3d8de34de4a75e7f98cf7b029a34ae2d9b2274df9dd70ed10546b15b42701" Namespace="default" Pod="nginx-deployment-6d5f899847-qqssm" WorkloadEndpoint="10.0.0.78-k8s-nginx--deployment--6d5f899847--qqssm-eth0" Feb 13 15:28:18.889945 containerd[1495]: 2025-02-13 15:28:18.667 [INFO][2867] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to califd10d3992e9 ContainerID="e0f3d8de34de4a75e7f98cf7b029a34ae2d9b2274df9dd70ed10546b15b42701" Namespace="default" Pod="nginx-deployment-6d5f899847-qqssm" WorkloadEndpoint="10.0.0.78-k8s-nginx--deployment--6d5f899847--qqssm-eth0" Feb 13 15:28:18.889945 containerd[1495]: 2025-02-13 15:28:18.670 [INFO][2867] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="e0f3d8de34de4a75e7f98cf7b029a34ae2d9b2274df9dd70ed10546b15b42701" Namespace="default" Pod="nginx-deployment-6d5f899847-qqssm" WorkloadEndpoint="10.0.0.78-k8s-nginx--deployment--6d5f899847--qqssm-eth0" Feb 13 15:28:18.889945 containerd[1495]: 2025-02-13 15:28:18.670 [INFO][2867] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="e0f3d8de34de4a75e7f98cf7b029a34ae2d9b2274df9dd70ed10546b15b42701" Namespace="default" Pod="nginx-deployment-6d5f899847-qqssm" WorkloadEndpoint="10.0.0.78-k8s-nginx--deployment--6d5f899847--qqssm-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"10.0.0.78-k8s-nginx--deployment--6d5f899847--qqssm-eth0", GenerateName:"nginx-deployment-6d5f899847-", Namespace:"default", SelfLink:"", UID:"7bf5ff02-8e42-4454-9542-829060b4d158", ResourceVersion:"1048", Generation:0, CreationTimestamp:time.Date(2025, time.February, 13, 15, 28, 10, 0, time.Local), 
DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app":"nginx", "pod-template-hash":"6d5f899847", "projectcalico.org/namespace":"default", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"default"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"10.0.0.78", ContainerID:"e0f3d8de34de4a75e7f98cf7b029a34ae2d9b2274df9dd70ed10546b15b42701", Pod:"nginx-deployment-6d5f899847-qqssm", Endpoint:"eth0", ServiceAccountName:"default", IPNetworks:[]string{"192.168.18.65/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.default", "ksa.default.default"}, InterfaceName:"califd10d3992e9", MAC:"0a:29:2b:a3:49:9c", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Feb 13 15:28:18.889945 containerd[1495]: 2025-02-13 15:28:18.886 [INFO][2867] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="e0f3d8de34de4a75e7f98cf7b029a34ae2d9b2274df9dd70ed10546b15b42701" Namespace="default" Pod="nginx-deployment-6d5f899847-qqssm" WorkloadEndpoint="10.0.0.78-k8s-nginx--deployment--6d5f899847--qqssm-eth0" Feb 13 15:28:18.927971 kubelet[1821]: E0213 15:28:18.927882 1821 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 15:28:19.296292 containerd[1495]: time="2025-02-13T15:28:19.296079759Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 13 15:28:19.296292 containerd[1495]: time="2025-02-13T15:28:19.296133130Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 13 15:28:19.296292 containerd[1495]: time="2025-02-13T15:28:19.296153600Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 15:28:19.296619 containerd[1495]: time="2025-02-13T15:28:19.296256325Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 15:28:19.302817 kubelet[1821]: E0213 15:28:19.302787 1821 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 15:28:19.323037 systemd[1]: Started cri-containerd-e0f3d8de34de4a75e7f98cf7b029a34ae2d9b2274df9dd70ed10546b15b42701.scope - libcontainer container e0f3d8de34de4a75e7f98cf7b029a34ae2d9b2274df9dd70ed10546b15b42701. Feb 13 15:28:19.338221 systemd-resolved[1332]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Feb 13 15:28:19.375670 containerd[1495]: time="2025-02-13T15:28:19.375627328Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-6d5f899847-qqssm,Uid:7bf5ff02-8e42-4454-9542-829060b4d158,Namespace:default,Attempt:6,} returns sandbox id \"e0f3d8de34de4a75e7f98cf7b029a34ae2d9b2274df9dd70ed10546b15b42701\"" Feb 13 15:28:19.928848 kubelet[1821]: E0213 15:28:19.928795 1821 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 15:28:20.520934 systemd-networkd[1413]: calie303a2ab05f: Gained IPv6LL Feb 13 15:28:20.649023 systemd-networkd[1413]: califd10d3992e9: Gained IPv6LL Feb 13 15:28:20.952879 kubelet[1821]: E0213 15:28:20.952599 1821 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 15:28:21.168769 kernel: bpftool[3173]: memfd_create() called without 
MFD_EXEC or MFD_NOEXEC_SEAL set Feb 13 15:28:21.355844 containerd[1495]: time="2025-02-13T15:28:21.355720646Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 15:28:21.356760 containerd[1495]: time="2025-02-13T15:28:21.356700096Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.29.1: active requests=0, bytes read=7902632" Feb 13 15:28:21.357999 containerd[1495]: time="2025-02-13T15:28:21.357957604Z" level=info msg="ImageCreate event name:\"sha256:bda8c42e04758c4f061339e213f50ccdc7502c4176fbf631aa12357e62b63540\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 15:28:21.360211 containerd[1495]: time="2025-02-13T15:28:21.360182009Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi@sha256:eaa7e01fb16b603c155a67b81f16992281db7f831684c7b2081d3434587a7ff3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 15:28:21.361039 containerd[1495]: time="2025-02-13T15:28:21.361000393Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/csi:v3.29.1\" with image id \"sha256:bda8c42e04758c4f061339e213f50ccdc7502c4176fbf631aa12357e62b63540\", repo tag \"ghcr.io/flatcar/calico/csi:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/csi@sha256:eaa7e01fb16b603c155a67b81f16992281db7f831684c7b2081d3434587a7ff3\", size \"9395716\" in 2.54204533s" Feb 13 15:28:21.361081 containerd[1495]: time="2025-02-13T15:28:21.361036432Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.29.1\" returns image reference \"sha256:bda8c42e04758c4f061339e213f50ccdc7502c4176fbf631aa12357e62b63540\"" Feb 13 15:28:21.361812 containerd[1495]: time="2025-02-13T15:28:21.361786937Z" level=info msg="PullImage \"ghcr.io/flatcar/nginx:latest\"" Feb 13 15:28:21.363129 containerd[1495]: time="2025-02-13T15:28:21.363101043Z" level=info msg="CreateContainer within sandbox \"d0344d3845772487d855b4758734b1d6bed207f54882e7398f6b11ca69fe1bcd\" for 
container &ContainerMetadata{Name:calico-csi,Attempt:0,}" Feb 13 15:28:21.386099 containerd[1495]: time="2025-02-13T15:28:21.386033366Z" level=info msg="CreateContainer within sandbox \"d0344d3845772487d855b4758734b1d6bed207f54882e7398f6b11ca69fe1bcd\" for &ContainerMetadata{Name:calico-csi,Attempt:0,} returns container id \"93726e264de4ae515f0cd19e318ee167fa624d429b707cd53d7d97de521beddf\"" Feb 13 15:28:21.386712 containerd[1495]: time="2025-02-13T15:28:21.386673563Z" level=info msg="StartContainer for \"93726e264de4ae515f0cd19e318ee167fa624d429b707cd53d7d97de521beddf\"" Feb 13 15:28:21.458711 systemd-networkd[1413]: vxlan.calico: Link UP Feb 13 15:28:21.458724 systemd-networkd[1413]: vxlan.calico: Gained carrier Feb 13 15:28:21.474902 systemd[1]: Started cri-containerd-93726e264de4ae515f0cd19e318ee167fa624d429b707cd53d7d97de521beddf.scope - libcontainer container 93726e264de4ae515f0cd19e318ee167fa624d429b707cd53d7d97de521beddf. Feb 13 15:28:21.513460 containerd[1495]: time="2025-02-13T15:28:21.513413679Z" level=info msg="StartContainer for \"93726e264de4ae515f0cd19e318ee167fa624d429b707cd53d7d97de521beddf\" returns successfully" Feb 13 15:28:21.953375 kubelet[1821]: E0213 15:28:21.953321 1821 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 15:28:22.568902 systemd-networkd[1413]: vxlan.calico: Gained IPv6LL Feb 13 15:28:22.954172 kubelet[1821]: E0213 15:28:22.954117 1821 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 15:28:23.864435 update_engine[1483]: I20250213 15:28:23.864297 1483 update_attempter.cc:509] Updating boot flags... 
Feb 13 15:28:23.915780 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 37 scanned by (udev-worker) (2909) Feb 13 15:28:23.955775 kubelet[1821]: E0213 15:28:23.954912 1821 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 15:28:23.964182 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 37 scanned by (udev-worker) (2909) Feb 13 15:28:24.029806 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 37 scanned by (udev-worker) (2909) Feb 13 15:28:24.955164 kubelet[1821]: E0213 15:28:24.955079 1821 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 15:28:25.875320 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3850601940.mount: Deactivated successfully. Feb 13 15:28:25.956216 kubelet[1821]: E0213 15:28:25.956155 1821 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 15:28:26.957081 kubelet[1821]: E0213 15:28:26.957016 1821 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 15:28:27.957853 kubelet[1821]: E0213 15:28:27.957809 1821 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 15:28:28.195394 containerd[1495]: time="2025-02-13T15:28:28.195331134Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/nginx:latest\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 15:28:28.216607 containerd[1495]: time="2025-02-13T15:28:28.216426125Z" level=info msg="stop pulling image ghcr.io/flatcar/nginx:latest: active requests=0, bytes read=73054493" Feb 13 15:28:28.234953 containerd[1495]: time="2025-02-13T15:28:28.234912665Z" level=info msg="ImageCreate event 
name:\"sha256:fe94eb5f0c9c8d0ca277aa8cd5940f1faf5970175bf373932babc578545deda8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 15:28:28.250557 containerd[1495]: time="2025-02-13T15:28:28.250525543Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/nginx@sha256:d9bc3da999da9f147f1277c7b18292486847e8f39f95fcf81d914d0c22815faf\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 15:28:28.251505 containerd[1495]: time="2025-02-13T15:28:28.251437526Z" level=info msg="Pulled image \"ghcr.io/flatcar/nginx:latest\" with image id \"sha256:fe94eb5f0c9c8d0ca277aa8cd5940f1faf5970175bf373932babc578545deda8\", repo tag \"ghcr.io/flatcar/nginx:latest\", repo digest \"ghcr.io/flatcar/nginx@sha256:d9bc3da999da9f147f1277c7b18292486847e8f39f95fcf81d914d0c22815faf\", size \"73054371\" in 6.88960871s" Feb 13 15:28:28.251505 containerd[1495]: time="2025-02-13T15:28:28.251468966Z" level=info msg="PullImage \"ghcr.io/flatcar/nginx:latest\" returns image reference \"sha256:fe94eb5f0c9c8d0ca277aa8cd5940f1faf5970175bf373932babc578545deda8\"" Feb 13 15:28:28.252515 containerd[1495]: time="2025-02-13T15:28:28.252387001Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\"" Feb 13 15:28:28.253113 containerd[1495]: time="2025-02-13T15:28:28.253078498Z" level=info msg="CreateContainer within sandbox \"e0f3d8de34de4a75e7f98cf7b029a34ae2d9b2274df9dd70ed10546b15b42701\" for container &ContainerMetadata{Name:nginx,Attempt:0,}" Feb 13 15:28:28.403118 containerd[1495]: time="2025-02-13T15:28:28.403055877Z" level=info msg="CreateContainer within sandbox \"e0f3d8de34de4a75e7f98cf7b029a34ae2d9b2274df9dd70ed10546b15b42701\" for &ContainerMetadata{Name:nginx,Attempt:0,} returns container id \"d49908025a6af978424e489440930881d144cea8bd3c57fa0f0aadee82d63024\"" Feb 13 15:28:28.403697 containerd[1495]: time="2025-02-13T15:28:28.403673465Z" level=info msg="StartContainer for 
\"d49908025a6af978424e489440930881d144cea8bd3c57fa0f0aadee82d63024\"" Feb 13 15:28:28.486989 systemd[1]: Started cri-containerd-d49908025a6af978424e489440930881d144cea8bd3c57fa0f0aadee82d63024.scope - libcontainer container d49908025a6af978424e489440930881d144cea8bd3c57fa0f0aadee82d63024. Feb 13 15:28:28.958395 kubelet[1821]: E0213 15:28:28.958344 1821 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 15:28:29.921062 containerd[1495]: time="2025-02-13T15:28:29.921008883Z" level=info msg="StartContainer for \"d49908025a6af978424e489440930881d144cea8bd3c57fa0f0aadee82d63024\" returns successfully" Feb 13 15:28:29.959394 kubelet[1821]: E0213 15:28:29.959351 1821 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 15:28:30.943132 kubelet[1821]: I0213 15:28:30.943064 1821 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="default/nginx-deployment-6d5f899847-qqssm" podStartSLOduration=12.06830984 podStartE2EDuration="20.943005364s" podCreationTimestamp="2025-02-13 15:28:10 +0000 UTC" firstStartedPulling="2025-02-13 15:28:19.37699473 +0000 UTC m=+26.837116245" lastFinishedPulling="2025-02-13 15:28:28.251690254 +0000 UTC m=+35.711811769" observedRunningTime="2025-02-13 15:28:30.942795558 +0000 UTC m=+38.402917073" watchObservedRunningTime="2025-02-13 15:28:30.943005364 +0000 UTC m=+38.403126880" Feb 13 15:28:30.960190 kubelet[1821]: E0213 15:28:30.960123 1821 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 15:28:31.715567 kubelet[1821]: E0213 15:28:31.715527 1821 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 15:28:31.960299 kubelet[1821]: E0213 15:28:31.960251 1821 file_linux.go:61] "Unable to read config 
path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 15:28:32.908619 kubelet[1821]: E0213 15:28:32.908567 1821 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 15:28:32.961281 kubelet[1821]: E0213 15:28:32.961233 1821 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 15:28:33.154869 containerd[1495]: time="2025-02-13T15:28:33.154802306Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 15:28:33.168327 containerd[1495]: time="2025-02-13T15:28:33.168159903Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1: active requests=0, bytes read=10501081" Feb 13 15:28:33.183874 containerd[1495]: time="2025-02-13T15:28:33.183816678Z" level=info msg="ImageCreate event name:\"sha256:8b7d18f262d5cf6a6343578ad0db68a140c4c9989d9e02c58c27cb5d2c70320f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 15:28:33.202130 containerd[1495]: time="2025-02-13T15:28:33.202088234Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar@sha256:a338da9488cbaa83c78457c3d7354d84149969c0480e88dd768e036632ff5b76\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 15:28:33.202644 containerd[1495]: time="2025-02-13T15:28:33.202605420Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\" with image id \"sha256:8b7d18f262d5cf6a6343578ad0db68a140c4c9989d9e02c58c27cb5d2c70320f\", repo tag \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/node-driver-registrar@sha256:a338da9488cbaa83c78457c3d7354d84149969c0480e88dd768e036632ff5b76\", size \"11994117\" in 4.950190757s" Feb 13 15:28:33.202644 containerd[1495]: time="2025-02-13T15:28:33.202635206Z" 
level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\" returns image reference \"sha256:8b7d18f262d5cf6a6343578ad0db68a140c4c9989d9e02c58c27cb5d2c70320f\"" Feb 13 15:28:33.204361 containerd[1495]: time="2025-02-13T15:28:33.204322569Z" level=info msg="CreateContainer within sandbox \"d0344d3845772487d855b4758734b1d6bed207f54882e7398f6b11ca69fe1bcd\" for container &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,}" Feb 13 15:28:33.393639 containerd[1495]: time="2025-02-13T15:28:33.393586608Z" level=info msg="CreateContainer within sandbox \"d0344d3845772487d855b4758734b1d6bed207f54882e7398f6b11ca69fe1bcd\" for &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,} returns container id \"65527e2697df9717d99a3b73543d629ade22a2476fe32131021b1f1e9aad313a\"" Feb 13 15:28:33.394157 containerd[1495]: time="2025-02-13T15:28:33.394111729Z" level=info msg="StartContainer for \"65527e2697df9717d99a3b73543d629ade22a2476fe32131021b1f1e9aad313a\"" Feb 13 15:28:33.424866 systemd[1]: Started cri-containerd-65527e2697df9717d99a3b73543d629ade22a2476fe32131021b1f1e9aad313a.scope - libcontainer container 65527e2697df9717d99a3b73543d629ade22a2476fe32131021b1f1e9aad313a. 
Feb 13 15:28:33.581161 containerd[1495]: time="2025-02-13T15:28:33.581109202Z" level=info msg="StartContainer for \"65527e2697df9717d99a3b73543d629ade22a2476fe32131021b1f1e9aad313a\" returns successfully" Feb 13 15:28:33.961963 kubelet[1821]: E0213 15:28:33.961798 1821 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 15:28:33.979110 kubelet[1821]: I0213 15:28:33.979078 1821 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="calico-system/csi-node-driver-5c8kb" podStartSLOduration=26.594674253 podStartE2EDuration="40.979035212s" podCreationTimestamp="2025-02-13 15:27:53 +0000 UTC" firstStartedPulling="2025-02-13 15:28:18.818751186 +0000 UTC m=+26.278872701" lastFinishedPulling="2025-02-13 15:28:33.203112145 +0000 UTC m=+40.663233660" observedRunningTime="2025-02-13 15:28:33.978883687 +0000 UTC m=+41.439005202" watchObservedRunningTime="2025-02-13 15:28:33.979035212 +0000 UTC m=+41.439156727" Feb 13 15:28:34.213560 kubelet[1821]: I0213 15:28:34.213453 1821 csi_plugin.go:99] kubernetes.io/csi: Trying to validate a new CSI Driver with name: csi.tigera.io endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock versions: 1.0.0 Feb 13 15:28:34.213560 kubelet[1821]: I0213 15:28:34.213488 1821 csi_plugin.go:112] kubernetes.io/csi: Register new plugin with name: csi.tigera.io at endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock Feb 13 15:28:34.632352 kubelet[1821]: I0213 15:28:34.632209 1821 topology_manager.go:215] "Topology Admit Handler" podUID="b9467974-907d-452a-98ae-4363dcbb05a7" podNamespace="default" podName="nfs-server-provisioner-0" Feb 13 15:28:34.637898 systemd[1]: Created slice kubepods-besteffort-podb9467974_907d_452a_98ae_4363dcbb05a7.slice - libcontainer container kubepods-besteffort-podb9467974_907d_452a_98ae_4363dcbb05a7.slice. 
Feb 13 15:28:34.776908 kubelet[1821]: I0213 15:28:34.776860 1821 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sv78q\" (UniqueName: \"kubernetes.io/projected/b9467974-907d-452a-98ae-4363dcbb05a7-kube-api-access-sv78q\") pod \"nfs-server-provisioner-0\" (UID: \"b9467974-907d-452a-98ae-4363dcbb05a7\") " pod="default/nfs-server-provisioner-0"
Feb 13 15:28:34.777053 kubelet[1821]: I0213 15:28:34.776926 1821 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"data\" (UniqueName: \"kubernetes.io/empty-dir/b9467974-907d-452a-98ae-4363dcbb05a7-data\") pod \"nfs-server-provisioner-0\" (UID: \"b9467974-907d-452a-98ae-4363dcbb05a7\") " pod="default/nfs-server-provisioner-0"
Feb 13 15:28:34.941574 containerd[1495]: time="2025-02-13T15:28:34.941538460Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nfs-server-provisioner-0,Uid:b9467974-907d-452a-98ae-4363dcbb05a7,Namespace:default,Attempt:0,}"
Feb 13 15:28:34.962754 kubelet[1821]: E0213 15:28:34.962711 1821 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 15:28:35.222965 systemd-networkd[1413]: cali60e51b789ff: Link UP
Feb 13 15:28:35.223844 systemd-networkd[1413]: cali60e51b789ff: Gained carrier
Feb 13 15:28:35.243685 containerd[1495]: 2025-02-13 15:28:35.145 [INFO][3469] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {10.0.0.78-k8s-nfs--server--provisioner--0-eth0 nfs-server-provisioner- default b9467974-907d-452a-98ae-4363dcbb05a7 1201 0 2025-02-13 15:28:34 +0000 UTC map[app:nfs-server-provisioner apps.kubernetes.io/pod-index:0 chart:nfs-server-provisioner-1.8.0 controller-revision-hash:nfs-server-provisioner-d5cbb7f57 heritage:Helm projectcalico.org/namespace:default projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:nfs-server-provisioner release:nfs-server-provisioner statefulset.kubernetes.io/pod-name:nfs-server-provisioner-0] map[] [] [] []} {k8s 10.0.0.78 nfs-server-provisioner-0 eth0 nfs-server-provisioner [] [] [kns.default ksa.default.nfs-server-provisioner] cali60e51b789ff [{nfs TCP 2049 0 } {nfs-udp UDP 2049 0 } {nlockmgr TCP 32803 0 } {nlockmgr-udp UDP 32803 0 } {mountd TCP 20048 0 } {mountd-udp UDP 20048 0 } {rquotad TCP 875 0 } {rquotad-udp UDP 875 0 } {rpcbind TCP 111 0 } {rpcbind-udp UDP 111 0 } {statd TCP 662 0 } {statd-udp UDP 662 0 }] []}} ContainerID="1566467ed4a317be568b3e248aa3e21984b9b471b706ba6b1260d7df8c2c3239" Namespace="default" Pod="nfs-server-provisioner-0" WorkloadEndpoint="10.0.0.78-k8s-nfs--server--provisioner--0-"
Feb 13 15:28:35.243685 containerd[1495]: 2025-02-13 15:28:35.145 [INFO][3469] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="1566467ed4a317be568b3e248aa3e21984b9b471b706ba6b1260d7df8c2c3239" Namespace="default" Pod="nfs-server-provisioner-0" WorkloadEndpoint="10.0.0.78-k8s-nfs--server--provisioner--0-eth0"
Feb 13 15:28:35.243685 containerd[1495]: 2025-02-13 15:28:35.171 [INFO][3481] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="1566467ed4a317be568b3e248aa3e21984b9b471b706ba6b1260d7df8c2c3239" HandleID="k8s-pod-network.1566467ed4a317be568b3e248aa3e21984b9b471b706ba6b1260d7df8c2c3239" Workload="10.0.0.78-k8s-nfs--server--provisioner--0-eth0"
Feb 13 15:28:35.243685 containerd[1495]: 2025-02-13 15:28:35.179 [INFO][3481] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="1566467ed4a317be568b3e248aa3e21984b9b471b706ba6b1260d7df8c2c3239" HandleID="k8s-pod-network.1566467ed4a317be568b3e248aa3e21984b9b471b706ba6b1260d7df8c2c3239" Workload="10.0.0.78-k8s-nfs--server--provisioner--0-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0004077a0), Attrs:map[string]string{"namespace":"default", "node":"10.0.0.78", "pod":"nfs-server-provisioner-0", "timestamp":"2025-02-13 15:28:35.171862685 +0000 UTC"}, Hostname:"10.0.0.78", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"}
Feb 13 15:28:35.243685 containerd[1495]: 2025-02-13 15:28:35.179 [INFO][3481] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock.
Feb 13 15:28:35.243685 containerd[1495]: 2025-02-13 15:28:35.179 [INFO][3481] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock.
Feb 13 15:28:35.243685 containerd[1495]: 2025-02-13 15:28:35.179 [INFO][3481] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host '10.0.0.78'
Feb 13 15:28:35.243685 containerd[1495]: 2025-02-13 15:28:35.181 [INFO][3481] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.1566467ed4a317be568b3e248aa3e21984b9b471b706ba6b1260d7df8c2c3239" host="10.0.0.78"
Feb 13 15:28:35.243685 containerd[1495]: 2025-02-13 15:28:35.184 [INFO][3481] ipam/ipam.go 372: Looking up existing affinities for host host="10.0.0.78"
Feb 13 15:28:35.243685 containerd[1495]: 2025-02-13 15:28:35.188 [INFO][3481] ipam/ipam.go 489: Trying affinity for 192.168.18.64/26 host="10.0.0.78"
Feb 13 15:28:35.243685 containerd[1495]: 2025-02-13 15:28:35.190 [INFO][3481] ipam/ipam.go 155: Attempting to load block cidr=192.168.18.64/26 host="10.0.0.78"
Feb 13 15:28:35.243685 containerd[1495]: 2025-02-13 15:28:35.192 [INFO][3481] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.18.64/26 host="10.0.0.78"
Feb 13 15:28:35.243685 containerd[1495]: 2025-02-13 15:28:35.192 [INFO][3481] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.18.64/26 handle="k8s-pod-network.1566467ed4a317be568b3e248aa3e21984b9b471b706ba6b1260d7df8c2c3239" host="10.0.0.78"
Feb 13 15:28:35.243685 containerd[1495]: 2025-02-13 15:28:35.193 [INFO][3481] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.1566467ed4a317be568b3e248aa3e21984b9b471b706ba6b1260d7df8c2c3239
Feb 13 15:28:35.243685 containerd[1495]: 2025-02-13 15:28:35.203 [INFO][3481] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.18.64/26 handle="k8s-pod-network.1566467ed4a317be568b3e248aa3e21984b9b471b706ba6b1260d7df8c2c3239" host="10.0.0.78"
Feb 13 15:28:35.243685 containerd[1495]: 2025-02-13 15:28:35.217 [INFO][3481] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.18.67/26] block=192.168.18.64/26 handle="k8s-pod-network.1566467ed4a317be568b3e248aa3e21984b9b471b706ba6b1260d7df8c2c3239" host="10.0.0.78"
Feb 13 15:28:35.243685 containerd[1495]: 2025-02-13 15:28:35.217 [INFO][3481] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.18.67/26] handle="k8s-pod-network.1566467ed4a317be568b3e248aa3e21984b9b471b706ba6b1260d7df8c2c3239" host="10.0.0.78"
Feb 13 15:28:35.243685 containerd[1495]: 2025-02-13 15:28:35.217 [INFO][3481] ipam/ipam_plugin.go 374: Released host-wide IPAM lock.
Feb 13 15:28:35.243685 containerd[1495]: 2025-02-13 15:28:35.217 [INFO][3481] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.18.67/26] IPv6=[] ContainerID="1566467ed4a317be568b3e248aa3e21984b9b471b706ba6b1260d7df8c2c3239" HandleID="k8s-pod-network.1566467ed4a317be568b3e248aa3e21984b9b471b706ba6b1260d7df8c2c3239" Workload="10.0.0.78-k8s-nfs--server--provisioner--0-eth0"
Feb 13 15:28:35.244511 containerd[1495]: 2025-02-13 15:28:35.219 [INFO][3469] cni-plugin/k8s.go 386: Populated endpoint ContainerID="1566467ed4a317be568b3e248aa3e21984b9b471b706ba6b1260d7df8c2c3239" Namespace="default" Pod="nfs-server-provisioner-0" WorkloadEndpoint="10.0.0.78-k8s-nfs--server--provisioner--0-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"10.0.0.78-k8s-nfs--server--provisioner--0-eth0", GenerateName:"nfs-server-provisioner-", Namespace:"default", SelfLink:"", UID:"b9467974-907d-452a-98ae-4363dcbb05a7", ResourceVersion:"1201", Generation:0, CreationTimestamp:time.Date(2025, time.February, 13, 15, 28, 34, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app":"nfs-server-provisioner", "apps.kubernetes.io/pod-index":"0", "chart":"nfs-server-provisioner-1.8.0", "controller-revision-hash":"nfs-server-provisioner-d5cbb7f57", "heritage":"Helm", "projectcalico.org/namespace":"default", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"nfs-server-provisioner", "release":"nfs-server-provisioner", "statefulset.kubernetes.io/pod-name":"nfs-server-provisioner-0"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"10.0.0.78", ContainerID:"", Pod:"nfs-server-provisioner-0", Endpoint:"eth0", ServiceAccountName:"nfs-server-provisioner", IPNetworks:[]string{"192.168.18.67/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.default", "ksa.default.nfs-server-provisioner"}, InterfaceName:"cali60e51b789ff", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"nfs", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x801, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"nfs-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x801, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"nlockmgr", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x8023, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"nlockmgr-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x8023, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"mountd", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x4e50, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"mountd-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x4e50, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"rquotad", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x36b, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"rquotad-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x36b, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"rpcbind", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x6f, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"rpcbind-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x6f, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"statd", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x296, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"statd-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x296, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}}
Feb 13 15:28:35.244511 containerd[1495]: 2025-02-13 15:28:35.220 [INFO][3469] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.18.67/32] ContainerID="1566467ed4a317be568b3e248aa3e21984b9b471b706ba6b1260d7df8c2c3239" Namespace="default" Pod="nfs-server-provisioner-0" WorkloadEndpoint="10.0.0.78-k8s-nfs--server--provisioner--0-eth0"
Feb 13 15:28:35.244511 containerd[1495]: 2025-02-13 15:28:35.220 [INFO][3469] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali60e51b789ff ContainerID="1566467ed4a317be568b3e248aa3e21984b9b471b706ba6b1260d7df8c2c3239" Namespace="default" Pod="nfs-server-provisioner-0" WorkloadEndpoint="10.0.0.78-k8s-nfs--server--provisioner--0-eth0"
Feb 13 15:28:35.244511 containerd[1495]: 2025-02-13 15:28:35.223 [INFO][3469] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="1566467ed4a317be568b3e248aa3e21984b9b471b706ba6b1260d7df8c2c3239" Namespace="default" Pod="nfs-server-provisioner-0" WorkloadEndpoint="10.0.0.78-k8s-nfs--server--provisioner--0-eth0"
Feb 13 15:28:35.244734 containerd[1495]: 2025-02-13 15:28:35.224 [INFO][3469] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="1566467ed4a317be568b3e248aa3e21984b9b471b706ba6b1260d7df8c2c3239" Namespace="default" Pod="nfs-server-provisioner-0" WorkloadEndpoint="10.0.0.78-k8s-nfs--server--provisioner--0-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"10.0.0.78-k8s-nfs--server--provisioner--0-eth0", GenerateName:"nfs-server-provisioner-", Namespace:"default", SelfLink:"", UID:"b9467974-907d-452a-98ae-4363dcbb05a7", ResourceVersion:"1201", Generation:0, CreationTimestamp:time.Date(2025, time.February, 13, 15, 28, 34, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app":"nfs-server-provisioner", "apps.kubernetes.io/pod-index":"0", "chart":"nfs-server-provisioner-1.8.0", "controller-revision-hash":"nfs-server-provisioner-d5cbb7f57", "heritage":"Helm", "projectcalico.org/namespace":"default", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"nfs-server-provisioner", "release":"nfs-server-provisioner", "statefulset.kubernetes.io/pod-name":"nfs-server-provisioner-0"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"10.0.0.78", ContainerID:"1566467ed4a317be568b3e248aa3e21984b9b471b706ba6b1260d7df8c2c3239", Pod:"nfs-server-provisioner-0", Endpoint:"eth0", ServiceAccountName:"nfs-server-provisioner", IPNetworks:[]string{"192.168.18.67/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.default", "ksa.default.nfs-server-provisioner"}, InterfaceName:"cali60e51b789ff", MAC:"f6:43:9b:58:af:b2", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"nfs", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x801, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"nfs-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x801, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"nlockmgr", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x8023, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"nlockmgr-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x8023, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"mountd", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x4e50, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"mountd-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x4e50, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"rquotad", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x36b, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"rquotad-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x36b, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"rpcbind", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x6f, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"rpcbind-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x6f, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"statd", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x296, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"statd-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x296, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}}
Feb 13 15:28:35.244734 containerd[1495]: 2025-02-13 15:28:35.241 [INFO][3469] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="1566467ed4a317be568b3e248aa3e21984b9b471b706ba6b1260d7df8c2c3239" Namespace="default" Pod="nfs-server-provisioner-0" WorkloadEndpoint="10.0.0.78-k8s-nfs--server--provisioner--0-eth0"
Feb 13 15:28:35.277079 containerd[1495]: time="2025-02-13T15:28:35.276496051Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Feb 13 15:28:35.277079 containerd[1495]: time="2025-02-13T15:28:35.277061408Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Feb 13 15:28:35.277079 containerd[1495]: time="2025-02-13T15:28:35.277075213Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Feb 13 15:28:35.277250 containerd[1495]: time="2025-02-13T15:28:35.277155134Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Feb 13 15:28:35.300888 systemd[1]: Started cri-containerd-1566467ed4a317be568b3e248aa3e21984b9b471b706ba6b1260d7df8c2c3239.scope - libcontainer container 1566467ed4a317be568b3e248aa3e21984b9b471b706ba6b1260d7df8c2c3239.
Feb 13 15:28:35.312571 systemd-resolved[1332]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address
Feb 13 15:28:35.342071 containerd[1495]: time="2025-02-13T15:28:35.341983467Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nfs-server-provisioner-0,Uid:b9467974-907d-452a-98ae-4363dcbb05a7,Namespace:default,Attempt:0,} returns sandbox id \"1566467ed4a317be568b3e248aa3e21984b9b471b706ba6b1260d7df8c2c3239\""
Feb 13 15:28:35.343812 containerd[1495]: time="2025-02-13T15:28:35.343749687Z" level=info msg="PullImage \"registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8\""
Feb 13 15:28:35.963767 kubelet[1821]: E0213 15:28:35.963709 1821 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 15:28:36.904952 systemd-networkd[1413]: cali60e51b789ff: Gained IPv6LL
Feb 13 15:28:36.964077 kubelet[1821]: E0213 15:28:36.964023 1821 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 15:28:37.965171 kubelet[1821]: E0213 15:28:37.965132 1821 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 15:28:38.965594 kubelet[1821]: E0213 15:28:38.965526 1821 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 15:28:39.966213 kubelet[1821]: E0213 15:28:39.966167 1821 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 15:28:40.806610 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1880997314.mount: Deactivated successfully.
Feb 13 15:28:40.966884 kubelet[1821]: E0213 15:28:40.966846 1821 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 15:28:41.967869 kubelet[1821]: E0213 15:28:41.967818 1821 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 15:28:42.970436 kubelet[1821]: E0213 15:28:42.970376 1821 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 15:28:43.970701 kubelet[1821]: E0213 15:28:43.970651 1821 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 15:28:44.971391 kubelet[1821]: E0213 15:28:44.971328 1821 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 15:28:45.971639 kubelet[1821]: E0213 15:28:45.971591 1821 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 15:28:46.972030 kubelet[1821]: E0213 15:28:46.971976 1821 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 15:28:47.163918 containerd[1495]: time="2025-02-13T15:28:47.163856705Z" level=info msg="ImageCreate event name:\"registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Feb 13 15:28:47.184566 containerd[1495]: time="2025-02-13T15:28:47.184464605Z" level=info msg="stop pulling image registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8: active requests=0, bytes read=91039406"
Feb 13 15:28:47.229878 containerd[1495]: time="2025-02-13T15:28:47.229706782Z" level=info msg="ImageCreate event name:\"sha256:fd0b16f70b66b72bcb2f91d556fa33eba02729c44ffc5f2c16130e7f9fbed3c4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Feb 13 15:28:47.343149 containerd[1495]: time="2025-02-13T15:28:47.343082186Z" level=info msg="ImageCreate event name:\"registry.k8s.io/sig-storage/nfs-provisioner@sha256:c825f3d5e28bde099bd7a3daace28772d412c9157ad47fa752a9ad0baafc118d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Feb 13 15:28:47.344340 containerd[1495]: time="2025-02-13T15:28:47.344274778Z" level=info msg="Pulled image \"registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8\" with image id \"sha256:fd0b16f70b66b72bcb2f91d556fa33eba02729c44ffc5f2c16130e7f9fbed3c4\", repo tag \"registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8\", repo digest \"registry.k8s.io/sig-storage/nfs-provisioner@sha256:c825f3d5e28bde099bd7a3daace28772d412c9157ad47fa752a9ad0baafc118d\", size \"91036984\" in 12.000481839s"
Feb 13 15:28:47.344340 containerd[1495]: time="2025-02-13T15:28:47.344330101Z" level=info msg="PullImage \"registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8\" returns image reference \"sha256:fd0b16f70b66b72bcb2f91d556fa33eba02729c44ffc5f2c16130e7f9fbed3c4\""
Feb 13 15:28:47.346372 containerd[1495]: time="2025-02-13T15:28:47.346337335Z" level=info msg="CreateContainer within sandbox \"1566467ed4a317be568b3e248aa3e21984b9b471b706ba6b1260d7df8c2c3239\" for container &ContainerMetadata{Name:nfs-server-provisioner,Attempt:0,}"
Feb 13 15:28:47.487342 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3716214980.mount: Deactivated successfully.
Feb 13 15:28:47.668107 containerd[1495]: time="2025-02-13T15:28:47.668048050Z" level=info msg="CreateContainer within sandbox \"1566467ed4a317be568b3e248aa3e21984b9b471b706ba6b1260d7df8c2c3239\" for &ContainerMetadata{Name:nfs-server-provisioner,Attempt:0,} returns container id \"24ea7b9b12058ce56fc7e307348232b4e7bb1c1967c77f983e2186388ed53a00\""
Feb 13 15:28:47.668758 containerd[1495]: time="2025-02-13T15:28:47.668697762Z" level=info msg="StartContainer for \"24ea7b9b12058ce56fc7e307348232b4e7bb1c1967c77f983e2186388ed53a00\""
Feb 13 15:28:47.703880 systemd[1]: Started cri-containerd-24ea7b9b12058ce56fc7e307348232b4e7bb1c1967c77f983e2186388ed53a00.scope - libcontainer container 24ea7b9b12058ce56fc7e307348232b4e7bb1c1967c77f983e2186388ed53a00.
Feb 13 15:28:47.800128 containerd[1495]: time="2025-02-13T15:28:47.799991760Z" level=info msg="StartContainer for \"24ea7b9b12058ce56fc7e307348232b4e7bb1c1967c77f983e2186388ed53a00\" returns successfully"
Feb 13 15:28:47.972668 kubelet[1821]: E0213 15:28:47.972625 1821 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 15:28:47.980467 kubelet[1821]: I0213 15:28:47.980434 1821 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="default/nfs-server-provisioner-0" podStartSLOduration=1.979063925 podStartE2EDuration="13.980390005s" podCreationTimestamp="2025-02-13 15:28:34 +0000 UTC" firstStartedPulling="2025-02-13 15:28:35.343264562 +0000 UTC m=+42.803386077" lastFinishedPulling="2025-02-13 15:28:47.344590642 +0000 UTC m=+54.804712157" observedRunningTime="2025-02-13 15:28:47.980205629 +0000 UTC m=+55.440327144" watchObservedRunningTime="2025-02-13 15:28:47.980390005 +0000 UTC m=+55.440511520"
Feb 13 15:28:48.973353 kubelet[1821]: E0213 15:28:48.973300 1821 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 15:28:49.974054 kubelet[1821]: E0213 15:28:49.973999 1821 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 15:28:50.974370 kubelet[1821]: E0213 15:28:50.974336 1821 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 15:28:51.974803 kubelet[1821]: E0213 15:28:51.974764 1821 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 15:28:52.908696 kubelet[1821]: E0213 15:28:52.908641 1821 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 15:28:52.923015 containerd[1495]: time="2025-02-13T15:28:52.922984330Z" level=info msg="StopPodSandbox for \"f5b3ada3e9fbec16b5777d5d0376c2bb4221c7ce1c622487c02097168a69bc3b\""
Feb 13 15:28:52.923406 containerd[1495]: time="2025-02-13T15:28:52.923096851Z" level=info msg="TearDown network for sandbox \"f5b3ada3e9fbec16b5777d5d0376c2bb4221c7ce1c622487c02097168a69bc3b\" successfully"
Feb 13 15:28:52.923406 containerd[1495]: time="2025-02-13T15:28:52.923106719Z" level=info msg="StopPodSandbox for \"f5b3ada3e9fbec16b5777d5d0376c2bb4221c7ce1c622487c02097168a69bc3b\" returns successfully"
Feb 13 15:28:52.923455 containerd[1495]: time="2025-02-13T15:28:52.923421340Z" level=info msg="RemovePodSandbox for \"f5b3ada3e9fbec16b5777d5d0376c2bb4221c7ce1c622487c02097168a69bc3b\""
Feb 13 15:28:52.923455 containerd[1495]: time="2025-02-13T15:28:52.923444664Z" level=info msg="Forcibly stopping sandbox \"f5b3ada3e9fbec16b5777d5d0376c2bb4221c7ce1c622487c02097168a69bc3b\""
Feb 13 15:28:52.923574 containerd[1495]: time="2025-02-13T15:28:52.923519795Z" level=info msg="TearDown network for sandbox \"f5b3ada3e9fbec16b5777d5d0376c2bb4221c7ce1c622487c02097168a69bc3b\" successfully"
Feb 13 15:28:52.952223 containerd[1495]: time="2025-02-13T15:28:52.952184103Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"f5b3ada3e9fbec16b5777d5d0376c2bb4221c7ce1c622487c02097168a69bc3b\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
Feb 13 15:28:52.952321 containerd[1495]: time="2025-02-13T15:28:52.952236732Z" level=info msg="RemovePodSandbox \"f5b3ada3e9fbec16b5777d5d0376c2bb4221c7ce1c622487c02097168a69bc3b\" returns successfully"
Feb 13 15:28:52.952639 containerd[1495]: time="2025-02-13T15:28:52.952594995Z" level=info msg="StopPodSandbox for \"9c5513243ac88ec95ad476aab76d370a2d275403daac98aaddcc373286cd4510\""
Feb 13 15:28:52.952753 containerd[1495]: time="2025-02-13T15:28:52.952717896Z" level=info msg="TearDown network for sandbox \"9c5513243ac88ec95ad476aab76d370a2d275403daac98aaddcc373286cd4510\" successfully"
Feb 13 15:28:52.952800 containerd[1495]: time="2025-02-13T15:28:52.952734638Z" level=info msg="StopPodSandbox for \"9c5513243ac88ec95ad476aab76d370a2d275403daac98aaddcc373286cd4510\" returns successfully"
Feb 13 15:28:52.953009 containerd[1495]: time="2025-02-13T15:28:52.952986822Z" level=info msg="RemovePodSandbox for \"9c5513243ac88ec95ad476aab76d370a2d275403daac98aaddcc373286cd4510\""
Feb 13 15:28:52.953085 containerd[1495]: time="2025-02-13T15:28:52.953012320Z" level=info msg="Forcibly stopping sandbox \"9c5513243ac88ec95ad476aab76d370a2d275403daac98aaddcc373286cd4510\""
Feb 13 15:28:52.953144 containerd[1495]: time="2025-02-13T15:28:52.953103771Z" level=info msg="TearDown network for sandbox \"9c5513243ac88ec95ad476aab76d370a2d275403daac98aaddcc373286cd4510\" successfully"
Feb 13 15:28:52.975870 kubelet[1821]: E0213 15:28:52.975828 1821 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 15:28:52.977123 containerd[1495]: time="2025-02-13T15:28:52.976885447Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"9c5513243ac88ec95ad476aab76d370a2d275403daac98aaddcc373286cd4510\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
Feb 13 15:28:52.977123 containerd[1495]: time="2025-02-13T15:28:52.976934820Z" level=info msg="RemovePodSandbox \"9c5513243ac88ec95ad476aab76d370a2d275403daac98aaddcc373286cd4510\" returns successfully"
Feb 13 15:28:52.978916 containerd[1495]: time="2025-02-13T15:28:52.978883971Z" level=info msg="StopPodSandbox for \"2b0373d796d71bf2d4e5abfdba53a8d0ab2c3554b8bcfb9af29d6ec95902123a\""
Feb 13 15:28:52.979140 containerd[1495]: time="2025-02-13T15:28:52.979007914Z" level=info msg="TearDown network for sandbox \"2b0373d796d71bf2d4e5abfdba53a8d0ab2c3554b8bcfb9af29d6ec95902123a\" successfully"
Feb 13 15:28:52.979140 containerd[1495]: time="2025-02-13T15:28:52.979028543Z" level=info msg="StopPodSandbox for \"2b0373d796d71bf2d4e5abfdba53a8d0ab2c3554b8bcfb9af29d6ec95902123a\" returns successfully"
Feb 13 15:28:52.979471 containerd[1495]: time="2025-02-13T15:28:52.979449133Z" level=info msg="RemovePodSandbox for \"2b0373d796d71bf2d4e5abfdba53a8d0ab2c3554b8bcfb9af29d6ec95902123a\""
Feb 13 15:28:52.980965 containerd[1495]: time="2025-02-13T15:28:52.979569359Z" level=info msg="Forcibly stopping sandbox \"2b0373d796d71bf2d4e5abfdba53a8d0ab2c3554b8bcfb9af29d6ec95902123a\""
Feb 13 15:28:52.980965 containerd[1495]: time="2025-02-13T15:28:52.979667954Z" level=info msg="TearDown network for sandbox \"2b0373d796d71bf2d4e5abfdba53a8d0ab2c3554b8bcfb9af29d6ec95902123a\" successfully"
Feb 13 15:28:53.063640 containerd[1495]: time="2025-02-13T15:28:53.063590407Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"2b0373d796d71bf2d4e5abfdba53a8d0ab2c3554b8bcfb9af29d6ec95902123a\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
Feb 13 15:28:53.063640 containerd[1495]: time="2025-02-13T15:28:53.063645030Z" level=info msg="RemovePodSandbox \"2b0373d796d71bf2d4e5abfdba53a8d0ab2c3554b8bcfb9af29d6ec95902123a\" returns successfully"
Feb 13 15:28:53.064089 containerd[1495]: time="2025-02-13T15:28:53.064058927Z" level=info msg="StopPodSandbox for \"b96090a6063c85a80c45dad376e348a726b6203c0400de1d14f726ed08dfbcf8\""
Feb 13 15:28:53.064202 containerd[1495]: time="2025-02-13T15:28:53.064170197Z" level=info msg="TearDown network for sandbox \"b96090a6063c85a80c45dad376e348a726b6203c0400de1d14f726ed08dfbcf8\" successfully"
Feb 13 15:28:53.064202 containerd[1495]: time="2025-02-13T15:28:53.064187970Z" level=info msg="StopPodSandbox for \"b96090a6063c85a80c45dad376e348a726b6203c0400de1d14f726ed08dfbcf8\" returns successfully"
Feb 13 15:28:53.064449 containerd[1495]: time="2025-02-13T15:28:53.064416378Z" level=info msg="RemovePodSandbox for \"b96090a6063c85a80c45dad376e348a726b6203c0400de1d14f726ed08dfbcf8\""
Feb 13 15:28:53.064449 containerd[1495]: time="2025-02-13T15:28:53.064441976Z" level=info msg="Forcibly stopping sandbox \"b96090a6063c85a80c45dad376e348a726b6203c0400de1d14f726ed08dfbcf8\""
Feb 13 15:28:53.064563 containerd[1495]: time="2025-02-13T15:28:53.064519072Z" level=info msg="TearDown network for sandbox \"b96090a6063c85a80c45dad376e348a726b6203c0400de1d14f726ed08dfbcf8\" successfully"
Feb 13 15:28:53.129603 containerd[1495]: time="2025-02-13T15:28:53.129554162Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"b96090a6063c85a80c45dad376e348a726b6203c0400de1d14f726ed08dfbcf8\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
Feb 13 15:28:53.129716 containerd[1495]: time="2025-02-13T15:28:53.129618704Z" level=info msg="RemovePodSandbox \"b96090a6063c85a80c45dad376e348a726b6203c0400de1d14f726ed08dfbcf8\" returns successfully"
Feb 13 15:28:53.130060 containerd[1495]: time="2025-02-13T15:28:53.130035857Z" level=info msg="StopPodSandbox for \"90d175e759c1cafafe015d36cec19db50c87999f6cbd087262070f416bcdfb62\""
Feb 13 15:28:53.130178 containerd[1495]: time="2025-02-13T15:28:53.130155391Z" level=info msg="TearDown network for sandbox \"90d175e759c1cafafe015d36cec19db50c87999f6cbd087262070f416bcdfb62\" successfully"
Feb 13 15:28:53.130178 containerd[1495]: time="2025-02-13T15:28:53.130174657Z" level=info msg="StopPodSandbox for \"90d175e759c1cafafe015d36cec19db50c87999f6cbd087262070f416bcdfb62\" returns successfully"
Feb 13 15:28:53.130468 containerd[1495]: time="2025-02-13T15:28:53.130442691Z" level=info msg="RemovePodSandbox for \"90d175e759c1cafafe015d36cec19db50c87999f6cbd087262070f416bcdfb62\""
Feb 13 15:28:53.130546 containerd[1495]: time="2025-02-13T15:28:53.130467988Z" level=info msg="Forcibly stopping sandbox \"90d175e759c1cafafe015d36cec19db50c87999f6cbd087262070f416bcdfb62\""
Feb 13 15:28:53.130597 containerd[1495]: time="2025-02-13T15:28:53.130547038Z" level=info msg="TearDown network for sandbox \"90d175e759c1cafafe015d36cec19db50c87999f6cbd087262070f416bcdfb62\" successfully"
Feb 13 15:28:53.149258 containerd[1495]: time="2025-02-13T15:28:53.149136856Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"90d175e759c1cafafe015d36cec19db50c87999f6cbd087262070f416bcdfb62\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
Feb 13 15:28:53.149258 containerd[1495]: time="2025-02-13T15:28:53.149181289Z" level=info msg="RemovePodSandbox \"90d175e759c1cafafe015d36cec19db50c87999f6cbd087262070f416bcdfb62\" returns successfully"
Feb 13 15:28:53.149509 containerd[1495]: time="2025-02-13T15:28:53.149476203Z" level=info msg="StopPodSandbox for \"f5633c80af761908a5b4b2b32e77b1b33bf9cacc3ec3fd420c765718f3355eee\""
Feb 13 15:28:53.149590 containerd[1495]: time="2025-02-13T15:28:53.149569438Z" level=info msg="TearDown network for sandbox \"f5633c80af761908a5b4b2b32e77b1b33bf9cacc3ec3fd420c765718f3355eee\" successfully"
Feb 13 15:28:53.149632 containerd[1495]: time="2025-02-13T15:28:53.149588023Z" level=info msg="StopPodSandbox for \"f5633c80af761908a5b4b2b32e77b1b33bf9cacc3ec3fd420c765718f3355eee\" returns successfully"
Feb 13 15:28:53.149886 containerd[1495]: time="2025-02-13T15:28:53.149861928Z" level=info msg="RemovePodSandbox for \"f5633c80af761908a5b4b2b32e77b1b33bf9cacc3ec3fd420c765718f3355eee\""
Feb 13 15:28:53.149947 containerd[1495]: time="2025-02-13T15:28:53.149888127Z" level=info msg="Forcibly stopping sandbox \"f5633c80af761908a5b4b2b32e77b1b33bf9cacc3ec3fd420c765718f3355eee\""
Feb 13 15:28:53.149998 containerd[1495]: time="2025-02-13T15:28:53.149966804Z" level=info msg="TearDown network for sandbox \"f5633c80af761908a5b4b2b32e77b1b33bf9cacc3ec3fd420c765718f3355eee\" successfully"
Feb 13 15:28:53.177631 containerd[1495]: time="2025-02-13T15:28:53.177536361Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"f5633c80af761908a5b4b2b32e77b1b33bf9cacc3ec3fd420c765718f3355eee\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
Feb 13 15:28:53.177631 containerd[1495]: time="2025-02-13T15:28:53.177577087Z" level=info msg="RemovePodSandbox \"f5633c80af761908a5b4b2b32e77b1b33bf9cacc3ec3fd420c765718f3355eee\" returns successfully" Feb 13 15:28:53.177934 containerd[1495]: time="2025-02-13T15:28:53.177895766Z" level=info msg="StopPodSandbox for \"88eff125ed329781ac6966b8bafc601b531d00e9c472f0d107fc725f239c99b9\"" Feb 13 15:28:53.178011 containerd[1495]: time="2025-02-13T15:28:53.177992167Z" level=info msg="TearDown network for sandbox \"88eff125ed329781ac6966b8bafc601b531d00e9c472f0d107fc725f239c99b9\" successfully" Feb 13 15:28:53.178011 containerd[1495]: time="2025-02-13T15:28:53.178005462Z" level=info msg="StopPodSandbox for \"88eff125ed329781ac6966b8bafc601b531d00e9c472f0d107fc725f239c99b9\" returns successfully" Feb 13 15:28:53.178258 containerd[1495]: time="2025-02-13T15:28:53.178224774Z" level=info msg="RemovePodSandbox for \"88eff125ed329781ac6966b8bafc601b531d00e9c472f0d107fc725f239c99b9\"" Feb 13 15:28:53.178258 containerd[1495]: time="2025-02-13T15:28:53.178251574Z" level=info msg="Forcibly stopping sandbox \"88eff125ed329781ac6966b8bafc601b531d00e9c472f0d107fc725f239c99b9\"" Feb 13 15:28:53.178387 containerd[1495]: time="2025-02-13T15:28:53.178327246Z" level=info msg="TearDown network for sandbox \"88eff125ed329781ac6966b8bafc601b531d00e9c472f0d107fc725f239c99b9\" successfully" Feb 13 15:28:53.193010 containerd[1495]: time="2025-02-13T15:28:53.192985010Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"88eff125ed329781ac6966b8bafc601b531d00e9c472f0d107fc725f239c99b9\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Feb 13 15:28:53.193086 containerd[1495]: time="2025-02-13T15:28:53.193016779Z" level=info msg="RemovePodSandbox \"88eff125ed329781ac6966b8bafc601b531d00e9c472f0d107fc725f239c99b9\" returns successfully" Feb 13 15:28:53.193301 containerd[1495]: time="2025-02-13T15:28:53.193272279Z" level=info msg="StopPodSandbox for \"783ab3cab4ee3eca858e0910259de5469727b1ebb84b9dfff10aa54dc395791e\"" Feb 13 15:28:53.193387 containerd[1495]: time="2025-02-13T15:28:53.193363951Z" level=info msg="TearDown network for sandbox \"783ab3cab4ee3eca858e0910259de5469727b1ebb84b9dfff10aa54dc395791e\" successfully" Feb 13 15:28:53.193387 containerd[1495]: time="2025-02-13T15:28:53.193381214Z" level=info msg="StopPodSandbox for \"783ab3cab4ee3eca858e0910259de5469727b1ebb84b9dfff10aa54dc395791e\" returns successfully" Feb 13 15:28:53.194281 containerd[1495]: time="2025-02-13T15:28:53.194233675Z" level=info msg="RemovePodSandbox for \"783ab3cab4ee3eca858e0910259de5469727b1ebb84b9dfff10aa54dc395791e\"" Feb 13 15:28:53.194281 containerd[1495]: time="2025-02-13T15:28:53.194261457Z" level=info msg="Forcibly stopping sandbox \"783ab3cab4ee3eca858e0910259de5469727b1ebb84b9dfff10aa54dc395791e\"" Feb 13 15:28:53.194373 containerd[1495]: time="2025-02-13T15:28:53.194332811Z" level=info msg="TearDown network for sandbox \"783ab3cab4ee3eca858e0910259de5469727b1ebb84b9dfff10aa54dc395791e\" successfully" Feb 13 15:28:53.214074 containerd[1495]: time="2025-02-13T15:28:53.214021174Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"783ab3cab4ee3eca858e0910259de5469727b1ebb84b9dfff10aa54dc395791e\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Feb 13 15:28:53.214074 containerd[1495]: time="2025-02-13T15:28:53.214058794Z" level=info msg="RemovePodSandbox \"783ab3cab4ee3eca858e0910259de5469727b1ebb84b9dfff10aa54dc395791e\" returns successfully" Feb 13 15:28:53.214307 containerd[1495]: time="2025-02-13T15:28:53.214284097Z" level=info msg="StopPodSandbox for \"6ff165ccf2d77d03c0393a382a7a1981febb503cc6026c3c6723d975b507b23c\"" Feb 13 15:28:53.214399 containerd[1495]: time="2025-02-13T15:28:53.214379226Z" level=info msg="TearDown network for sandbox \"6ff165ccf2d77d03c0393a382a7a1981febb503cc6026c3c6723d975b507b23c\" successfully" Feb 13 15:28:53.214443 containerd[1495]: time="2025-02-13T15:28:53.214396439Z" level=info msg="StopPodSandbox for \"6ff165ccf2d77d03c0393a382a7a1981febb503cc6026c3c6723d975b507b23c\" returns successfully" Feb 13 15:28:53.215669 containerd[1495]: time="2025-02-13T15:28:53.214613315Z" level=info msg="RemovePodSandbox for \"6ff165ccf2d77d03c0393a382a7a1981febb503cc6026c3c6723d975b507b23c\"" Feb 13 15:28:53.215669 containerd[1495]: time="2025-02-13T15:28:53.214638873Z" level=info msg="Forcibly stopping sandbox \"6ff165ccf2d77d03c0393a382a7a1981febb503cc6026c3c6723d975b507b23c\"" Feb 13 15:28:53.215669 containerd[1495]: time="2025-02-13T15:28:53.214711571Z" level=info msg="TearDown network for sandbox \"6ff165ccf2d77d03c0393a382a7a1981febb503cc6026c3c6723d975b507b23c\" successfully" Feb 13 15:28:53.252331 containerd[1495]: time="2025-02-13T15:28:53.252278556Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"6ff165ccf2d77d03c0393a382a7a1981febb503cc6026c3c6723d975b507b23c\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Feb 13 15:28:53.252331 containerd[1495]: time="2025-02-13T15:28:53.252329752Z" level=info msg="RemovePodSandbox \"6ff165ccf2d77d03c0393a382a7a1981febb503cc6026c3c6723d975b507b23c\" returns successfully" Feb 13 15:28:53.252809 containerd[1495]: time="2025-02-13T15:28:53.252782913Z" level=info msg="StopPodSandbox for \"7cba264e65e7e5f1536749acc9df7326036aa529ccc12668c4923791546ab529\"" Feb 13 15:28:53.252937 containerd[1495]: time="2025-02-13T15:28:53.252888341Z" level=info msg="TearDown network for sandbox \"7cba264e65e7e5f1536749acc9df7326036aa529ccc12668c4923791546ab529\" successfully" Feb 13 15:28:53.252937 containerd[1495]: time="2025-02-13T15:28:53.252934037Z" level=info msg="StopPodSandbox for \"7cba264e65e7e5f1536749acc9df7326036aa529ccc12668c4923791546ab529\" returns successfully" Feb 13 15:28:53.253205 containerd[1495]: time="2025-02-13T15:28:53.253182673Z" level=info msg="RemovePodSandbox for \"7cba264e65e7e5f1536749acc9df7326036aa529ccc12668c4923791546ab529\"" Feb 13 15:28:53.253268 containerd[1495]: time="2025-02-13T15:28:53.253206338Z" level=info msg="Forcibly stopping sandbox \"7cba264e65e7e5f1536749acc9df7326036aa529ccc12668c4923791546ab529\"" Feb 13 15:28:53.253318 containerd[1495]: time="2025-02-13T15:28:53.253276881Z" level=info msg="TearDown network for sandbox \"7cba264e65e7e5f1536749acc9df7326036aa529ccc12668c4923791546ab529\" successfully" Feb 13 15:28:53.298962 containerd[1495]: time="2025-02-13T15:28:53.298917431Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"7cba264e65e7e5f1536749acc9df7326036aa529ccc12668c4923791546ab529\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Feb 13 15:28:53.298962 containerd[1495]: time="2025-02-13T15:28:53.298960532Z" level=info msg="RemovePodSandbox \"7cba264e65e7e5f1536749acc9df7326036aa529ccc12668c4923791546ab529\" returns successfully" Feb 13 15:28:53.299262 containerd[1495]: time="2025-02-13T15:28:53.299238855Z" level=info msg="StopPodSandbox for \"49f1196780b8e0339457d215d602d2dc53ee15be785d54d0ef9b507b9f6be0a3\"" Feb 13 15:28:53.299376 containerd[1495]: time="2025-02-13T15:28:53.299326830Z" level=info msg="TearDown network for sandbox \"49f1196780b8e0339457d215d602d2dc53ee15be785d54d0ef9b507b9f6be0a3\" successfully" Feb 13 15:28:53.299376 containerd[1495]: time="2025-02-13T15:28:53.299371144Z" level=info msg="StopPodSandbox for \"49f1196780b8e0339457d215d602d2dc53ee15be785d54d0ef9b507b9f6be0a3\" returns successfully" Feb 13 15:28:53.299644 containerd[1495]: time="2025-02-13T15:28:53.299614861Z" level=info msg="RemovePodSandbox for \"49f1196780b8e0339457d215d602d2dc53ee15be785d54d0ef9b507b9f6be0a3\"" Feb 13 15:28:53.299644 containerd[1495]: time="2025-02-13T15:28:53.299634247Z" level=info msg="Forcibly stopping sandbox \"49f1196780b8e0339457d215d602d2dc53ee15be785d54d0ef9b507b9f6be0a3\"" Feb 13 15:28:53.299722 containerd[1495]: time="2025-02-13T15:28:53.299692437Z" level=info msg="TearDown network for sandbox \"49f1196780b8e0339457d215d602d2dc53ee15be785d54d0ef9b507b9f6be0a3\" successfully" Feb 13 15:28:53.321975 containerd[1495]: time="2025-02-13T15:28:53.321918156Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"49f1196780b8e0339457d215d602d2dc53ee15be785d54d0ef9b507b9f6be0a3\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Feb 13 15:28:53.322029 containerd[1495]: time="2025-02-13T15:28:53.321988277Z" level=info msg="RemovePodSandbox \"49f1196780b8e0339457d215d602d2dc53ee15be785d54d0ef9b507b9f6be0a3\" returns successfully" Feb 13 15:28:53.322356 containerd[1495]: time="2025-02-13T15:28:53.322330780Z" level=info msg="StopPodSandbox for \"ea9766a98a29ed507fc9f0b17c3c0bb97f9e63eb6bdaeaf3ebb5fce0e56546d4\"" Feb 13 15:28:53.322476 containerd[1495]: time="2025-02-13T15:28:53.322432551Z" level=info msg="TearDown network for sandbox \"ea9766a98a29ed507fc9f0b17c3c0bb97f9e63eb6bdaeaf3ebb5fce0e56546d4\" successfully" Feb 13 15:28:53.322476 containerd[1495]: time="2025-02-13T15:28:53.322470653Z" level=info msg="StopPodSandbox for \"ea9766a98a29ed507fc9f0b17c3c0bb97f9e63eb6bdaeaf3ebb5fce0e56546d4\" returns successfully" Feb 13 15:28:53.322861 containerd[1495]: time="2025-02-13T15:28:53.322835890Z" level=info msg="RemovePodSandbox for \"ea9766a98a29ed507fc9f0b17c3c0bb97f9e63eb6bdaeaf3ebb5fce0e56546d4\"" Feb 13 15:28:53.322918 containerd[1495]: time="2025-02-13T15:28:53.322863221Z" level=info msg="Forcibly stopping sandbox \"ea9766a98a29ed507fc9f0b17c3c0bb97f9e63eb6bdaeaf3ebb5fce0e56546d4\"" Feb 13 15:28:53.323008 containerd[1495]: time="2025-02-13T15:28:53.322963349Z" level=info msg="TearDown network for sandbox \"ea9766a98a29ed507fc9f0b17c3c0bb97f9e63eb6bdaeaf3ebb5fce0e56546d4\" successfully" Feb 13 15:28:53.343617 containerd[1495]: time="2025-02-13T15:28:53.343583180Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"ea9766a98a29ed507fc9f0b17c3c0bb97f9e63eb6bdaeaf3ebb5fce0e56546d4\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Feb 13 15:28:53.343679 containerd[1495]: time="2025-02-13T15:28:53.343632123Z" level=info msg="RemovePodSandbox \"ea9766a98a29ed507fc9f0b17c3c0bb97f9e63eb6bdaeaf3ebb5fce0e56546d4\" returns successfully" Feb 13 15:28:53.343999 containerd[1495]: time="2025-02-13T15:28:53.343973895Z" level=info msg="StopPodSandbox for \"1f1b29e69b25874b131043ccdb4a51202320c60b2c3ba21b6a0dfeb1c12b2cf9\"" Feb 13 15:28:53.344116 containerd[1495]: time="2025-02-13T15:28:53.344098008Z" level=info msg="TearDown network for sandbox \"1f1b29e69b25874b131043ccdb4a51202320c60b2c3ba21b6a0dfeb1c12b2cf9\" successfully" Feb 13 15:28:53.344157 containerd[1495]: time="2025-02-13T15:28:53.344115100Z" level=info msg="StopPodSandbox for \"1f1b29e69b25874b131043ccdb4a51202320c60b2c3ba21b6a0dfeb1c12b2cf9\" returns successfully" Feb 13 15:28:53.344395 containerd[1495]: time="2025-02-13T15:28:53.344372704Z" level=info msg="RemovePodSandbox for \"1f1b29e69b25874b131043ccdb4a51202320c60b2c3ba21b6a0dfeb1c12b2cf9\"" Feb 13 15:28:53.344395 containerd[1495]: time="2025-02-13T15:28:53.344394475Z" level=info msg="Forcibly stopping sandbox \"1f1b29e69b25874b131043ccdb4a51202320c60b2c3ba21b6a0dfeb1c12b2cf9\"" Feb 13 15:28:53.344500 containerd[1495]: time="2025-02-13T15:28:53.344471279Z" level=info msg="TearDown network for sandbox \"1f1b29e69b25874b131043ccdb4a51202320c60b2c3ba21b6a0dfeb1c12b2cf9\" successfully" Feb 13 15:28:53.412937 containerd[1495]: time="2025-02-13T15:28:53.412891709Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"1f1b29e69b25874b131043ccdb4a51202320c60b2c3ba21b6a0dfeb1c12b2cf9\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Feb 13 15:28:53.412937 containerd[1495]: time="2025-02-13T15:28:53.412944398Z" level=info msg="RemovePodSandbox \"1f1b29e69b25874b131043ccdb4a51202320c60b2c3ba21b6a0dfeb1c12b2cf9\" returns successfully" Feb 13 15:28:53.976351 kubelet[1821]: E0213 15:28:53.976282 1821 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 15:28:54.977453 kubelet[1821]: E0213 15:28:54.977419 1821 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 15:28:55.978094 kubelet[1821]: E0213 15:28:55.978038 1821 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 15:28:56.979189 kubelet[1821]: E0213 15:28:56.979137 1821 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 15:28:57.563896 kubelet[1821]: I0213 15:28:57.563849 1821 topology_manager.go:215] "Topology Admit Handler" podUID="22da445f-caab-44bd-a955-5a5339376f4c" podNamespace="default" podName="test-pod-1" Feb 13 15:28:57.569247 systemd[1]: Created slice kubepods-besteffort-pod22da445f_caab_44bd_a955_5a5339376f4c.slice - libcontainer container kubepods-besteffort-pod22da445f_caab_44bd_a955_5a5339376f4c.slice. 
Feb 13 15:28:57.686700 kubelet[1821]: I0213 15:28:57.686669 1821 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-2716f216-32a8-4e2e-bd75-25b09b5815fe\" (UniqueName: \"kubernetes.io/nfs/22da445f-caab-44bd-a955-5a5339376f4c-pvc-2716f216-32a8-4e2e-bd75-25b09b5815fe\") pod \"test-pod-1\" (UID: \"22da445f-caab-44bd-a955-5a5339376f4c\") " pod="default/test-pod-1" Feb 13 15:28:57.686700 kubelet[1821]: I0213 15:28:57.686708 1821 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q8nr6\" (UniqueName: \"kubernetes.io/projected/22da445f-caab-44bd-a955-5a5339376f4c-kube-api-access-q8nr6\") pod \"test-pod-1\" (UID: \"22da445f-caab-44bd-a955-5a5339376f4c\") " pod="default/test-pod-1" Feb 13 15:28:57.820773 kernel: FS-Cache: Loaded Feb 13 15:28:57.887253 kernel: RPC: Registered named UNIX socket transport module. Feb 13 15:28:57.887354 kernel: RPC: Registered udp transport module. Feb 13 15:28:57.887382 kernel: RPC: Registered tcp transport module. Feb 13 15:28:57.887858 kernel: RPC: Registered tcp-with-tls transport module. Feb 13 15:28:57.889350 kernel: RPC: Registered tcp NFSv4.1 backchannel transport module. 
Feb 13 15:28:57.979979 kubelet[1821]: E0213 15:28:57.979908 1821 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 15:28:58.150083 kernel: NFS: Registering the id_resolver key type Feb 13 15:28:58.150206 kernel: Key type id_resolver registered Feb 13 15:28:58.150227 kernel: Key type id_legacy registered Feb 13 15:28:58.177563 nfsidmap[3685]: nss_getpwnam: name 'root@nfs-server-provisioner.default.svc.cluster.local' does not map into domain 'localdomain' Feb 13 15:28:58.181748 nfsidmap[3688]: nss_name_to_gid: name 'root@nfs-server-provisioner.default.svc.cluster.local' does not map into domain 'localdomain' Feb 13 15:28:58.472409 containerd[1495]: time="2025-02-13T15:28:58.472365576Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:test-pod-1,Uid:22da445f-caab-44bd-a955-5a5339376f4c,Namespace:default,Attempt:0,}" Feb 13 15:28:58.690716 systemd-networkd[1413]: cali5ec59c6bf6e: Link UP Feb 13 15:28:58.691702 systemd-networkd[1413]: cali5ec59c6bf6e: Gained carrier Feb 13 15:28:58.779561 containerd[1495]: 2025-02-13 15:28:58.598 [INFO][3692] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {10.0.0.78-k8s-test--pod--1-eth0 default 22da445f-caab-44bd-a955-5a5339376f4c 1316 0 2025-02-13 15:28:34 +0000 UTC map[projectcalico.org/namespace:default projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:default] map[] [] [] []} {k8s 10.0.0.78 test-pod-1 eth0 default [] [] [kns.default ksa.default.default] cali5ec59c6bf6e [] []}} ContainerID="9926450d0ae5ec06d11f7e718bed290233664b40162aeb2e8ac8a720c24f2097" Namespace="default" Pod="test-pod-1" WorkloadEndpoint="10.0.0.78-k8s-test--pod--1-" Feb 13 15:28:58.779561 containerd[1495]: 2025-02-13 15:28:58.598 [INFO][3692] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="9926450d0ae5ec06d11f7e718bed290233664b40162aeb2e8ac8a720c24f2097" Namespace="default" 
Pod="test-pod-1" WorkloadEndpoint="10.0.0.78-k8s-test--pod--1-eth0" Feb 13 15:28:58.779561 containerd[1495]: 2025-02-13 15:28:58.627 [INFO][3705] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="9926450d0ae5ec06d11f7e718bed290233664b40162aeb2e8ac8a720c24f2097" HandleID="k8s-pod-network.9926450d0ae5ec06d11f7e718bed290233664b40162aeb2e8ac8a720c24f2097" Workload="10.0.0.78-k8s-test--pod--1-eth0" Feb 13 15:28:58.779561 containerd[1495]: 2025-02-13 15:28:58.635 [INFO][3705] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="9926450d0ae5ec06d11f7e718bed290233664b40162aeb2e8ac8a720c24f2097" HandleID="k8s-pod-network.9926450d0ae5ec06d11f7e718bed290233664b40162aeb2e8ac8a720c24f2097" Workload="10.0.0.78-k8s-test--pod--1-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002deb60), Attrs:map[string]string{"namespace":"default", "node":"10.0.0.78", "pod":"test-pod-1", "timestamp":"2025-02-13 15:28:58.627170639 +0000 UTC"}, Hostname:"10.0.0.78", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Feb 13 15:28:58.779561 containerd[1495]: 2025-02-13 15:28:58.635 [INFO][3705] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Feb 13 15:28:58.779561 containerd[1495]: 2025-02-13 15:28:58.635 [INFO][3705] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Feb 13 15:28:58.779561 containerd[1495]: 2025-02-13 15:28:58.635 [INFO][3705] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host '10.0.0.78' Feb 13 15:28:58.779561 containerd[1495]: 2025-02-13 15:28:58.636 [INFO][3705] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.9926450d0ae5ec06d11f7e718bed290233664b40162aeb2e8ac8a720c24f2097" host="10.0.0.78" Feb 13 15:28:58.779561 containerd[1495]: 2025-02-13 15:28:58.640 [INFO][3705] ipam/ipam.go 372: Looking up existing affinities for host host="10.0.0.78" Feb 13 15:28:58.779561 containerd[1495]: 2025-02-13 15:28:58.643 [INFO][3705] ipam/ipam.go 489: Trying affinity for 192.168.18.64/26 host="10.0.0.78" Feb 13 15:28:58.779561 containerd[1495]: 2025-02-13 15:28:58.645 [INFO][3705] ipam/ipam.go 155: Attempting to load block cidr=192.168.18.64/26 host="10.0.0.78" Feb 13 15:28:58.779561 containerd[1495]: 2025-02-13 15:28:58.647 [INFO][3705] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.18.64/26 host="10.0.0.78" Feb 13 15:28:58.779561 containerd[1495]: 2025-02-13 15:28:58.647 [INFO][3705] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.18.64/26 handle="k8s-pod-network.9926450d0ae5ec06d11f7e718bed290233664b40162aeb2e8ac8a720c24f2097" host="10.0.0.78" Feb 13 15:28:58.779561 containerd[1495]: 2025-02-13 15:28:58.648 [INFO][3705] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.9926450d0ae5ec06d11f7e718bed290233664b40162aeb2e8ac8a720c24f2097 Feb 13 15:28:58.779561 containerd[1495]: 2025-02-13 15:28:58.665 [INFO][3705] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.18.64/26 handle="k8s-pod-network.9926450d0ae5ec06d11f7e718bed290233664b40162aeb2e8ac8a720c24f2097" host="10.0.0.78" Feb 13 15:28:58.779561 containerd[1495]: 2025-02-13 15:28:58.685 [INFO][3705] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.18.68/26] block=192.168.18.64/26 
handle="k8s-pod-network.9926450d0ae5ec06d11f7e718bed290233664b40162aeb2e8ac8a720c24f2097" host="10.0.0.78" Feb 13 15:28:58.779561 containerd[1495]: 2025-02-13 15:28:58.685 [INFO][3705] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.18.68/26] handle="k8s-pod-network.9926450d0ae5ec06d11f7e718bed290233664b40162aeb2e8ac8a720c24f2097" host="10.0.0.78" Feb 13 15:28:58.779561 containerd[1495]: 2025-02-13 15:28:58.685 [INFO][3705] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Feb 13 15:28:58.779561 containerd[1495]: 2025-02-13 15:28:58.685 [INFO][3705] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.18.68/26] IPv6=[] ContainerID="9926450d0ae5ec06d11f7e718bed290233664b40162aeb2e8ac8a720c24f2097" HandleID="k8s-pod-network.9926450d0ae5ec06d11f7e718bed290233664b40162aeb2e8ac8a720c24f2097" Workload="10.0.0.78-k8s-test--pod--1-eth0" Feb 13 15:28:58.779561 containerd[1495]: 2025-02-13 15:28:58.688 [INFO][3692] cni-plugin/k8s.go 386: Populated endpoint ContainerID="9926450d0ae5ec06d11f7e718bed290233664b40162aeb2e8ac8a720c24f2097" Namespace="default" Pod="test-pod-1" WorkloadEndpoint="10.0.0.78-k8s-test--pod--1-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"10.0.0.78-k8s-test--pod--1-eth0", GenerateName:"", Namespace:"default", SelfLink:"", UID:"22da445f-caab-44bd-a955-5a5339376f4c", ResourceVersion:"1316", Generation:0, CreationTimestamp:time.Date(2025, time.February, 13, 15, 28, 34, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"projectcalico.org/namespace":"default", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"default"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"10.0.0.78", 
ContainerID:"", Pod:"test-pod-1", Endpoint:"eth0", ServiceAccountName:"default", IPNetworks:[]string{"192.168.18.68/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.default", "ksa.default.default"}, InterfaceName:"cali5ec59c6bf6e", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Feb 13 15:28:58.780334 containerd[1495]: 2025-02-13 15:28:58.688 [INFO][3692] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.18.68/32] ContainerID="9926450d0ae5ec06d11f7e718bed290233664b40162aeb2e8ac8a720c24f2097" Namespace="default" Pod="test-pod-1" WorkloadEndpoint="10.0.0.78-k8s-test--pod--1-eth0" Feb 13 15:28:58.780334 containerd[1495]: 2025-02-13 15:28:58.688 [INFO][3692] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali5ec59c6bf6e ContainerID="9926450d0ae5ec06d11f7e718bed290233664b40162aeb2e8ac8a720c24f2097" Namespace="default" Pod="test-pod-1" WorkloadEndpoint="10.0.0.78-k8s-test--pod--1-eth0" Feb 13 15:28:58.780334 containerd[1495]: 2025-02-13 15:28:58.691 [INFO][3692] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="9926450d0ae5ec06d11f7e718bed290233664b40162aeb2e8ac8a720c24f2097" Namespace="default" Pod="test-pod-1" WorkloadEndpoint="10.0.0.78-k8s-test--pod--1-eth0" Feb 13 15:28:58.780334 containerd[1495]: 2025-02-13 15:28:58.691 [INFO][3692] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="9926450d0ae5ec06d11f7e718bed290233664b40162aeb2e8ac8a720c24f2097" Namespace="default" Pod="test-pod-1" WorkloadEndpoint="10.0.0.78-k8s-test--pod--1-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"10.0.0.78-k8s-test--pod--1-eth0", GenerateName:"", Namespace:"default", SelfLink:"", UID:"22da445f-caab-44bd-a955-5a5339376f4c", ResourceVersion:"1316", Generation:0, CreationTimestamp:time.Date(2025, time.February, 13, 15, 
28, 34, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"projectcalico.org/namespace":"default", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"default"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"10.0.0.78", ContainerID:"9926450d0ae5ec06d11f7e718bed290233664b40162aeb2e8ac8a720c24f2097", Pod:"test-pod-1", Endpoint:"eth0", ServiceAccountName:"default", IPNetworks:[]string{"192.168.18.68/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.default", "ksa.default.default"}, InterfaceName:"cali5ec59c6bf6e", MAC:"52:47:3d:5a:a0:d6", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Feb 13 15:28:58.780334 containerd[1495]: 2025-02-13 15:28:58.776 [INFO][3692] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="9926450d0ae5ec06d11f7e718bed290233664b40162aeb2e8ac8a720c24f2097" Namespace="default" Pod="test-pod-1" WorkloadEndpoint="10.0.0.78-k8s-test--pod--1-eth0" Feb 13 15:28:58.932883 containerd[1495]: time="2025-02-13T15:28:58.932126698Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 13 15:28:58.932883 containerd[1495]: time="2025-02-13T15:28:58.932684867Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 13 15:28:58.932883 containerd[1495]: time="2025-02-13T15:28:58.932699304Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 15:28:58.932883 containerd[1495]: time="2025-02-13T15:28:58.932798590Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 15:28:58.956065 systemd[1]: Started cri-containerd-9926450d0ae5ec06d11f7e718bed290233664b40162aeb2e8ac8a720c24f2097.scope - libcontainer container 9926450d0ae5ec06d11f7e718bed290233664b40162aeb2e8ac8a720c24f2097. Feb 13 15:28:58.967353 systemd-resolved[1332]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Feb 13 15:28:58.981122 kubelet[1821]: E0213 15:28:58.981084 1821 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 15:28:58.990961 containerd[1495]: time="2025-02-13T15:28:58.990915918Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:test-pod-1,Uid:22da445f-caab-44bd-a955-5a5339376f4c,Namespace:default,Attempt:0,} returns sandbox id \"9926450d0ae5ec06d11f7e718bed290233664b40162aeb2e8ac8a720c24f2097\"" Feb 13 15:28:58.992704 containerd[1495]: time="2025-02-13T15:28:58.992421215Z" level=info msg="PullImage \"ghcr.io/flatcar/nginx:latest\"" Feb 13 15:28:59.671014 containerd[1495]: time="2025-02-13T15:28:59.670951306Z" level=info msg="ImageUpdate event name:\"ghcr.io/flatcar/nginx:latest\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 15:28:59.679809 containerd[1495]: time="2025-02-13T15:28:59.679751107Z" level=info msg="stop pulling image ghcr.io/flatcar/nginx:latest: active requests=0, bytes read=61" Feb 13 15:28:59.682721 containerd[1495]: time="2025-02-13T15:28:59.682654448Z" level=info msg="Pulled image \"ghcr.io/flatcar/nginx:latest\" with image id \"sha256:fe94eb5f0c9c8d0ca277aa8cd5940f1faf5970175bf373932babc578545deda8\", repo tag \"ghcr.io/flatcar/nginx:latest\", repo digest \"ghcr.io/flatcar/nginx@sha256:d9bc3da999da9f147f1277c7b18292486847e8f39f95fcf81d914d0c22815faf\", size \"73054371\" in 690.205632ms" Feb 13 15:28:59.682721 containerd[1495]: time="2025-02-13T15:28:59.682695465Z" level=info msg="PullImage \"ghcr.io/flatcar/nginx:latest\" 
returns image reference \"sha256:fe94eb5f0c9c8d0ca277aa8cd5940f1faf5970175bf373932babc578545deda8\"" Feb 13 15:28:59.684378 containerd[1495]: time="2025-02-13T15:28:59.684356713Z" level=info msg="CreateContainer within sandbox \"9926450d0ae5ec06d11f7e718bed290233664b40162aeb2e8ac8a720c24f2097\" for container &ContainerMetadata{Name:test,Attempt:0,}" Feb 13 15:28:59.741026 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2316601552.mount: Deactivated successfully. Feb 13 15:28:59.803189 containerd[1495]: time="2025-02-13T15:28:59.803148515Z" level=info msg="CreateContainer within sandbox \"9926450d0ae5ec06d11f7e718bed290233664b40162aeb2e8ac8a720c24f2097\" for &ContainerMetadata{Name:test,Attempt:0,} returns container id \"9cea7dc5b31e8295c56396f8946e895dd1c561141c54d2c6049f0cea5992e22c\"" Feb 13 15:28:59.803638 containerd[1495]: time="2025-02-13T15:28:59.803612325Z" level=info msg="StartContainer for \"9cea7dc5b31e8295c56396f8946e895dd1c561141c54d2c6049f0cea5992e22c\"" Feb 13 15:28:59.834864 systemd[1]: Started cri-containerd-9cea7dc5b31e8295c56396f8946e895dd1c561141c54d2c6049f0cea5992e22c.scope - libcontainer container 9cea7dc5b31e8295c56396f8946e895dd1c561141c54d2c6049f0cea5992e22c. 
Feb 13 15:28:59.897684 containerd[1495]: time="2025-02-13T15:28:59.897637409Z" level=info msg="StartContainer for \"9cea7dc5b31e8295c56396f8946e895dd1c561141c54d2c6049f0cea5992e22c\" returns successfully" Feb 13 15:28:59.945909 systemd-networkd[1413]: cali5ec59c6bf6e: Gained IPv6LL Feb 13 15:28:59.981627 kubelet[1821]: E0213 15:28:59.981582 1821 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 15:29:00.067228 kubelet[1821]: I0213 15:29:00.067200 1821 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="default/test-pod-1" podStartSLOduration=25.376288684 podStartE2EDuration="26.067158872s" podCreationTimestamp="2025-02-13 15:28:34 +0000 UTC" firstStartedPulling="2025-02-13 15:28:58.992082579 +0000 UTC m=+66.452204104" lastFinishedPulling="2025-02-13 15:28:59.682952777 +0000 UTC m=+67.143074292" observedRunningTime="2025-02-13 15:29:00.067080375 +0000 UTC m=+67.527201890" watchObservedRunningTime="2025-02-13 15:29:00.067158872 +0000 UTC m=+67.527280387" Feb 13 15:29:00.982291 kubelet[1821]: E0213 15:29:00.982240 1821 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 15:29:01.982833 kubelet[1821]: E0213 15:29:01.982788 1821 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 15:29:02.983782 kubelet[1821]: E0213 15:29:02.983714 1821 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"