Feb 13 15:24:34.870113 kernel: Linux version 6.6.71-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 13.3.1_p20240614 p1) 13.3.1 20240614, GNU ld (Gentoo 2.42 p6) 2.42.0) #1 SMP PREEMPT_DYNAMIC Thu Feb 13 13:54:58 -00 2025 Feb 13 15:24:34.870133 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=cd73eba291b8356dfc2c39f651cabef9206685f772c8949188fd366788d672c2 Feb 13 15:24:34.870144 kernel: BIOS-provided physical RAM map: Feb 13 15:24:34.870150 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009ffff] usable Feb 13 15:24:34.870156 kernel: BIOS-e820: [mem 0x0000000000100000-0x00000000007fffff] usable Feb 13 15:24:34.870162 kernel: BIOS-e820: [mem 0x0000000000800000-0x0000000000807fff] ACPI NVS Feb 13 15:24:34.870169 kernel: BIOS-e820: [mem 0x0000000000808000-0x000000000080afff] usable Feb 13 15:24:34.870176 kernel: BIOS-e820: [mem 0x000000000080b000-0x000000000080bfff] ACPI NVS Feb 13 15:24:34.870181 kernel: BIOS-e820: [mem 0x000000000080c000-0x0000000000810fff] usable Feb 13 15:24:34.870187 kernel: BIOS-e820: [mem 0x0000000000811000-0x00000000008fffff] ACPI NVS Feb 13 15:24:34.870196 kernel: BIOS-e820: [mem 0x0000000000900000-0x000000009bd3efff] usable Feb 13 15:24:34.870201 kernel: BIOS-e820: [mem 0x000000009bd3f000-0x000000009bdfffff] reserved Feb 13 15:24:34.870207 kernel: BIOS-e820: [mem 0x000000009be00000-0x000000009c8ecfff] usable Feb 13 15:24:34.870214 kernel: BIOS-e820: [mem 0x000000009c8ed000-0x000000009cb6cfff] reserved Feb 13 15:24:34.870221 kernel: BIOS-e820: [mem 0x000000009cb6d000-0x000000009cb7efff] ACPI data Feb 13 15:24:34.870228 kernel: BIOS-e820: [mem 0x000000009cb7f000-0x000000009cbfefff] ACPI NVS Feb 13 15:24:34.870236 kernel: BIOS-e820: [mem 0x000000009cbff000-0x000000009ce91fff] usable Feb 13 15:24:34.870243 kernel: BIOS-e820: [mem 0x000000009ce92000-0x000000009ce95fff] reserved Feb 13 15:24:34.870249 kernel: BIOS-e820: [mem 0x000000009ce96000-0x000000009ce97fff] ACPI NVS Feb 13 15:24:34.870256 kernel: BIOS-e820: [mem 0x000000009ce98000-0x000000009cedbfff] usable Feb 13 15:24:34.870262 kernel: BIOS-e820: [mem 0x000000009cedc000-0x000000009cf5ffff] reserved Feb 13 15:24:34.870269 kernel: BIOS-e820: [mem 0x000000009cf60000-0x000000009cffffff] ACPI NVS Feb 13 15:24:34.870275 kernel: BIOS-e820: [mem 0x00000000e0000000-0x00000000efffffff] reserved Feb 13 15:24:34.870282 kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved Feb 13 15:24:34.870288 kernel: BIOS-e820: [mem 0x00000000ffc00000-0x00000000ffffffff] reserved Feb 13 15:24:34.870295 kernel: BIOS-e820: [mem 0x000000fd00000000-0x000000ffffffffff] reserved Feb 13 15:24:34.870301 kernel: NX (Execute Disable) protection: active Feb 13 15:24:34.870310 kernel: APIC: Static calls initialized Feb 13 15:24:34.870316 kernel: e820: update [mem 0x9b351018-0x9b35ac57] usable ==> usable Feb 13 15:24:34.870323 kernel: e820: update [mem 0x9b351018-0x9b35ac57] usable ==> usable Feb 13 15:24:34.870329 kernel: e820: update [mem 0x9b314018-0x9b350e57] usable ==> usable Feb 13 15:24:34.870336 kernel: e820: update [mem 0x9b314018-0x9b350e57] usable ==> usable Feb 13 15:24:34.870342 kernel: extended physical RAM map: Feb 13 15:24:34.870348 kernel: reserve setup_data: [mem 0x0000000000000000-0x000000000009ffff] usable Feb 13 15:24:34.870355 kernel: reserve setup_data: [mem 
0x0000000000100000-0x00000000007fffff] usable Feb 13 15:24:34.870361 kernel: reserve setup_data: [mem 0x0000000000800000-0x0000000000807fff] ACPI NVS Feb 13 15:24:34.870368 kernel: reserve setup_data: [mem 0x0000000000808000-0x000000000080afff] usable Feb 13 15:24:34.870375 kernel: reserve setup_data: [mem 0x000000000080b000-0x000000000080bfff] ACPI NVS Feb 13 15:24:34.870383 kernel: reserve setup_data: [mem 0x000000000080c000-0x0000000000810fff] usable Feb 13 15:24:34.870390 kernel: reserve setup_data: [mem 0x0000000000811000-0x00000000008fffff] ACPI NVS Feb 13 15:24:34.870400 kernel: reserve setup_data: [mem 0x0000000000900000-0x000000009b314017] usable Feb 13 15:24:34.870406 kernel: reserve setup_data: [mem 0x000000009b314018-0x000000009b350e57] usable Feb 13 15:24:34.870413 kernel: reserve setup_data: [mem 0x000000009b350e58-0x000000009b351017] usable Feb 13 15:24:34.870420 kernel: reserve setup_data: [mem 0x000000009b351018-0x000000009b35ac57] usable Feb 13 15:24:34.870426 kernel: reserve setup_data: [mem 0x000000009b35ac58-0x000000009bd3efff] usable Feb 13 15:24:34.870435 kernel: reserve setup_data: [mem 0x000000009bd3f000-0x000000009bdfffff] reserved Feb 13 15:24:34.870442 kernel: reserve setup_data: [mem 0x000000009be00000-0x000000009c8ecfff] usable Feb 13 15:24:34.870449 kernel: reserve setup_data: [mem 0x000000009c8ed000-0x000000009cb6cfff] reserved Feb 13 15:24:34.870456 kernel: reserve setup_data: [mem 0x000000009cb6d000-0x000000009cb7efff] ACPI data Feb 13 15:24:34.870462 kernel: reserve setup_data: [mem 0x000000009cb7f000-0x000000009cbfefff] ACPI NVS Feb 13 15:24:34.870469 kernel: reserve setup_data: [mem 0x000000009cbff000-0x000000009ce91fff] usable Feb 13 15:24:34.870476 kernel: reserve setup_data: [mem 0x000000009ce92000-0x000000009ce95fff] reserved Feb 13 15:24:34.870482 kernel: reserve setup_data: [mem 0x000000009ce96000-0x000000009ce97fff] ACPI NVS Feb 13 15:24:34.870489 kernel: reserve setup_data: [mem 0x000000009ce98000-0x000000009cedbfff] usable Feb 13 15:24:34.870498 kernel: reserve setup_data: [mem 0x000000009cedc000-0x000000009cf5ffff] reserved Feb 13 15:24:34.870505 kernel: reserve setup_data: [mem 0x000000009cf60000-0x000000009cffffff] ACPI NVS Feb 13 15:24:34.870511 kernel: reserve setup_data: [mem 0x00000000e0000000-0x00000000efffffff] reserved Feb 13 15:24:34.870518 kernel: reserve setup_data: [mem 0x00000000feffc000-0x00000000feffffff] reserved Feb 13 15:24:34.870525 kernel: reserve setup_data: [mem 0x00000000ffc00000-0x00000000ffffffff] reserved Feb 13 15:24:34.870531 kernel: reserve setup_data: [mem 0x000000fd00000000-0x000000ffffffffff] reserved Feb 13 15:24:34.870538 kernel: efi: EFI v2.7 by EDK II Feb 13 15:24:34.870554 kernel: efi: SMBIOS=0x9c988000 ACPI=0x9cb7e000 ACPI 2.0=0x9cb7e014 MEMATTR=0x9ba0d198 RNG=0x9cb73018 Feb 13 15:24:34.870561 kernel: random: crng init done Feb 13 15:24:34.870568 kernel: efi: Remove mem142: MMIO range=[0xffc00000-0xffffffff] (4MB) from e820 map Feb 13 15:24:34.870575 kernel: e820: remove [mem 0xffc00000-0xffffffff] reserved Feb 13 15:24:34.870583 kernel: secureboot: Secure boot disabled Feb 13 15:24:34.870590 kernel: SMBIOS 2.8 present. 
Feb 13 15:24:34.870597 kernel: DMI: QEMU Standard PC (Q35 + ICH9, 2009), BIOS unknown 02/02/2022 Feb 13 15:24:34.870603 kernel: Hypervisor detected: KVM Feb 13 15:24:34.870610 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00 Feb 13 15:24:34.870617 kernel: kvm-clock: using sched offset of 2734753973 cycles Feb 13 15:24:34.870624 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns Feb 13 15:24:34.870631 kernel: tsc: Detected 2794.748 MHz processor Feb 13 15:24:34.870638 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved Feb 13 15:24:34.870645 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable Feb 13 15:24:34.870652 kernel: last_pfn = 0x9cedc max_arch_pfn = 0x400000000 Feb 13 15:24:34.870661 kernel: MTRR map: 4 entries (2 fixed + 2 variable; max 18), built from 8 variable MTRRs Feb 13 15:24:34.870668 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT Feb 13 15:24:34.870675 kernel: Using GB pages for direct mapping Feb 13 15:24:34.870682 kernel: ACPI: Early table checksum verification disabled Feb 13 15:24:34.870689 kernel: ACPI: RSDP 0x000000009CB7E014 000024 (v02 BOCHS ) Feb 13 15:24:34.870696 kernel: ACPI: XSDT 0x000000009CB7D0E8 000054 (v01 BOCHS BXPC 00000001 01000013) Feb 13 15:24:34.870703 kernel: ACPI: FACP 0x000000009CB79000 0000F4 (v03 BOCHS BXPC 00000001 BXPC 00000001) Feb 13 15:24:34.870710 kernel: ACPI: DSDT 0x000000009CB7A000 0021A8 (v01 BOCHS BXPC 00000001 BXPC 00000001) Feb 13 15:24:34.870717 kernel: ACPI: FACS 0x000000009CBDD000 000040 Feb 13 15:24:34.870726 kernel: ACPI: APIC 0x000000009CB78000 000090 (v01 BOCHS BXPC 00000001 BXPC 00000001) Feb 13 15:24:34.870733 kernel: ACPI: HPET 0x000000009CB77000 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001) Feb 13 15:24:34.870740 kernel: ACPI: MCFG 0x000000009CB76000 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001) Feb 13 15:24:34.870747 kernel: ACPI: WAET 0x000000009CB75000 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001) Feb 13 15:24:34.870759 kernel: ACPI: BGRT 0x000000009CB74000 000038 (v01 INTEL EDK2 00000002 01000013) Feb 13 15:24:34.870766 kernel: ACPI: Reserving FACP table memory at [mem 0x9cb79000-0x9cb790f3] Feb 13 15:24:34.870773 kernel: ACPI: Reserving DSDT table memory at [mem 0x9cb7a000-0x9cb7c1a7] Feb 13 15:24:34.870780 kernel: ACPI: Reserving FACS table memory at [mem 0x9cbdd000-0x9cbdd03f] Feb 13 15:24:34.870786 kernel: ACPI: Reserving APIC table memory at [mem 0x9cb78000-0x9cb7808f] Feb 13 15:24:34.870796 kernel: ACPI: Reserving HPET table memory at [mem 0x9cb77000-0x9cb77037] Feb 13 15:24:34.870834 kernel: ACPI: Reserving MCFG table memory at [mem 0x9cb76000-0x9cb7603b] Feb 13 15:24:34.870841 kernel: ACPI: Reserving WAET table memory at [mem 0x9cb75000-0x9cb75027] Feb 13 15:24:34.870847 kernel: ACPI: Reserving BGRT table memory at [mem 0x9cb74000-0x9cb74037] Feb 13 15:24:34.870854 kernel: No NUMA configuration found Feb 13 15:24:34.870861 kernel: Faking a node at [mem 0x0000000000000000-0x000000009cedbfff] Feb 13 15:24:34.870868 kernel: NODE_DATA(0) allocated [mem 0x9ce3a000-0x9ce3ffff] Feb 13 15:24:34.870875 kernel: Zone ranges: Feb 13 15:24:34.870882 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff] Feb 13 15:24:34.870891 kernel: DMA32 [mem 0x0000000001000000-0x000000009cedbfff] Feb 13 15:24:34.870898 kernel: Normal empty Feb 13 15:24:34.870905 kernel: Movable zone start for each node Feb 13 15:24:34.870912 kernel: Early memory node ranges Feb 13 15:24:34.870919 kernel: node 0: [mem 
0x0000000000001000-0x000000000009ffff] Feb 13 15:24:34.870925 kernel: node 0: [mem 0x0000000000100000-0x00000000007fffff] Feb 13 15:24:34.870932 kernel: node 0: [mem 0x0000000000808000-0x000000000080afff] Feb 13 15:24:34.870939 kernel: node 0: [mem 0x000000000080c000-0x0000000000810fff] Feb 13 15:24:34.870946 kernel: node 0: [mem 0x0000000000900000-0x000000009bd3efff] Feb 13 15:24:34.870955 kernel: node 0: [mem 0x000000009be00000-0x000000009c8ecfff] Feb 13 15:24:34.870961 kernel: node 0: [mem 0x000000009cbff000-0x000000009ce91fff] Feb 13 15:24:34.870968 kernel: node 0: [mem 0x000000009ce98000-0x000000009cedbfff] Feb 13 15:24:34.870975 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000009cedbfff] Feb 13 15:24:34.870982 kernel: On node 0, zone DMA: 1 pages in unavailable ranges Feb 13 15:24:34.870989 kernel: On node 0, zone DMA: 96 pages in unavailable ranges Feb 13 15:24:34.871003 kernel: On node 0, zone DMA: 8 pages in unavailable ranges Feb 13 15:24:34.871012 kernel: On node 0, zone DMA: 1 pages in unavailable ranges Feb 13 15:24:34.871019 kernel: On node 0, zone DMA: 239 pages in unavailable ranges Feb 13 15:24:34.871026 kernel: On node 0, zone DMA32: 193 pages in unavailable ranges Feb 13 15:24:34.871033 kernel: On node 0, zone DMA32: 786 pages in unavailable ranges Feb 13 15:24:34.871041 kernel: On node 0, zone DMA32: 6 pages in unavailable ranges Feb 13 15:24:34.871050 kernel: On node 0, zone DMA32: 12580 pages in unavailable ranges Feb 13 15:24:34.871057 kernel: ACPI: PM-Timer IO Port: 0x608 Feb 13 15:24:34.871064 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1]) Feb 13 15:24:34.871071 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23 Feb 13 15:24:34.871078 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl) Feb 13 15:24:34.871088 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level) Feb 13 15:24:34.871100 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level) Feb 13 15:24:34.871107 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level) Feb 13 15:24:34.871114 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level) Feb 13 15:24:34.871121 kernel: ACPI: Using ACPI (MADT) for SMP configuration information Feb 13 15:24:34.871128 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000 Feb 13 15:24:34.871135 kernel: TSC deadline timer available Feb 13 15:24:34.871142 kernel: smpboot: Allowing 4 CPUs, 0 hotplug CPUs Feb 13 15:24:34.871150 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write() Feb 13 15:24:34.871159 kernel: kvm-guest: KVM setup pv remote TLB flush Feb 13 15:24:34.871166 kernel: kvm-guest: setup PV sched yield Feb 13 15:24:34.871174 kernel: [mem 0x9d000000-0xdfffffff] available for PCI devices Feb 13 15:24:34.871181 kernel: Booting paravirtualized kernel on KVM Feb 13 15:24:34.871188 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns Feb 13 15:24:34.871195 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:4 nr_cpu_ids:4 nr_node_ids:1 Feb 13 15:24:34.871202 kernel: percpu: Embedded 58 pages/cpu s197032 r8192 d32344 u524288 Feb 13 15:24:34.871210 kernel: pcpu-alloc: s197032 r8192 d32344 u524288 alloc=1*2097152 Feb 13 15:24:34.871217 kernel: pcpu-alloc: [0] 0 1 2 3 Feb 13 15:24:34.871226 kernel: kvm-guest: PV spinlocks enabled Feb 13 15:24:34.871233 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear) Feb 13 15:24:34.871241 kernel: Kernel command line: rootflags=rw 
mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=cd73eba291b8356dfc2c39f651cabef9206685f772c8949188fd366788d672c2 Feb 13 15:24:34.871249 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space. Feb 13 15:24:34.871256 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear) Feb 13 15:24:34.871264 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear) Feb 13 15:24:34.871271 kernel: Fallback order for Node 0: 0 Feb 13 15:24:34.871278 kernel: Built 1 zonelists, mobility grouping on. Total pages: 629460 Feb 13 15:24:34.871285 kernel: Policy zone: DMA32 Feb 13 15:24:34.871294 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off Feb 13 15:24:34.871302 kernel: Memory: 2389768K/2565800K available (12288K kernel code, 2299K rwdata, 22736K rodata, 42976K init, 2216K bss, 175776K reserved, 0K cma-reserved) Feb 13 15:24:34.871309 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1 Feb 13 15:24:34.871316 kernel: ftrace: allocating 37920 entries in 149 pages Feb 13 15:24:34.871324 kernel: ftrace: allocated 149 pages with 4 groups Feb 13 15:24:34.871331 kernel: Dynamic Preempt: voluntary Feb 13 15:24:34.871338 kernel: rcu: Preemptible hierarchical RCU implementation. Feb 13 15:24:34.871346 kernel: rcu: RCU event tracing is enabled. Feb 13 15:24:34.871353 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=4. Feb 13 15:24:34.871363 kernel: Trampoline variant of Tasks RCU enabled. Feb 13 15:24:34.871370 kernel: Rude variant of Tasks RCU enabled. Feb 13 15:24:34.871377 kernel: Tracing variant of Tasks RCU enabled. Feb 13 15:24:34.871384 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies. Feb 13 15:24:34.871392 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4 Feb 13 15:24:34.871399 kernel: NR_IRQS: 33024, nr_irqs: 456, preallocated irqs: 16 Feb 13 15:24:34.871406 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention. Feb 13 15:24:34.871413 kernel: Console: colour dummy device 80x25 Feb 13 15:24:34.871420 kernel: printk: console [ttyS0] enabled Feb 13 15:24:34.871429 kernel: ACPI: Core revision 20230628 Feb 13 15:24:34.871437 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 19112604467 ns Feb 13 15:24:34.871444 kernel: APIC: Switch to symmetric I/O mode setup Feb 13 15:24:34.871451 kernel: x2apic enabled Feb 13 15:24:34.871458 kernel: APIC: Switched APIC routing to: physical x2apic Feb 13 15:24:34.871466 kernel: kvm-guest: APIC: send_IPI_mask() replaced with kvm_send_ipi_mask() Feb 13 15:24:34.871473 kernel: kvm-guest: APIC: send_IPI_mask_allbutself() replaced with kvm_send_ipi_mask_allbutself() Feb 13 15:24:34.871480 kernel: kvm-guest: setup PV IPIs Feb 13 15:24:34.871488 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1 Feb 13 15:24:34.871497 kernel: tsc: Marking TSC unstable due to TSCs unsynchronized Feb 13 15:24:34.871505 kernel: Calibrating delay loop (skipped) preset value.. 
5589.49 BogoMIPS (lpj=2794748) Feb 13 15:24:34.871512 kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated Feb 13 15:24:34.871519 kernel: Last level iTLB entries: 4KB 512, 2MB 255, 4MB 127 Feb 13 15:24:34.871526 kernel: Last level dTLB entries: 4KB 512, 2MB 255, 4MB 127, 1GB 0 Feb 13 15:24:34.871534 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization Feb 13 15:24:34.871541 kernel: Spectre V2 : Mitigation: Retpolines Feb 13 15:24:34.871556 kernel: Spectre V2 : Spectre v2 / SpectreRSB mitigation: Filling RSB on context switch Feb 13 15:24:34.871564 kernel: Spectre V2 : Spectre v2 / SpectreRSB : Filling RSB on VMEXIT Feb 13 15:24:34.871573 kernel: Spectre V2 : Enabling Speculation Barrier for firmware calls Feb 13 15:24:34.871580 kernel: RETBleed: Mitigation: untrained return thunk Feb 13 15:24:34.871588 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier Feb 13 15:24:34.871595 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl Feb 13 15:24:34.871602 kernel: Speculative Return Stack Overflow: IBPB-extending microcode not applied! Feb 13 15:24:34.871610 kernel: Speculative Return Stack Overflow: WARNING: See https://kernel.org/doc/html/latest/admin-guide/hw-vuln/srso.html for mitigation options. Feb 13 15:24:34.871617 kernel: Speculative Return Stack Overflow: Vulnerable: Safe RET, no microcode Feb 13 15:24:34.871624 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers' Feb 13 15:24:34.871634 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers' Feb 13 15:24:34.871641 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers' Feb 13 15:24:34.871648 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256 Feb 13 15:24:34.871655 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'compacted' format. Feb 13 15:24:34.871663 kernel: Freeing SMP alternatives memory: 32K Feb 13 15:24:34.871670 kernel: pid_max: default: 32768 minimum: 301 Feb 13 15:24:34.871677 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity Feb 13 15:24:34.871685 kernel: landlock: Up and running. Feb 13 15:24:34.871692 kernel: SELinux: Initializing. Feb 13 15:24:34.871701 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) Feb 13 15:24:34.871709 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) Feb 13 15:24:34.871716 kernel: smpboot: CPU0: AMD EPYC 7402P 24-Core Processor (family: 0x17, model: 0x31, stepping: 0x0) Feb 13 15:24:34.871723 kernel: RCU Tasks: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4. Feb 13 15:24:34.871730 kernel: RCU Tasks Rude: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4. Feb 13 15:24:34.871738 kernel: RCU Tasks Trace: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4. Feb 13 15:24:34.871745 kernel: Performance Events: Fam17h+ core perfctr, AMD PMU driver. Feb 13 15:24:34.871752 kernel: ... version: 0 Feb 13 15:24:34.871759 kernel: ... bit width: 48 Feb 13 15:24:34.871768 kernel: ... generic registers: 6 Feb 13 15:24:34.871775 kernel: ... value mask: 0000ffffffffffff Feb 13 15:24:34.871783 kernel: ... max period: 00007fffffffffff Feb 13 15:24:34.871790 kernel: ... fixed-purpose events: 0 Feb 13 15:24:34.871797 kernel: ... 
event mask: 000000000000003f Feb 13 15:24:34.871816 kernel: signal: max sigframe size: 1776 Feb 13 15:24:34.871823 kernel: rcu: Hierarchical SRCU implementation. Feb 13 15:24:34.871831 kernel: rcu: Max phase no-delay instances is 400. Feb 13 15:24:34.871838 kernel: smp: Bringing up secondary CPUs ... Feb 13 15:24:34.871848 kernel: smpboot: x86: Booting SMP configuration: Feb 13 15:24:34.871855 kernel: .... node #0, CPUs: #1 #2 #3 Feb 13 15:24:34.871862 kernel: smp: Brought up 1 node, 4 CPUs Feb 13 15:24:34.871869 kernel: smpboot: Max logical packages: 1 Feb 13 15:24:34.871876 kernel: smpboot: Total of 4 processors activated (22357.98 BogoMIPS) Feb 13 15:24:34.871883 kernel: devtmpfs: initialized Feb 13 15:24:34.871890 kernel: x86/mm: Memory block size: 128MB Feb 13 15:24:34.871897 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x00800000-0x00807fff] (32768 bytes) Feb 13 15:24:34.871905 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x0080b000-0x0080bfff] (4096 bytes) Feb 13 15:24:34.871914 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x00811000-0x008fffff] (978944 bytes) Feb 13 15:24:34.871921 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x9cb7f000-0x9cbfefff] (524288 bytes) Feb 13 15:24:34.871929 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x9ce96000-0x9ce97fff] (8192 bytes) Feb 13 15:24:34.871936 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x9cf60000-0x9cffffff] (655360 bytes) Feb 13 15:24:34.871943 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns Feb 13 15:24:34.871951 kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear) Feb 13 15:24:34.871958 kernel: pinctrl core: initialized pinctrl subsystem Feb 13 15:24:34.871965 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family Feb 13 15:24:34.871972 kernel: audit: initializing netlink subsys (disabled) Feb 13 15:24:34.871982 kernel: audit: type=2000 audit(1739460274.781:1): state=initialized audit_enabled=0 res=1 Feb 13 15:24:34.871989 kernel: thermal_sys: Registered thermal governor 'step_wise' Feb 13 15:24:34.871996 kernel: thermal_sys: Registered thermal governor 'user_space' Feb 13 15:24:34.872003 kernel: cpuidle: using governor menu Feb 13 15:24:34.872010 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5 Feb 13 15:24:34.872017 kernel: dca service started, version 1.12.1 Feb 13 15:24:34.872025 kernel: PCI: MMCONFIG for domain 0000 [bus 00-ff] at [mem 0xe0000000-0xefffffff] (base 0xe0000000) Feb 13 15:24:34.872032 kernel: PCI: Using configuration type 1 for base access Feb 13 15:24:34.872039 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible. 
Feb 13 15:24:34.872049 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages Feb 13 15:24:34.872056 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page Feb 13 15:24:34.872063 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages Feb 13 15:24:34.872070 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page Feb 13 15:24:34.872077 kernel: ACPI: Added _OSI(Module Device) Feb 13 15:24:34.872085 kernel: ACPI: Added _OSI(Processor Device) Feb 13 15:24:34.872092 kernel: ACPI: Added _OSI(3.0 _SCP Extensions) Feb 13 15:24:34.872099 kernel: ACPI: Added _OSI(Processor Aggregator Device) Feb 13 15:24:34.872106 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded Feb 13 15:24:34.872115 kernel: ACPI: _OSC evaluation for CPUs failed, trying _PDC Feb 13 15:24:34.872123 kernel: ACPI: Interpreter enabled Feb 13 15:24:34.872130 kernel: ACPI: PM: (supports S0 S3 S5) Feb 13 15:24:34.872137 kernel: ACPI: Using IOAPIC for interrupt routing Feb 13 15:24:34.872144 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug Feb 13 15:24:34.872151 kernel: PCI: Using E820 reservations for host bridge windows Feb 13 15:24:34.872159 kernel: ACPI: Enabled 2 GPEs in block 00 to 3F Feb 13 15:24:34.872166 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff]) Feb 13 15:24:34.872339 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3] Feb 13 15:24:34.872472 kernel: acpi PNP0A08:00: _OSC: platform does not support [PCIeHotplug LTR] Feb 13 15:24:34.872605 kernel: acpi PNP0A08:00: _OSC: OS now controls [PME AER PCIeCapability] Feb 13 15:24:34.872615 kernel: PCI host bridge to bus 0000:00 Feb 13 15:24:34.872738 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window] Feb 13 15:24:34.872867 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window] Feb 13 15:24:34.872979 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window] Feb 13 15:24:34.873093 kernel: pci_bus 0000:00: root bus resource [mem 0x9d000000-0xdfffffff window] Feb 13 15:24:34.873202 kernel: pci_bus 0000:00: root bus resource [mem 0xf0000000-0xfebfffff window] Feb 13 15:24:34.873311 kernel: pci_bus 0000:00: root bus resource [mem 0x380000000000-0x3807ffffffff window] Feb 13 15:24:34.873419 kernel: pci_bus 0000:00: root bus resource [bus 00-ff] Feb 13 15:24:34.873570 kernel: pci 0000:00:00.0: [8086:29c0] type 00 class 0x060000 Feb 13 15:24:34.873707 kernel: pci 0000:00:01.0: [1234:1111] type 00 class 0x030000 Feb 13 15:24:34.873848 kernel: pci 0000:00:01.0: reg 0x10: [mem 0xc0000000-0xc0ffffff pref] Feb 13 15:24:34.873971 kernel: pci 0000:00:01.0: reg 0x18: [mem 0xc1044000-0xc1044fff] Feb 13 15:24:34.874091 kernel: pci 0000:00:01.0: reg 0x30: [mem 0xffff0000-0xffffffff pref] Feb 13 15:24:34.874210 kernel: pci 0000:00:01.0: BAR 0: assigned to efifb Feb 13 15:24:34.874329 kernel: pci 0000:00:01.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff] Feb 13 15:24:34.874458 kernel: pci 0000:00:02.0: [1af4:1005] type 00 class 0x00ff00 Feb 13 15:24:34.874588 kernel: pci 0000:00:02.0: reg 0x10: [io 0x6100-0x611f] Feb 13 15:24:34.874713 kernel: pci 0000:00:02.0: reg 0x14: [mem 0xc1043000-0xc1043fff] Feb 13 15:24:34.874852 kernel: pci 0000:00:02.0: reg 0x20: [mem 0x380000000000-0x380000003fff 64bit pref] Feb 13 15:24:34.874982 kernel: pci 0000:00:03.0: [1af4:1001] type 00 class 0x010000 Feb 13 15:24:34.875103 kernel: pci 0000:00:03.0: reg 0x10: [io 0x6000-0x607f] Feb 
13 15:24:34.875225 kernel: pci 0000:00:03.0: reg 0x14: [mem 0xc1042000-0xc1042fff] Feb 13 15:24:34.875347 kernel: pci 0000:00:03.0: reg 0x20: [mem 0x380000004000-0x380000007fff 64bit pref] Feb 13 15:24:34.875475 kernel: pci 0000:00:04.0: [1af4:1000] type 00 class 0x020000 Feb 13 15:24:34.875614 kernel: pci 0000:00:04.0: reg 0x10: [io 0x60e0-0x60ff] Feb 13 15:24:34.875735 kernel: pci 0000:00:04.0: reg 0x14: [mem 0xc1041000-0xc1041fff] Feb 13 15:24:34.875888 kernel: pci 0000:00:04.0: reg 0x20: [mem 0x380000008000-0x38000000bfff 64bit pref] Feb 13 15:24:34.876014 kernel: pci 0000:00:04.0: reg 0x30: [mem 0xfffc0000-0xffffffff pref] Feb 13 15:24:34.876142 kernel: pci 0000:00:1f.0: [8086:2918] type 00 class 0x060100 Feb 13 15:24:34.876262 kernel: pci 0000:00:1f.0: quirk: [io 0x0600-0x067f] claimed by ICH6 ACPI/GPIO/TCO Feb 13 15:24:34.876388 kernel: pci 0000:00:1f.2: [8086:2922] type 00 class 0x010601 Feb 13 15:24:34.876512 kernel: pci 0000:00:1f.2: reg 0x20: [io 0x60c0-0x60df] Feb 13 15:24:34.876642 kernel: pci 0000:00:1f.2: reg 0x24: [mem 0xc1040000-0xc1040fff] Feb 13 15:24:34.876792 kernel: pci 0000:00:1f.3: [8086:2930] type 00 class 0x0c0500 Feb 13 15:24:34.877030 kernel: pci 0000:00:1f.3: reg 0x20: [io 0x6080-0x60bf] Feb 13 15:24:34.877041 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10 Feb 13 15:24:34.877049 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10 Feb 13 15:24:34.877056 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11 Feb 13 15:24:34.877067 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11 Feb 13 15:24:34.877075 kernel: ACPI: PCI: Interrupt link LNKE configured for IRQ 10 Feb 13 15:24:34.877082 kernel: ACPI: PCI: Interrupt link LNKF configured for IRQ 10 Feb 13 15:24:34.877089 kernel: ACPI: PCI: Interrupt link LNKG configured for IRQ 11 Feb 13 15:24:34.877096 kernel: ACPI: PCI: Interrupt link LNKH configured for IRQ 11 Feb 13 15:24:34.877103 kernel: ACPI: PCI: Interrupt link GSIA configured for IRQ 16 Feb 13 15:24:34.877111 kernel: ACPI: PCI: Interrupt link GSIB configured for IRQ 17 Feb 13 15:24:34.877118 kernel: ACPI: PCI: Interrupt link GSIC configured for IRQ 18 Feb 13 15:24:34.877125 kernel: ACPI: PCI: Interrupt link GSID configured for IRQ 19 Feb 13 15:24:34.877135 kernel: ACPI: PCI: Interrupt link GSIE configured for IRQ 20 Feb 13 15:24:34.877142 kernel: ACPI: PCI: Interrupt link GSIF configured for IRQ 21 Feb 13 15:24:34.877149 kernel: ACPI: PCI: Interrupt link GSIG configured for IRQ 22 Feb 13 15:24:34.877156 kernel: ACPI: PCI: Interrupt link GSIH configured for IRQ 23 Feb 13 15:24:34.877163 kernel: iommu: Default domain type: Translated Feb 13 15:24:34.877171 kernel: iommu: DMA domain TLB invalidation policy: lazy mode Feb 13 15:24:34.877178 kernel: efivars: Registered efivars operations Feb 13 15:24:34.877185 kernel: PCI: Using ACPI for IRQ routing Feb 13 15:24:34.877192 kernel: PCI: pci_cache_line_size set to 64 bytes Feb 13 15:24:34.877201 kernel: e820: reserve RAM buffer [mem 0x0080b000-0x008fffff] Feb 13 15:24:34.877208 kernel: e820: reserve RAM buffer [mem 0x00811000-0x008fffff] Feb 13 15:24:34.877215 kernel: e820: reserve RAM buffer [mem 0x9b314018-0x9bffffff] Feb 13 15:24:34.877222 kernel: e820: reserve RAM buffer [mem 0x9b351018-0x9bffffff] Feb 13 15:24:34.877230 kernel: e820: reserve RAM buffer [mem 0x9bd3f000-0x9bffffff] Feb 13 15:24:34.877237 kernel: e820: reserve RAM buffer [mem 0x9c8ed000-0x9fffffff] Feb 13 15:24:34.877244 kernel: e820: reserve RAM buffer [mem 0x9ce92000-0x9fffffff] Feb 13 15:24:34.877251 
kernel: e820: reserve RAM buffer [mem 0x9cedc000-0x9fffffff] Feb 13 15:24:34.877368 kernel: pci 0000:00:01.0: vgaarb: setting as boot VGA device Feb 13 15:24:34.877489 kernel: pci 0000:00:01.0: vgaarb: bridge control possible Feb 13 15:24:34.877615 kernel: pci 0000:00:01.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none Feb 13 15:24:34.877626 kernel: vgaarb: loaded Feb 13 15:24:34.877634 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0 Feb 13 15:24:34.877642 kernel: hpet0: 3 comparators, 64-bit 100.000000 MHz counter Feb 13 15:24:34.877650 kernel: clocksource: Switched to clocksource kvm-clock Feb 13 15:24:34.877658 kernel: VFS: Disk quotas dquot_6.6.0 Feb 13 15:24:34.877666 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes) Feb 13 15:24:34.877677 kernel: pnp: PnP ACPI init Feb 13 15:24:34.877864 kernel: system 00:05: [mem 0xe0000000-0xefffffff window] has been reserved Feb 13 15:24:34.877875 kernel: pnp: PnP ACPI: found 6 devices Feb 13 15:24:34.877883 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns Feb 13 15:24:34.877890 kernel: NET: Registered PF_INET protocol family Feb 13 15:24:34.877916 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear) Feb 13 15:24:34.877925 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear) Feb 13 15:24:34.877933 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear) Feb 13 15:24:34.877942 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear) Feb 13 15:24:34.877950 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear) Feb 13 15:24:34.877957 kernel: TCP: Hash tables configured (established 32768 bind 32768) Feb 13 15:24:34.877965 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear) Feb 13 15:24:34.877973 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear) Feb 13 15:24:34.877980 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family Feb 13 15:24:34.877987 kernel: NET: Registered PF_XDP protocol family Feb 13 15:24:34.878109 kernel: pci 0000:00:04.0: can't claim BAR 6 [mem 0xfffc0000-0xffffffff pref]: no compatible bridge window Feb 13 15:24:34.878280 kernel: pci 0000:00:04.0: BAR 6: assigned [mem 0x9d000000-0x9d03ffff pref] Feb 13 15:24:34.878400 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window] Feb 13 15:24:34.878538 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window] Feb 13 15:24:34.878669 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window] Feb 13 15:24:34.878897 kernel: pci_bus 0000:00: resource 7 [mem 0x9d000000-0xdfffffff window] Feb 13 15:24:34.879039 kernel: pci_bus 0000:00: resource 8 [mem 0xf0000000-0xfebfffff window] Feb 13 15:24:34.879178 kernel: pci_bus 0000:00: resource 9 [mem 0x380000000000-0x3807ffffffff window] Feb 13 15:24:34.879190 kernel: PCI: CLS 0 bytes, default 64 Feb 13 15:24:34.879199 kernel: Initialise system trusted keyrings Feb 13 15:24:34.879211 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0 Feb 13 15:24:34.879219 kernel: Key type asymmetric registered Feb 13 15:24:34.879227 kernel: Asymmetric key parser 'x509' registered Feb 13 15:24:34.879234 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 251) Feb 13 15:24:34.879242 kernel: io scheduler mq-deadline registered Feb 13 15:24:34.879249 kernel: io scheduler kyber registered Feb 13 15:24:34.879257 kernel: io scheduler bfq registered Feb 13 
15:24:34.879264 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00 Feb 13 15:24:34.879272 kernel: ACPI: \_SB_.GSIG: Enabled at IRQ 22 Feb 13 15:24:34.879283 kernel: ACPI: \_SB_.GSIH: Enabled at IRQ 23 Feb 13 15:24:34.879293 kernel: ACPI: \_SB_.GSIE: Enabled at IRQ 20 Feb 13 15:24:34.879300 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled Feb 13 15:24:34.879308 kernel: 00:03: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A Feb 13 15:24:34.879316 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12 Feb 13 15:24:34.879323 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1 Feb 13 15:24:34.879333 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12 Feb 13 15:24:34.879457 kernel: rtc_cmos 00:04: RTC can wake from S4 Feb 13 15:24:34.879470 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0 Feb 13 15:24:34.879591 kernel: rtc_cmos 00:04: registered as rtc0 Feb 13 15:24:34.879704 kernel: rtc_cmos 00:04: setting system clock to 2025-02-13T15:24:34 UTC (1739460274) Feb 13 15:24:34.879834 kernel: rtc_cmos 00:04: alarms up to one day, y3k, 242 bytes nvram Feb 13 15:24:34.879845 kernel: amd_pstate: the _CPC object is not present in SBIOS or ACPI disabled Feb 13 15:24:34.879859 kernel: efifb: probing for efifb Feb 13 15:24:34.879867 kernel: efifb: framebuffer at 0xc0000000, using 4000k, total 4000k Feb 13 15:24:34.879875 kernel: efifb: mode is 1280x800x32, linelength=5120, pages=1 Feb 13 15:24:34.879882 kernel: efifb: scrolling: redraw Feb 13 15:24:34.879890 kernel: efifb: Truecolor: size=8:8:8:8, shift=24:16:8:0 Feb 13 15:24:34.879898 kernel: Console: switching to colour frame buffer device 160x50 Feb 13 15:24:34.879905 kernel: fb0: EFI VGA frame buffer device Feb 13 15:24:34.879913 kernel: pstore: Using crash dump compression: deflate Feb 13 15:24:34.879921 kernel: pstore: Registered efi_pstore as persistent store backend Feb 13 15:24:34.879929 kernel: NET: Registered PF_INET6 protocol family Feb 13 15:24:34.879939 kernel: Segment Routing with IPv6 Feb 13 15:24:34.879946 kernel: In-situ OAM (IOAM) with IPv6 Feb 13 15:24:34.879954 kernel: NET: Registered PF_PACKET protocol family Feb 13 15:24:34.879962 kernel: Key type dns_resolver registered Feb 13 15:24:34.879969 kernel: IPI shorthand broadcast: enabled Feb 13 15:24:34.879977 kernel: sched_clock: Marking stable (589002686, 158425496)->(804091899, -56663717) Feb 13 15:24:34.879985 kernel: registered taskstats version 1 Feb 13 15:24:34.879993 kernel: Loading compiled-in X.509 certificates Feb 13 15:24:34.880001 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.71-flatcar: 9ec780e1db69d46be90bbba73ae62b0106e27ae0' Feb 13 15:24:34.880011 kernel: Key type .fscrypt registered Feb 13 15:24:34.880018 kernel: Key type fscrypt-provisioning registered Feb 13 15:24:34.880026 kernel: ima: No TPM chip found, activating TPM-bypass! 
Feb 13 15:24:34.880034 kernel: ima: Allocated hash algorithm: sha1 Feb 13 15:24:34.880041 kernel: ima: No architecture policies found Feb 13 15:24:34.880049 kernel: clk: Disabling unused clocks Feb 13 15:24:34.880057 kernel: Freeing unused kernel image (initmem) memory: 42976K Feb 13 15:24:34.880065 kernel: Write protecting the kernel read-only data: 36864k Feb 13 15:24:34.880075 kernel: Freeing unused kernel image (rodata/data gap) memory: 1840K Feb 13 15:24:34.880083 kernel: Run /init as init process Feb 13 15:24:34.880090 kernel: with arguments: Feb 13 15:24:34.880098 kernel: /init Feb 13 15:24:34.880105 kernel: with environment: Feb 13 15:24:34.880113 kernel: HOME=/ Feb 13 15:24:34.880120 kernel: TERM=linux Feb 13 15:24:34.880128 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a Feb 13 15:24:34.880138 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Feb 13 15:24:34.880150 systemd[1]: Detected virtualization kvm. Feb 13 15:24:34.880158 systemd[1]: Detected architecture x86-64. Feb 13 15:24:34.880166 systemd[1]: Running in initrd. Feb 13 15:24:34.880174 systemd[1]: No hostname configured, using default hostname. Feb 13 15:24:34.880182 systemd[1]: Hostname set to . Feb 13 15:24:34.880191 systemd[1]: Initializing machine ID from VM UUID. Feb 13 15:24:34.880199 systemd[1]: Queued start job for default target initrd.target. Feb 13 15:24:34.880207 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Feb 13 15:24:34.880218 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Feb 13 15:24:34.880227 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM... Feb 13 15:24:34.880235 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Feb 13 15:24:34.880244 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT... Feb 13 15:24:34.880252 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A... Feb 13 15:24:34.880262 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132... Feb 13 15:24:34.880273 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr... Feb 13 15:24:34.880282 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Feb 13 15:24:34.880290 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Feb 13 15:24:34.880298 systemd[1]: Reached target paths.target - Path Units. Feb 13 15:24:34.880306 systemd[1]: Reached target slices.target - Slice Units. Feb 13 15:24:34.880314 systemd[1]: Reached target swap.target - Swaps. Feb 13 15:24:34.880323 systemd[1]: Reached target timers.target - Timer Units. Feb 13 15:24:34.880331 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket. Feb 13 15:24:34.880339 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket. Feb 13 15:24:34.880349 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). Feb 13 15:24:34.880358 systemd[1]: Listening on systemd-journald.socket - Journal Socket. 
Feb 13 15:24:34.880366 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Feb 13 15:24:34.880374 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Feb 13 15:24:34.880382 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Feb 13 15:24:34.880391 systemd[1]: Reached target sockets.target - Socket Units. Feb 13 15:24:34.880399 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup... Feb 13 15:24:34.880407 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Feb 13 15:24:34.880418 systemd[1]: Finished network-cleanup.service - Network Cleanup. Feb 13 15:24:34.880426 systemd[1]: Starting systemd-fsck-usr.service... Feb 13 15:24:34.880434 systemd[1]: Starting systemd-journald.service - Journal Service... Feb 13 15:24:34.880442 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Feb 13 15:24:34.880450 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Feb 13 15:24:34.880459 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup. Feb 13 15:24:34.880467 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Feb 13 15:24:34.880475 systemd[1]: Finished systemd-fsck-usr.service. Feb 13 15:24:34.880505 systemd-journald[194]: Collecting audit messages is disabled. Feb 13 15:24:34.880527 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Feb 13 15:24:34.880536 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Feb 13 15:24:34.880544 systemd-journald[194]: Journal started Feb 13 15:24:34.880571 systemd-journald[194]: Runtime Journal (/run/log/journal/2b0589a7264e4321a31fe63a93ad136a) is 6.0M, max 48.3M, 42.2M free. Feb 13 15:24:34.890882 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Feb 13 15:24:34.892234 systemd-modules-load[195]: Inserted module 'overlay' Feb 13 15:24:34.894401 systemd[1]: Started systemd-journald.service - Journal Service. Feb 13 15:24:34.893876 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Feb 13 15:24:34.899188 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Feb 13 15:24:34.899986 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Feb 13 15:24:34.903651 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Feb 13 15:24:34.915408 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Feb 13 15:24:34.922935 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Feb 13 15:24:34.929826 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. Feb 13 15:24:34.931828 kernel: Bridge firewalling registered Feb 13 15:24:34.931877 systemd-modules-load[195]: Inserted module 'br_netfilter' Feb 13 15:24:34.931985 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... Feb 13 15:24:34.934058 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Feb 13 15:24:34.937154 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... 
Feb 13 15:24:34.943131 dracut-cmdline[225]: dracut-dracut-053 Feb 13 15:24:34.946025 dracut-cmdline[225]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=cd73eba291b8356dfc2c39f651cabef9206685f772c8949188fd366788d672c2 Feb 13 15:24:34.949764 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Feb 13 15:24:34.958970 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Feb 13 15:24:34.988963 systemd-resolved[249]: Positive Trust Anchors: Feb 13 15:24:34.988979 systemd-resolved[249]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Feb 13 15:24:34.989011 systemd-resolved[249]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Feb 13 15:24:34.991470 systemd-resolved[249]: Defaulting to hostname 'linux'. Feb 13 15:24:34.992506 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Feb 13 15:24:34.998651 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Feb 13 15:24:35.043837 kernel: SCSI subsystem initialized Feb 13 15:24:35.052829 kernel: Loading iSCSI transport class v2.0-870. Feb 13 15:24:35.074836 kernel: iscsi: registered transport (tcp) Feb 13 15:24:35.095978 kernel: iscsi: registered transport (qla4xxx) Feb 13 15:24:35.096001 kernel: QLogic iSCSI HBA Driver Feb 13 15:24:35.149848 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. Feb 13 15:24:35.161933 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... Feb 13 15:24:35.186036 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. Feb 13 15:24:35.186092 kernel: device-mapper: uevent: version 1.0.3 Feb 13 15:24:35.187082 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com Feb 13 15:24:35.229828 kernel: raid6: avx2x4 gen() 20487 MB/s Feb 13 15:24:35.246823 kernel: raid6: avx2x2 gen() 18814 MB/s Feb 13 15:24:35.274893 kernel: raid6: avx2x1 gen() 15703 MB/s Feb 13 15:24:35.274923 kernel: raid6: using algorithm avx2x4 gen() 20487 MB/s Feb 13 15:24:35.292944 kernel: raid6: .... xor() 7125 MB/s, rmw enabled Feb 13 15:24:35.292981 kernel: raid6: using avx2x2 recovery algorithm Feb 13 15:24:35.313842 kernel: xor: automatically using best checksumming function avx Feb 13 15:24:35.466834 kernel: Btrfs loaded, zoned=no, fsverity=no Feb 13 15:24:35.481161 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. Feb 13 15:24:35.499008 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Feb 13 15:24:35.514981 systemd-udevd[414]: Using default interface naming scheme 'v255'. Feb 13 15:24:35.520448 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. 
Feb 13 15:24:35.529213 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... Feb 13 15:24:35.545447 dracut-pre-trigger[420]: rd.md=0: removing MD RAID activation Feb 13 15:24:35.580857 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. Feb 13 15:24:35.597146 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Feb 13 15:24:35.685214 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Feb 13 15:24:35.698598 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... Feb 13 15:24:35.715463 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. Feb 13 15:24:35.717616 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. Feb 13 15:24:35.721260 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Feb 13 15:24:35.723939 systemd[1]: Reached target remote-fs.target - Remote File Systems. Feb 13 15:24:35.733162 kernel: virtio_blk virtio1: 4/0/0 default/read/poll queues Feb 13 15:24:35.753427 kernel: virtio_blk virtio1: [vda] 19775488 512-byte logical blocks (10.1 GB/9.43 GiB) Feb 13 15:24:35.753603 kernel: cryptd: max_cpu_qlen set to 1000 Feb 13 15:24:35.753615 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. Feb 13 15:24:35.753626 kernel: GPT:9289727 != 19775487 Feb 13 15:24:35.753636 kernel: GPT:Alternate GPT header not at the end of the disk. Feb 13 15:24:35.753646 kernel: GPT:9289727 != 19775487 Feb 13 15:24:35.753655 kernel: GPT: Use GNU Parted to correct GPT errors. Feb 13 15:24:35.753665 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Feb 13 15:24:35.735037 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... Feb 13 15:24:35.745078 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Feb 13 15:24:35.745193 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Feb 13 15:24:35.747414 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Feb 13 15:24:35.751927 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Feb 13 15:24:35.752095 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Feb 13 15:24:35.753895 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... Feb 13 15:24:35.772190 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Feb 13 15:24:35.778500 kernel: BTRFS: device fsid 966d6124-9067-4089-b000-5e99065fe7e2 devid 1 transid 37 /dev/vda3 scanned by (udev-worker) (462) Feb 13 15:24:35.772641 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. Feb 13 15:24:35.785084 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/vda6 scanned by (udev-worker) (475) Feb 13 15:24:35.785110 kernel: AVX2 version of gcm_enc/dec engaged. Feb 13 15:24:35.785124 kernel: AES CTR mode by8 optimization enabled Feb 13 15:24:35.786958 kernel: libata version 3.00 loaded. Feb 13 15:24:35.798122 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT. 
Feb 13 15:24:35.801967 kernel: ahci 0000:00:1f.2: version 3.0 Feb 13 15:24:35.815979 kernel: ACPI: \_SB_.GSIA: Enabled at IRQ 16 Feb 13 15:24:35.816004 kernel: ahci 0000:00:1f.2: AHCI 0001.0000 32 slots 6 ports 1.5 Gbps 0x3f impl SATA mode Feb 13 15:24:35.816153 kernel: ahci 0000:00:1f.2: flags: 64bit ncq only Feb 13 15:24:35.816302 kernel: scsi host0: ahci Feb 13 15:24:35.816457 kernel: scsi host1: ahci Feb 13 15:24:35.816611 kernel: scsi host2: ahci Feb 13 15:24:35.816782 kernel: scsi host3: ahci Feb 13 15:24:35.816948 kernel: scsi host4: ahci Feb 13 15:24:35.817095 kernel: scsi host5: ahci Feb 13 15:24:35.817238 kernel: ata1: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040100 irq 34 Feb 13 15:24:35.817249 kernel: ata2: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040180 irq 34 Feb 13 15:24:35.817259 kernel: ata3: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040200 irq 34 Feb 13 15:24:35.817268 kernel: ata4: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040280 irq 34 Feb 13 15:24:35.817278 kernel: ata5: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040300 irq 34 Feb 13 15:24:35.817288 kernel: ata6: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040380 irq 34 Feb 13 15:24:35.807150 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM. Feb 13 15:24:35.816945 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A. Feb 13 15:24:35.818436 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132. Feb 13 15:24:35.826635 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. Feb 13 15:24:35.839945 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... Feb 13 15:24:35.841349 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Feb 13 15:24:35.841404 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Feb 13 15:24:35.844470 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... Feb 13 15:24:35.846572 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Feb 13 15:24:35.854140 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Feb 13 15:24:35.854191 disk-uuid[554]: Primary Header is updated. Feb 13 15:24:35.854191 disk-uuid[554]: Secondary Entries is updated. Feb 13 15:24:35.854191 disk-uuid[554]: Secondary Header is updated. Feb 13 15:24:35.866178 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Feb 13 15:24:35.879132 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Feb 13 15:24:35.898694 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. 
Feb 13 15:24:36.126856 kernel: ata3: SATA link up 1.5 Gbps (SStatus 113 SControl 300) Feb 13 15:24:36.126947 kernel: ata4: SATA link down (SStatus 0 SControl 300) Feb 13 15:24:36.127823 kernel: ata5: SATA link down (SStatus 0 SControl 300) Feb 13 15:24:36.128843 kernel: ata1: SATA link down (SStatus 0 SControl 300) Feb 13 15:24:36.128924 kernel: ata3.00: ATAPI: QEMU DVD-ROM, 2.5+, max UDMA/100 Feb 13 15:24:36.129828 kernel: ata3.00: applying bridge limits Feb 13 15:24:36.130827 kernel: ata2: SATA link down (SStatus 0 SControl 300) Feb 13 15:24:36.130849 kernel: ata6: SATA link down (SStatus 0 SControl 300) Feb 13 15:24:36.131833 kernel: ata3.00: configured for UDMA/100 Feb 13 15:24:36.132836 kernel: scsi 2:0:0:0: CD-ROM QEMU QEMU DVD-ROM 2.5+ PQ: 0 ANSI: 5 Feb 13 15:24:36.194835 kernel: sr 2:0:0:0: [sr0] scsi3-mmc drive: 4x/4x cd/rw xa/form2 tray Feb 13 15:24:36.207679 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20 Feb 13 15:24:36.207701 kernel: sr 2:0:0:0: Attached scsi CD-ROM sr0 Feb 13 15:24:36.861835 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Feb 13 15:24:36.862200 disk-uuid[557]: The operation has completed successfully. Feb 13 15:24:36.892278 systemd[1]: disk-uuid.service: Deactivated successfully. Feb 13 15:24:36.892409 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. Feb 13 15:24:36.913128 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr... Feb 13 15:24:36.915963 sh[597]: Success Feb 13 15:24:36.927822 kernel: device-mapper: verity: sha256 using implementation "sha256-ni" Feb 13 15:24:36.962485 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr. Feb 13 15:24:36.978747 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr... Feb 13 15:24:36.981768 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr. Feb 13 15:24:37.008701 kernel: BTRFS info (device dm-0): first mount of filesystem 966d6124-9067-4089-b000-5e99065fe7e2 Feb 13 15:24:37.008751 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm Feb 13 15:24:37.008763 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead Feb 13 15:24:37.010461 kernel: BTRFS info (device dm-0): disabling log replay at mount time Feb 13 15:24:37.010478 kernel: BTRFS info (device dm-0): using free space tree Feb 13 15:24:37.015316 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr. Feb 13 15:24:37.016029 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. Feb 13 15:24:37.023985 systemd[1]: Starting ignition-setup.service - Ignition (setup)... Feb 13 15:24:37.026202 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... Feb 13 15:24:37.033400 kernel: BTRFS info (device vda6): first mount of filesystem 83f602a1-06be-4b8b-b461-5e4f70db8da1 Feb 13 15:24:37.033428 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Feb 13 15:24:37.033445 kernel: BTRFS info (device vda6): using free space tree Feb 13 15:24:37.035827 kernel: BTRFS info (device vda6): auto enabling async discard Feb 13 15:24:37.044544 systemd[1]: mnt-oem.mount: Deactivated successfully. Feb 13 15:24:37.046715 kernel: BTRFS info (device vda6): last unmount of filesystem 83f602a1-06be-4b8b-b461-5e4f70db8da1 Feb 13 15:24:37.056304 systemd[1]: Finished ignition-setup.service - Ignition (setup). 
Feb 13 15:24:37.062006 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... Feb 13 15:24:37.132297 ignition[681]: Ignition 2.20.0 Feb 13 15:24:37.132308 ignition[681]: Stage: fetch-offline Feb 13 15:24:37.132347 ignition[681]: no configs at "/usr/lib/ignition/base.d" Feb 13 15:24:37.132357 ignition[681]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Feb 13 15:24:37.132448 ignition[681]: parsed url from cmdline: "" Feb 13 15:24:37.132452 ignition[681]: no config URL provided Feb 13 15:24:37.132458 ignition[681]: reading system config file "/usr/lib/ignition/user.ign" Feb 13 15:24:37.132466 ignition[681]: no config at "/usr/lib/ignition/user.ign" Feb 13 15:24:37.132503 ignition[681]: op(1): [started] loading QEMU firmware config module Feb 13 15:24:37.132508 ignition[681]: op(1): executing: "modprobe" "qemu_fw_cfg" Feb 13 15:24:37.139569 ignition[681]: op(1): [finished] loading QEMU firmware config module Feb 13 15:24:37.139592 ignition[681]: QEMU firmware config was not found. Ignoring... Feb 13 15:24:37.141956 ignition[681]: parsing config with SHA512: 49f31eba896fe29678f2f8d5d067baee5972e5590c2570ae9596c46616c46d1522751fad38b619b38167a59b27f0ec161d5acaeaffb4ba5357085053d6d4e3c2 Feb 13 15:24:37.147399 unknown[681]: fetched base config from "system" Feb 13 15:24:37.147416 unknown[681]: fetched user config from "qemu" Feb 13 15:24:37.147761 ignition[681]: fetch-offline: fetch-offline passed Feb 13 15:24:37.147852 ignition[681]: Ignition finished successfully Feb 13 15:24:37.150109 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). Feb 13 15:24:37.164128 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Feb 13 15:24:37.175962 systemd[1]: Starting systemd-networkd.service - Network Configuration... Feb 13 15:24:37.198241 systemd-networkd[786]: lo: Link UP Feb 13 15:24:37.198252 systemd-networkd[786]: lo: Gained carrier Feb 13 15:24:37.199947 systemd-networkd[786]: Enumeration completed Feb 13 15:24:37.200068 systemd[1]: Started systemd-networkd.service - Network Configuration. Feb 13 15:24:37.200336 systemd-networkd[786]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Feb 13 15:24:37.200340 systemd-networkd[786]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Feb 13 15:24:37.201212 systemd-networkd[786]: eth0: Link UP Feb 13 15:24:37.201216 systemd-networkd[786]: eth0: Gained carrier Feb 13 15:24:37.201222 systemd-networkd[786]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Feb 13 15:24:37.202313 systemd[1]: Reached target network.target - Network. Feb 13 15:24:37.204215 systemd[1]: ignition-fetch.service - Ignition (fetch) was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json). Feb 13 15:24:37.211941 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... 
Feb 13 15:24:37.224861 systemd-networkd[786]: eth0: DHCPv4 address 10.0.0.50/16, gateway 10.0.0.1 acquired from 10.0.0.1 Feb 13 15:24:37.226606 ignition[789]: Ignition 2.20.0 Feb 13 15:24:37.226617 ignition[789]: Stage: kargs Feb 13 15:24:37.226766 ignition[789]: no configs at "/usr/lib/ignition/base.d" Feb 13 15:24:37.226778 ignition[789]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Feb 13 15:24:37.227405 ignition[789]: kargs: kargs passed Feb 13 15:24:37.231127 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). Feb 13 15:24:37.227443 ignition[789]: Ignition finished successfully Feb 13 15:24:37.239046 systemd[1]: Starting ignition-disks.service - Ignition (disks)... Feb 13 15:24:37.254034 ignition[799]: Ignition 2.20.0 Feb 13 15:24:37.254045 ignition[799]: Stage: disks Feb 13 15:24:37.254203 ignition[799]: no configs at "/usr/lib/ignition/base.d" Feb 13 15:24:37.254214 ignition[799]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Feb 13 15:24:37.257719 ignition[799]: disks: disks passed Feb 13 15:24:37.257767 ignition[799]: Ignition finished successfully Feb 13 15:24:37.260692 systemd[1]: Finished ignition-disks.service - Ignition (disks). Feb 13 15:24:37.262781 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. Feb 13 15:24:37.263043 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Feb 13 15:24:37.263369 systemd[1]: Reached target local-fs.target - Local File Systems. Feb 13 15:24:37.263712 systemd[1]: Reached target sysinit.target - System Initialization. Feb 13 15:24:37.269744 systemd[1]: Reached target basic.target - Basic System. Feb 13 15:24:37.280913 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... Feb 13 15:24:37.294944 systemd-fsck[810]: ROOT: clean, 14/553520 files, 52654/553472 blocks Feb 13 15:24:37.301542 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. Feb 13 15:24:37.306941 systemd[1]: Mounting sysroot.mount - /sysroot... Feb 13 15:24:37.403851 kernel: EXT4-fs (vda9): mounted filesystem 85ed0b0d-7f0f-4eeb-80d8-6213e9fcc55d r/w with ordered data mode. Quota mode: none. Feb 13 15:24:37.403876 systemd[1]: Mounted sysroot.mount - /sysroot. Feb 13 15:24:37.405553 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. Feb 13 15:24:37.417947 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Feb 13 15:24:37.420830 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... Feb 13 15:24:37.425647 kernel: BTRFS: device label OEM devid 1 transid 15 /dev/vda6 scanned by mount (818) Feb 13 15:24:37.422416 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met. Feb 13 15:24:37.432575 kernel: BTRFS info (device vda6): first mount of filesystem 83f602a1-06be-4b8b-b461-5e4f70db8da1 Feb 13 15:24:37.432599 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Feb 13 15:24:37.432615 kernel: BTRFS info (device vda6): using free space tree Feb 13 15:24:37.432629 kernel: BTRFS info (device vda6): auto enabling async discard Feb 13 15:24:37.422460 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Feb 13 15:24:37.422498 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. 
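systemd-networkd matched eth0 against the catch-all /usr/lib/systemd/network/zz-default.network and obtained 10.0.0.50/16 over DHCP. A fallback unit of that kind is typically just a match-anything stanza with DHCP enabled; the following is a sketch of such a file, not the literal contents shipped on this image:

    [Match]
    Name=*

    [Network]
    DHCP=yes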
Feb 13 15:24:37.430365 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. Feb 13 15:24:37.433628 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Feb 13 15:24:37.437721 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... Feb 13 15:24:37.472624 initrd-setup-root[842]: cut: /sysroot/etc/passwd: No such file or directory Feb 13 15:24:37.477093 initrd-setup-root[849]: cut: /sysroot/etc/group: No such file or directory Feb 13 15:24:37.480623 initrd-setup-root[856]: cut: /sysroot/etc/shadow: No such file or directory Feb 13 15:24:37.483728 initrd-setup-root[863]: cut: /sysroot/etc/gshadow: No such file or directory Feb 13 15:24:37.572098 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. Feb 13 15:24:37.580953 systemd[1]: Starting ignition-mount.service - Ignition (mount)... Feb 13 15:24:37.582822 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... Feb 13 15:24:37.589836 kernel: BTRFS info (device vda6): last unmount of filesystem 83f602a1-06be-4b8b-b461-5e4f70db8da1 Feb 13 15:24:37.614851 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. Feb 13 15:24:37.618931 ignition[931]: INFO : Ignition 2.20.0 Feb 13 15:24:37.618931 ignition[931]: INFO : Stage: mount Feb 13 15:24:37.620536 ignition[931]: INFO : no configs at "/usr/lib/ignition/base.d" Feb 13 15:24:37.620536 ignition[931]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Feb 13 15:24:37.623189 ignition[931]: INFO : mount: mount passed Feb 13 15:24:37.623976 ignition[931]: INFO : Ignition finished successfully Feb 13 15:24:37.626613 systemd[1]: Finished ignition-mount.service - Ignition (mount). Feb 13 15:24:37.634926 systemd[1]: Starting ignition-files.service - Ignition (files)... Feb 13 15:24:38.007714 systemd[1]: sysroot-oem.mount: Deactivated successfully. Feb 13 15:24:38.019927 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Feb 13 15:24:38.025816 kernel: BTRFS: device label OEM devid 1 transid 16 /dev/vda6 scanned by mount (945) Feb 13 15:24:38.028406 kernel: BTRFS info (device vda6): first mount of filesystem 83f602a1-06be-4b8b-b461-5e4f70db8da1 Feb 13 15:24:38.028419 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Feb 13 15:24:38.028429 kernel: BTRFS info (device vda6): using free space tree Feb 13 15:24:38.030822 kernel: BTRFS info (device vda6): auto enabling async discard Feb 13 15:24:38.031782 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. 
Feb 13 15:24:38.054553 ignition[962]: INFO : Ignition 2.20.0 Feb 13 15:24:38.054553 ignition[962]: INFO : Stage: files Feb 13 15:24:38.056489 ignition[962]: INFO : no configs at "/usr/lib/ignition/base.d" Feb 13 15:24:38.056489 ignition[962]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Feb 13 15:24:38.056489 ignition[962]: DEBUG : files: compiled without relabeling support, skipping Feb 13 15:24:38.056489 ignition[962]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Feb 13 15:24:38.056489 ignition[962]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Feb 13 15:24:38.063212 ignition[962]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Feb 13 15:24:38.063212 ignition[962]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Feb 13 15:24:38.066438 unknown[962]: wrote ssh authorized keys file for user: core Feb 13 15:24:38.067687 ignition[962]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Feb 13 15:24:38.069953 ignition[962]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/home/core/install.sh" Feb 13 15:24:38.071940 ignition[962]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/home/core/install.sh" Feb 13 15:24:38.074107 ignition[962]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/etc/flatcar/update.conf" Feb 13 15:24:38.076097 ignition[962]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/etc/flatcar/update.conf" Feb 13 15:24:38.077972 ignition[962]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.30.1-x86-64.raw" Feb 13 15:24:38.080822 ignition[962]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.30.1-x86-64.raw" Feb 13 15:24:38.083594 ignition[962]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.30.1-x86-64.raw" Feb 13 15:24:38.086031 ignition[962]: INFO : files: createFilesystemsFiles: createFiles: op(6): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.30.1-x86-64.raw: attempt #1 Feb 13 15:24:38.416002 systemd-networkd[786]: eth0: Gained IPv6LL Feb 13 15:24:38.485560 ignition[962]: INFO : files: createFilesystemsFiles: createFiles: op(6): GET result: OK Feb 13 15:24:39.211408 ignition[962]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.30.1-x86-64.raw" Feb 13 15:24:39.211408 ignition[962]: INFO : files: op(7): [started] processing unit "coreos-metadata.service" Feb 13 15:24:39.215748 ignition[962]: INFO : files: op(7): op(8): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" Feb 13 15:24:39.215748 ignition[962]: INFO : files: op(7): op(8): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" Feb 13 15:24:39.215748 ignition[962]: INFO : files: op(7): [finished] processing unit "coreos-metadata.service" Feb 13 15:24:39.215748 ignition[962]: INFO : files: op(9): [started] setting preset to disabled for 
"coreos-metadata.service" Feb 13 15:24:39.294716 ignition[962]: INFO : files: op(9): op(a): [started] removing enablement symlink(s) for "coreos-metadata.service" Feb 13 15:24:39.300520 ignition[962]: INFO : files: op(9): op(a): [finished] removing enablement symlink(s) for "coreos-metadata.service" Feb 13 15:24:39.302234 ignition[962]: INFO : files: op(9): [finished] setting preset to disabled for "coreos-metadata.service" Feb 13 15:24:39.302234 ignition[962]: INFO : files: createResultFile: createFiles: op(b): [started] writing file "/sysroot/etc/.ignition-result.json" Feb 13 15:24:39.302234 ignition[962]: INFO : files: createResultFile: createFiles: op(b): [finished] writing file "/sysroot/etc/.ignition-result.json" Feb 13 15:24:39.302234 ignition[962]: INFO : files: files passed Feb 13 15:24:39.302234 ignition[962]: INFO : Ignition finished successfully Feb 13 15:24:39.303906 systemd[1]: Finished ignition-files.service - Ignition (files). Feb 13 15:24:39.312025 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... Feb 13 15:24:39.314201 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... Feb 13 15:24:39.317789 systemd[1]: ignition-quench.service: Deactivated successfully. Feb 13 15:24:39.317949 systemd[1]: Finished ignition-quench.service - Ignition (record completion). Feb 13 15:24:39.325962 initrd-setup-root-after-ignition[991]: grep: /sysroot/oem/oem-release: No such file or directory Feb 13 15:24:39.328689 initrd-setup-root-after-ignition[993]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Feb 13 15:24:39.330366 initrd-setup-root-after-ignition[993]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory Feb 13 15:24:39.332082 initrd-setup-root-after-ignition[997]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Feb 13 15:24:39.331555 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. Feb 13 15:24:39.333636 systemd[1]: Reached target ignition-complete.target - Ignition Complete. Feb 13 15:24:39.345966 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... Feb 13 15:24:39.370504 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Feb 13 15:24:39.370642 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. Feb 13 15:24:39.371856 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. Feb 13 15:24:39.374911 systemd[1]: Reached target initrd.target - Initrd Default Target. Feb 13 15:24:39.376845 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Feb 13 15:24:39.377629 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... Feb 13 15:24:39.396790 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Feb 13 15:24:39.411097 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... Feb 13 15:24:39.420213 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. Feb 13 15:24:39.421495 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. Feb 13 15:24:39.423855 systemd[1]: Stopped target timers.target - Timer Units. Feb 13 15:24:39.425853 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Feb 13 15:24:39.426044 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. 
Feb 13 15:24:39.428241 systemd[1]: Stopped target initrd.target - Initrd Default Target. Feb 13 15:24:39.429865 systemd[1]: Stopped target basic.target - Basic System. Feb 13 15:24:39.432204 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. Feb 13 15:24:39.434614 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. Feb 13 15:24:39.437034 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. Feb 13 15:24:39.439607 systemd[1]: Stopped target remote-fs.target - Remote File Systems. Feb 13 15:24:39.442163 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Feb 13 15:24:39.444865 systemd[1]: Stopped target sysinit.target - System Initialization. Feb 13 15:24:39.447336 systemd[1]: Stopped target local-fs.target - Local File Systems. Feb 13 15:24:39.450008 systemd[1]: Stopped target swap.target - Swaps. Feb 13 15:24:39.451871 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Feb 13 15:24:39.452016 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Feb 13 15:24:39.454294 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. Feb 13 15:24:39.455694 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Feb 13 15:24:39.457790 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. Feb 13 15:24:39.457921 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Feb 13 15:24:39.460129 systemd[1]: dracut-initqueue.service: Deactivated successfully. Feb 13 15:24:39.460234 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Feb 13 15:24:39.462601 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Feb 13 15:24:39.462708 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Feb 13 15:24:39.464549 systemd[1]: Stopped target paths.target - Path Units. Feb 13 15:24:39.466255 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Feb 13 15:24:39.470924 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Feb 13 15:24:39.472952 systemd[1]: Stopped target slices.target - Slice Units. Feb 13 15:24:39.474554 systemd[1]: Stopped target sockets.target - Socket Units. Feb 13 15:24:39.476473 systemd[1]: iscsid.socket: Deactivated successfully. Feb 13 15:24:39.476590 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Feb 13 15:24:39.479028 systemd[1]: iscsiuio.socket: Deactivated successfully. Feb 13 15:24:39.479112 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Feb 13 15:24:39.481190 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Feb 13 15:24:39.481310 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Feb 13 15:24:39.483300 systemd[1]: ignition-files.service: Deactivated successfully. Feb 13 15:24:39.483405 systemd[1]: Stopped ignition-files.service - Ignition (files). Feb 13 15:24:39.496999 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... Feb 13 15:24:39.498852 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Feb 13 15:24:39.500076 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Feb 13 15:24:39.500229 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Feb 13 15:24:39.501263 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. 
Feb 13 15:24:39.501395 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Feb 13 15:24:39.506324 systemd[1]: initrd-cleanup.service: Deactivated successfully. Feb 13 15:24:39.506471 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. Feb 13 15:24:39.518638 ignition[1017]: INFO : Ignition 2.20.0 Feb 13 15:24:39.518638 ignition[1017]: INFO : Stage: umount Feb 13 15:24:39.520570 ignition[1017]: INFO : no configs at "/usr/lib/ignition/base.d" Feb 13 15:24:39.520570 ignition[1017]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Feb 13 15:24:39.520570 ignition[1017]: INFO : umount: umount passed Feb 13 15:24:39.520570 ignition[1017]: INFO : Ignition finished successfully Feb 13 15:24:39.521590 systemd[1]: ignition-mount.service: Deactivated successfully. Feb 13 15:24:39.521714 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Feb 13 15:24:39.523033 systemd[1]: Stopped target network.target - Network. Feb 13 15:24:39.525195 systemd[1]: ignition-disks.service: Deactivated successfully. Feb 13 15:24:39.525249 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Feb 13 15:24:39.527127 systemd[1]: ignition-kargs.service: Deactivated successfully. Feb 13 15:24:39.527173 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Feb 13 15:24:39.529252 systemd[1]: ignition-setup.service: Deactivated successfully. Feb 13 15:24:39.529298 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Feb 13 15:24:39.531180 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Feb 13 15:24:39.531226 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Feb 13 15:24:39.533224 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Feb 13 15:24:39.535098 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Feb 13 15:24:39.538261 systemd[1]: sysroot-boot.mount: Deactivated successfully. Feb 13 15:24:39.538845 systemd-networkd[786]: eth0: DHCPv6 lease lost Feb 13 15:24:39.541621 systemd[1]: systemd-networkd.service: Deactivated successfully. Feb 13 15:24:39.541751 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Feb 13 15:24:39.544087 systemd[1]: systemd-resolved.service: Deactivated successfully. Feb 13 15:24:39.544199 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Feb 13 15:24:39.547633 systemd[1]: systemd-networkd.socket: Deactivated successfully. Feb 13 15:24:39.547689 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Feb 13 15:24:39.560911 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Feb 13 15:24:39.561900 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Feb 13 15:24:39.561954 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Feb 13 15:24:39.564203 systemd[1]: systemd-sysctl.service: Deactivated successfully. Feb 13 15:24:39.564253 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Feb 13 15:24:39.566504 systemd[1]: systemd-modules-load.service: Deactivated successfully. Feb 13 15:24:39.566555 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Feb 13 15:24:39.568963 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Feb 13 15:24:39.569012 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. 
Feb 13 15:24:39.571158 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Feb 13 15:24:39.580265 systemd[1]: network-cleanup.service: Deactivated successfully. Feb 13 15:24:39.580392 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Feb 13 15:24:39.595722 systemd[1]: systemd-udevd.service: Deactivated successfully. Feb 13 15:24:39.595934 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Feb 13 15:24:39.598161 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Feb 13 15:24:39.598211 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Feb 13 15:24:39.600164 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Feb 13 15:24:39.600206 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Feb 13 15:24:39.602175 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Feb 13 15:24:39.602224 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Feb 13 15:24:39.604303 systemd[1]: dracut-cmdline.service: Deactivated successfully. Feb 13 15:24:39.604349 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Feb 13 15:24:39.606267 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Feb 13 15:24:39.606314 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Feb 13 15:24:39.614960 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Feb 13 15:24:39.615020 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Feb 13 15:24:39.615074 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Feb 13 15:24:39.615411 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully. Feb 13 15:24:39.615464 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Feb 13 15:24:39.615783 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Feb 13 15:24:39.615840 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Feb 13 15:24:39.616213 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Feb 13 15:24:39.616253 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Feb 13 15:24:39.622900 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Feb 13 15:24:39.623006 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Feb 13 15:24:39.670632 systemd[1]: sysroot-boot.service: Deactivated successfully. Feb 13 15:24:39.670787 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Feb 13 15:24:39.672754 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Feb 13 15:24:39.674369 systemd[1]: initrd-setup-root.service: Deactivated successfully. Feb 13 15:24:39.674423 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Feb 13 15:24:39.685045 systemd[1]: Starting initrd-switch-root.service - Switch Root... Feb 13 15:24:39.691982 systemd[1]: Switching root. Feb 13 15:24:39.719719 systemd-journald[194]: Journal stopped Feb 13 15:24:40.751615 systemd-journald[194]: Received SIGTERM from PID 1 (systemd). 
Feb 13 15:24:40.751685 kernel: SELinux: policy capability network_peer_controls=1 Feb 13 15:24:40.751702 kernel: SELinux: policy capability open_perms=1 Feb 13 15:24:40.751716 kernel: SELinux: policy capability extended_socket_class=1 Feb 13 15:24:40.751736 kernel: SELinux: policy capability always_check_network=0 Feb 13 15:24:40.751755 kernel: SELinux: policy capability cgroup_seclabel=1 Feb 13 15:24:40.751769 kernel: SELinux: policy capability nnp_nosuid_transition=1 Feb 13 15:24:40.751784 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Feb 13 15:24:40.751820 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Feb 13 15:24:40.751848 kernel: audit: type=1403 audit(1739460280.017:2): auid=4294967295 ses=4294967295 lsm=selinux res=1 Feb 13 15:24:40.751863 systemd[1]: Successfully loaded SELinux policy in 39.467ms. Feb 13 15:24:40.751881 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 13.214ms. Feb 13 15:24:40.751900 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Feb 13 15:24:40.751915 systemd[1]: Detected virtualization kvm. Feb 13 15:24:40.751930 systemd[1]: Detected architecture x86-64. Feb 13 15:24:40.751944 systemd[1]: Detected first boot. Feb 13 15:24:40.751959 systemd[1]: Initializing machine ID from VM UUID. Feb 13 15:24:40.751973 zram_generator::config[1061]: No configuration found. Feb 13 15:24:40.751996 systemd[1]: Populated /etc with preset unit settings. Feb 13 15:24:40.752011 systemd[1]: initrd-switch-root.service: Deactivated successfully. Feb 13 15:24:40.752026 systemd[1]: Stopped initrd-switch-root.service - Switch Root. Feb 13 15:24:40.752042 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. Feb 13 15:24:40.752059 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. Feb 13 15:24:40.752076 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. Feb 13 15:24:40.752092 systemd[1]: Created slice system-getty.slice - Slice /system/getty. Feb 13 15:24:40.752109 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. Feb 13 15:24:40.752130 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. Feb 13 15:24:40.752147 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. Feb 13 15:24:40.752164 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. Feb 13 15:24:40.752180 systemd[1]: Created slice user.slice - User and Session Slice. Feb 13 15:24:40.752197 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Feb 13 15:24:40.752214 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Feb 13 15:24:40.752230 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. Feb 13 15:24:40.752246 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. Feb 13 15:24:40.752263 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. 
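zram_generator logged "No configuration found", so no compressed swap device was created on this boot. If one were wanted, a drop-in along these lines would normally suffice; this is a sketch assuming a reasonably current zram-generator, and key names may differ on older releases:

    # /etc/systemd/zram-generator.conf
    [zram0]
    zram-size = min(ram / 2, 4096)
    compression-algorithm = zstd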
Feb 13 15:24:40.752285 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Feb 13 15:24:40.752301 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0... Feb 13 15:24:40.752317 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Feb 13 15:24:40.752333 systemd[1]: Stopped target initrd-switch-root.target - Switch Root. Feb 13 15:24:40.752349 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems. Feb 13 15:24:40.752365 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System. Feb 13 15:24:40.752382 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. Feb 13 15:24:40.752412 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Feb 13 15:24:40.752430 systemd[1]: Reached target remote-fs.target - Remote File Systems. Feb 13 15:24:40.752446 systemd[1]: Reached target slices.target - Slice Units. Feb 13 15:24:40.752463 systemd[1]: Reached target swap.target - Swaps. Feb 13 15:24:40.752481 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. Feb 13 15:24:40.752497 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. Feb 13 15:24:40.752513 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Feb 13 15:24:40.752529 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Feb 13 15:24:40.752545 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Feb 13 15:24:40.752562 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. Feb 13 15:24:40.752581 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... Feb 13 15:24:40.752597 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... Feb 13 15:24:40.752614 systemd[1]: Mounting media.mount - External Media Directory... Feb 13 15:24:40.752630 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Feb 13 15:24:40.752646 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... Feb 13 15:24:40.752662 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... Feb 13 15:24:40.752678 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... Feb 13 15:24:40.752694 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Feb 13 15:24:40.752714 systemd[1]: Reached target machines.target - Containers. Feb 13 15:24:40.752730 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... Feb 13 15:24:40.752746 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Feb 13 15:24:40.752763 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Feb 13 15:24:40.752781 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... Feb 13 15:24:40.752817 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Feb 13 15:24:40.752846 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Feb 13 15:24:40.752866 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Feb 13 15:24:40.752882 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... 
Feb 13 15:24:40.752902 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Feb 13 15:24:40.752919 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Feb 13 15:24:40.752935 systemd[1]: systemd-fsck-root.service: Deactivated successfully. Feb 13 15:24:40.752951 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device. Feb 13 15:24:40.752967 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. Feb 13 15:24:40.752984 systemd[1]: Stopped systemd-fsck-usr.service. Feb 13 15:24:40.752999 systemd[1]: Starting systemd-journald.service - Journal Service... Feb 13 15:24:40.753015 kernel: loop: module loaded Feb 13 15:24:40.753034 kernel: fuse: init (API version 7.39) Feb 13 15:24:40.753049 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Feb 13 15:24:40.753065 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Feb 13 15:24:40.753081 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... Feb 13 15:24:40.753097 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Feb 13 15:24:40.753113 systemd[1]: verity-setup.service: Deactivated successfully. Feb 13 15:24:40.753129 systemd[1]: Stopped verity-setup.service. Feb 13 15:24:40.753146 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Feb 13 15:24:40.753184 systemd-journald[1131]: Collecting audit messages is disabled. Feb 13 15:24:40.753222 kernel: ACPI: bus type drm_connector registered Feb 13 15:24:40.753238 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. Feb 13 15:24:40.753256 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. Feb 13 15:24:40.753272 systemd-journald[1131]: Journal started Feb 13 15:24:40.753304 systemd-journald[1131]: Runtime Journal (/run/log/journal/2b0589a7264e4321a31fe63a93ad136a) is 6.0M, max 48.3M, 42.2M free. Feb 13 15:24:40.514119 systemd[1]: Queued start job for default target multi-user.target. Feb 13 15:24:40.532886 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6. Feb 13 15:24:40.533309 systemd[1]: systemd-journald.service: Deactivated successfully. Feb 13 15:24:40.755936 systemd[1]: Started systemd-journald.service - Journal Service. Feb 13 15:24:40.757256 systemd[1]: Mounted media.mount - External Media Directory. Feb 13 15:24:40.758423 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. Feb 13 15:24:40.759628 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. Feb 13 15:24:40.760858 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. Feb 13 15:24:40.762119 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. Feb 13 15:24:40.763654 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Feb 13 15:24:40.765188 systemd[1]: modprobe@configfs.service: Deactivated successfully. Feb 13 15:24:40.765370 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. Feb 13 15:24:40.766864 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Feb 13 15:24:40.767044 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Feb 13 15:24:40.768480 systemd[1]: modprobe@drm.service: Deactivated successfully. 
Feb 13 15:24:40.768658 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Feb 13 15:24:40.770351 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Feb 13 15:24:40.770579 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Feb 13 15:24:40.772252 systemd[1]: modprobe@fuse.service: Deactivated successfully. Feb 13 15:24:40.772439 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. Feb 13 15:24:40.773900 systemd[1]: modprobe@loop.service: Deactivated successfully. Feb 13 15:24:40.774075 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Feb 13 15:24:40.775526 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Feb 13 15:24:40.777329 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Feb 13 15:24:40.778924 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. Feb 13 15:24:40.794359 systemd[1]: Reached target network-pre.target - Preparation for Network. Feb 13 15:24:40.803897 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... Feb 13 15:24:40.806238 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... Feb 13 15:24:40.807395 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Feb 13 15:24:40.807434 systemd[1]: Reached target local-fs.target - Local File Systems. Feb 13 15:24:40.809610 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink). Feb 13 15:24:40.812050 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... Feb 13 15:24:40.815360 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... Feb 13 15:24:40.816582 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Feb 13 15:24:40.819772 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... Feb 13 15:24:40.823474 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... Feb 13 15:24:40.824833 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Feb 13 15:24:40.825907 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... Feb 13 15:24:40.827295 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Feb 13 15:24:40.833190 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Feb 13 15:24:40.838232 systemd-journald[1131]: Time spent on flushing to /var/log/journal/2b0589a7264e4321a31fe63a93ad136a is 17.955ms for 1026 entries. Feb 13 15:24:40.838232 systemd-journald[1131]: System Journal (/var/log/journal/2b0589a7264e4321a31fe63a93ad136a) is 8.0M, max 195.6M, 187.6M free. Feb 13 15:24:40.879166 systemd-journald[1131]: Received client request to flush runtime journal. Feb 13 15:24:40.879227 kernel: loop0: detected capacity change from 0 to 140992 Feb 13 15:24:40.836925 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... Feb 13 15:24:40.842263 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... 
Feb 13 15:24:40.846280 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. Feb 13 15:24:40.847775 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. Feb 13 15:24:40.849357 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Feb 13 15:24:40.867114 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Feb 13 15:24:40.868859 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. Feb 13 15:24:40.871742 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. Feb 13 15:24:40.880974 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk... Feb 13 15:24:40.884196 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization... Feb 13 15:24:40.886034 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. Feb 13 15:24:40.887798 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Feb 13 15:24:40.900275 udevadm[1187]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation.service, lvm2-activation-early.service not to pull it in. Feb 13 15:24:40.903880 systemd-tmpfiles[1176]: ACLs are not supported, ignoring. Feb 13 15:24:40.903900 systemd-tmpfiles[1176]: ACLs are not supported, ignoring. Feb 13 15:24:40.909641 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Feb 13 15:24:40.910362 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Feb 13 15:24:40.912014 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk. Feb 13 15:24:40.914836 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Feb 13 15:24:40.919942 systemd[1]: Starting systemd-sysusers.service - Create System Users... Feb 13 15:24:40.943843 kernel: loop1: detected capacity change from 0 to 138184 Feb 13 15:24:40.946737 systemd[1]: Finished systemd-sysusers.service - Create System Users. Feb 13 15:24:40.954973 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Feb 13 15:24:40.971614 systemd-tmpfiles[1198]: ACLs are not supported, ignoring. Feb 13 15:24:40.972002 systemd-tmpfiles[1198]: ACLs are not supported, ignoring. Feb 13 15:24:40.978730 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Feb 13 15:24:41.029841 kernel: loop2: detected capacity change from 0 to 210664 Feb 13 15:24:41.073830 kernel: loop3: detected capacity change from 0 to 140992 Feb 13 15:24:41.088857 kernel: loop4: detected capacity change from 0 to 138184 Feb 13 15:24:41.099832 kernel: loop5: detected capacity change from 0 to 210664 Feb 13 15:24:41.105263 (sd-merge)[1203]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes'. Feb 13 15:24:41.107480 (sd-merge)[1203]: Merged extensions into '/usr'. Feb 13 15:24:41.139611 systemd[1]: Reloading requested from client PID 1175 ('systemd-sysext') (unit systemd-sysext.service)... Feb 13 15:24:41.139631 systemd[1]: Reloading... Feb 13 15:24:41.230572 zram_generator::config[1230]: No configuration found. Feb 13 15:24:41.329443 ldconfig[1170]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. 
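The sd-merge messages above show systemd-sysext overlaying the containerd-flatcar, docker-flatcar and kubernetes extension images onto /usr. For an image to be mergeable it must carry an extension-release file whose fields match the host OS; a rough sketch of the expected layout inside kubernetes-v1.30.1-x86-64.raw follows, with illustrative field values:

    usr/bin/...                                            payload shipped by the extension
    usr/lib/extension-release.d/extension-release.kubernetes
        ID=flatcar                                         must match /etc/os-release (or be _any)
        SYSEXT_LEVEL=1.0

    # after boot, the merged extensions can be inspected with:
    systemd-sysext status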
Feb 13 15:24:41.362908 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Feb 13 15:24:41.415343 systemd[1]: Reloading finished in 275 ms. Feb 13 15:24:41.445820 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. Feb 13 15:24:41.447349 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. Feb 13 15:24:41.466983 systemd[1]: Starting ensure-sysext.service... Feb 13 15:24:41.468972 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Feb 13 15:24:41.481889 systemd[1]: Reloading requested from client PID 1266 ('systemctl') (unit ensure-sysext.service)... Feb 13 15:24:41.481900 systemd[1]: Reloading... Feb 13 15:24:41.508284 systemd-tmpfiles[1267]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Feb 13 15:24:41.508658 systemd-tmpfiles[1267]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. Feb 13 15:24:41.509635 systemd-tmpfiles[1267]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Feb 13 15:24:41.509957 systemd-tmpfiles[1267]: ACLs are not supported, ignoring. Feb 13 15:24:41.510029 systemd-tmpfiles[1267]: ACLs are not supported, ignoring. Feb 13 15:24:41.525894 systemd-tmpfiles[1267]: Detected autofs mount point /boot during canonicalization of boot. Feb 13 15:24:41.525912 systemd-tmpfiles[1267]: Skipping /boot Feb 13 15:24:41.548913 zram_generator::config[1293]: No configuration found. Feb 13 15:24:41.554584 systemd-tmpfiles[1267]: Detected autofs mount point /boot during canonicalization of boot. Feb 13 15:24:41.554599 systemd-tmpfiles[1267]: Skipping /boot Feb 13 15:24:41.660945 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Feb 13 15:24:41.709243 systemd[1]: Reloading finished in 226 ms. Feb 13 15:24:41.728200 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. Feb 13 15:24:41.740174 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Feb 13 15:24:41.748510 systemd[1]: Starting audit-rules.service - Load Audit Rules... Feb 13 15:24:41.750931 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... Feb 13 15:24:41.753206 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... Feb 13 15:24:41.757997 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Feb 13 15:24:41.762043 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Feb 13 15:24:41.768880 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... Feb 13 15:24:41.772372 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Feb 13 15:24:41.772599 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Feb 13 15:24:41.773870 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Feb 13 15:24:41.779058 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... 
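The systemd-tmpfiles warnings above ("Duplicate line for path ...") mean the same path is declared by more than one tmpfiles.d fragment, and the duplicate line is ignored. Each line follows the usual type, path, mode, owner, group, age layout; a sketch with a hypothetical path and illustrative mode and ownership:

    # tmpfiles.d syntax: Type Path Mode User Group Age Argument
    d /var/lib/example 0755 root root - -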
Feb 13 15:24:41.781696 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Feb 13 15:24:41.784454 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Feb 13 15:24:41.787373 systemd[1]: Starting systemd-userdbd.service - User Database Manager... Feb 13 15:24:41.788633 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Feb 13 15:24:41.792525 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Feb 13 15:24:41.792845 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Feb 13 15:24:41.794839 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Feb 13 15:24:41.795014 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Feb 13 15:24:41.797094 systemd[1]: modprobe@loop.service: Deactivated successfully. Feb 13 15:24:41.797319 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Feb 13 15:24:41.807568 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Feb 13 15:24:41.812986 systemd-udevd[1336]: Using default interface naming scheme 'v255'. Feb 13 15:24:41.814937 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Feb 13 15:24:41.815161 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Feb 13 15:24:41.825648 systemd[1]: Starting systemd-update-done.service - Update is Completed... Feb 13 15:24:41.827335 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. Feb 13 15:24:41.830055 augenrules[1366]: No rules Feb 13 15:24:41.832067 systemd[1]: audit-rules.service: Deactivated successfully. Feb 13 15:24:41.832330 systemd[1]: Finished audit-rules.service - Load Audit Rules. Feb 13 15:24:41.836007 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Feb 13 15:24:41.836230 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Feb 13 15:24:41.842033 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Feb 13 15:24:41.845740 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Feb 13 15:24:41.850027 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Feb 13 15:24:41.851212 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Feb 13 15:24:41.851342 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Feb 13 15:24:41.852023 systemd[1]: Started systemd-userdbd.service - User Database Manager. Feb 13 15:24:41.854118 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Feb 13 15:24:41.855977 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Feb 13 15:24:41.858193 systemd[1]: Finished systemd-update-done.service - Update is Completed. Feb 13 15:24:41.860024 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. 
Feb 13 15:24:41.860206 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Feb 13 15:24:41.861937 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Feb 13 15:24:41.862115 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Feb 13 15:24:41.864025 systemd[1]: modprobe@loop.service: Deactivated successfully. Feb 13 15:24:41.864202 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Feb 13 15:24:41.878667 systemd[1]: Finished ensure-sysext.service. Feb 13 15:24:41.881627 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Feb 13 15:24:41.892072 systemd[1]: Starting audit-rules.service - Load Audit Rules... Feb 13 15:24:41.894041 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Feb 13 15:24:41.898952 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Feb 13 15:24:41.901057 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Feb 13 15:24:41.904038 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Feb 13 15:24:41.906282 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Feb 13 15:24:41.907505 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Feb 13 15:24:41.910941 systemd[1]: Starting systemd-networkd.service - Network Configuration... Feb 13 15:24:41.915017 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization... Feb 13 15:24:41.916142 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Feb 13 15:24:41.916172 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Feb 13 15:24:41.916909 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Feb 13 15:24:41.917079 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Feb 13 15:24:41.918528 systemd[1]: modprobe@drm.service: Deactivated successfully. Feb 13 15:24:41.918741 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Feb 13 15:24:41.927762 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped. Feb 13 15:24:41.931583 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Feb 13 15:24:41.931839 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Feb 13 15:24:41.935108 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Feb 13 15:24:41.949250 systemd[1]: modprobe@loop.service: Deactivated successfully. Feb 13 15:24:41.949458 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Feb 13 15:24:41.951452 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Feb 13 15:24:41.956550 systemd-resolved[1335]: Positive Trust Anchors: Feb 13 15:24:41.956864 augenrules[1404]: /sbin/augenrules: No change Feb 13 15:24:41.956942 systemd-resolved[1335]: . 
IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Feb 13 15:24:41.956974 systemd-resolved[1335]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Feb 13 15:24:41.964727 systemd-resolved[1335]: Defaulting to hostname 'linux'. Feb 13 15:24:41.968967 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Feb 13 15:24:41.970537 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Feb 13 15:24:42.003133 augenrules[1439]: No rules Feb 13 15:24:42.004022 systemd[1]: audit-rules.service: Deactivated successfully. Feb 13 15:24:42.004308 systemd[1]: Finished audit-rules.service - Load Audit Rules. Feb 13 15:24:42.020841 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 37 scanned by (udev-worker) (1386) Feb 13 15:24:42.038266 systemd-networkd[1419]: lo: Link UP Feb 13 15:24:42.038499 systemd-networkd[1419]: lo: Gained carrier Feb 13 15:24:42.042864 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization. Feb 13 15:24:42.044286 systemd[1]: Reached target time-set.target - System Time Set. Feb 13 15:24:42.044695 systemd-networkd[1419]: Enumeration completed Feb 13 15:24:42.045329 systemd-networkd[1419]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Feb 13 15:24:42.045335 systemd-networkd[1419]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Feb 13 15:24:42.045359 systemd[1]: Started systemd-networkd.service - Network Configuration. Feb 13 15:24:42.046604 systemd[1]: Reached target network.target - Network. Feb 13 15:24:42.048404 systemd-networkd[1419]: eth0: Link UP Feb 13 15:24:42.048419 systemd-networkd[1419]: eth0: Gained carrier Feb 13 15:24:42.048444 systemd-networkd[1419]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Feb 13 15:24:42.052895 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input2 Feb 13 15:24:42.052995 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Feb 13 15:24:42.061864 kernel: ACPI: button: Power Button [PWRF] Feb 13 15:24:42.061917 systemd-networkd[1419]: eth0: DHCPv4 address 10.0.0.50/16, gateway 10.0.0.1 acquired from 10.0.0.1 Feb 13 15:24:42.062788 systemd-timesyncd[1421]: Network configuration changed, trying to establish connection. Feb 13 15:24:43.000489 systemd-resolved[1335]: Clock change detected. Flushing caches. Feb 13 15:24:43.000902 systemd-timesyncd[1421]: Contacted time server 10.0.0.1:123 (10.0.0.1). Feb 13 15:24:43.000996 systemd-timesyncd[1421]: Initial clock synchronization to Thu 2025-02-13 15:24:43.000441 UTC. 
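systemd-timesyncd reached 10.0.0.1:123 and stepped the clock, after which systemd-resolved flushed its caches. When the time server is not learned from the network, it can be pinned in timesyncd's configuration; a minimal sketch follows, where the address simply mirrors the server contacted during this boot:

    # /etc/systemd/timesyncd.conf
    [Time]
    NTP=10.0.0.1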
Feb 13 15:24:43.029753 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input3 Feb 13 15:24:43.058673 kernel: i801_smbus 0000:00:1f.3: Enabling SMBus device Feb 13 15:24:43.059043 kernel: i801_smbus 0000:00:1f.3: SMBus using PCI interrupt Feb 13 15:24:43.059286 kernel: i2c i2c-0: 1/1 memory slots populated (from DMI) Feb 13 15:24:43.059631 kernel: i2c i2c-0: Memory type 0x07 not supported yet, not instantiating SPD Feb 13 15:24:43.104023 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Feb 13 15:24:43.108016 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. Feb 13 15:24:43.138400 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... Feb 13 15:24:43.143940 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Feb 13 15:24:43.144158 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Feb 13 15:24:43.147817 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Feb 13 15:24:43.154832 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Feb 13 15:24:43.178811 kernel: mousedev: PS/2 mouse device common for all mice Feb 13 15:24:43.194459 kernel: kvm_amd: TSC scaling supported Feb 13 15:24:43.194527 kernel: kvm_amd: Nested Virtualization enabled Feb 13 15:24:43.194562 kernel: kvm_amd: Nested Paging enabled Feb 13 15:24:43.194602 kernel: kvm_amd: LBR virtualization supported Feb 13 15:24:43.195811 kernel: kvm_amd: Virtual VMLOAD VMSAVE supported Feb 13 15:24:43.195845 kernel: kvm_amd: Virtual GIF supported Feb 13 15:24:43.239678 kernel: EDAC MC: Ver: 3.0.0 Feb 13 15:24:43.251208 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Feb 13 15:24:43.275280 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization. Feb 13 15:24:43.287945 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes... Feb 13 15:24:43.296356 lvm[1468]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Feb 13 15:24:43.328140 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes. Feb 13 15:24:43.330750 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Feb 13 15:24:43.331910 systemd[1]: Reached target sysinit.target - System Initialization. Feb 13 15:24:43.333108 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Feb 13 15:24:43.334373 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Feb 13 15:24:43.335936 systemd[1]: Started logrotate.timer - Daily rotation of log files. Feb 13 15:24:43.337171 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Feb 13 15:24:43.338487 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Feb 13 15:24:43.339920 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Feb 13 15:24:43.339959 systemd[1]: Reached target paths.target - Path Units. Feb 13 15:24:43.340872 systemd[1]: Reached target timers.target - Timer Units. Feb 13 15:24:43.342798 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. 
Feb 13 15:24:43.345730 systemd[1]: Starting docker.socket - Docker Socket for the API... Feb 13 15:24:43.353228 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Feb 13 15:24:43.355563 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes... Feb 13 15:24:43.357179 systemd[1]: Listening on docker.socket - Docker Socket for the API. Feb 13 15:24:43.358360 systemd[1]: Reached target sockets.target - Socket Units. Feb 13 15:24:43.359349 systemd[1]: Reached target basic.target - Basic System. Feb 13 15:24:43.360508 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Feb 13 15:24:43.360533 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Feb 13 15:24:43.361503 systemd[1]: Starting containerd.service - containerd container runtime... Feb 13 15:24:43.363522 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Feb 13 15:24:43.368899 lvm[1473]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Feb 13 15:24:43.367863 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Feb 13 15:24:43.371833 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Feb 13 15:24:43.374733 jq[1476]: false Feb 13 15:24:43.373704 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Feb 13 15:24:43.376775 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Feb 13 15:24:43.381921 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Feb 13 15:24:43.385912 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Feb 13 15:24:43.394993 systemd[1]: Starting systemd-logind.service - User Login Management... Feb 13 15:24:43.396804 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Feb 13 15:24:43.397376 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Feb 13 15:24:43.398828 systemd[1]: Starting update-engine.service - Update Engine... Feb 13 15:24:43.399666 dbus-daemon[1475]: [system] SELinux support is enabled Feb 13 15:24:43.401048 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Feb 13 15:24:43.403163 systemd[1]: Started dbus.service - D-Bus System Message Bus. Feb 13 15:24:43.406258 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes. 
Feb 13 15:24:43.409882 extend-filesystems[1477]: Found loop3 Feb 13 15:24:43.410892 extend-filesystems[1477]: Found loop4 Feb 13 15:24:43.410892 extend-filesystems[1477]: Found loop5 Feb 13 15:24:43.410892 extend-filesystems[1477]: Found sr0 Feb 13 15:24:43.410892 extend-filesystems[1477]: Found vda Feb 13 15:24:43.410892 extend-filesystems[1477]: Found vda1 Feb 13 15:24:43.410892 extend-filesystems[1477]: Found vda2 Feb 13 15:24:43.410892 extend-filesystems[1477]: Found vda3 Feb 13 15:24:43.410892 extend-filesystems[1477]: Found usr Feb 13 15:24:43.410892 extend-filesystems[1477]: Found vda4 Feb 13 15:24:43.410892 extend-filesystems[1477]: Found vda6 Feb 13 15:24:43.410892 extend-filesystems[1477]: Found vda7 Feb 13 15:24:43.410892 extend-filesystems[1477]: Found vda9 Feb 13 15:24:43.410892 extend-filesystems[1477]: Checking size of /dev/vda9 Feb 13 15:24:43.424689 jq[1490]: true Feb 13 15:24:43.418676 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Feb 13 15:24:43.418927 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Feb 13 15:24:43.419380 systemd[1]: motdgen.service: Deactivated successfully. Feb 13 15:24:43.419596 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Feb 13 15:24:43.420178 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Feb 13 15:24:43.420382 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. Feb 13 15:24:43.439937 extend-filesystems[1477]: Resized partition /dev/vda9 Feb 13 15:24:43.443171 extend-filesystems[1505]: resize2fs 1.47.1 (20-May-2024) Feb 13 15:24:43.448326 jq[1495]: true Feb 13 15:24:43.458654 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 37 scanned by (udev-worker) (1406) Feb 13 15:24:43.460668 kernel: EXT4-fs (vda9): resizing filesystem from 553472 to 1864699 blocks Feb 13 15:24:43.464906 (ntainerd)[1497]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Feb 13 15:24:43.482998 update_engine[1488]: I20250213 15:24:43.482913 1488 main.cc:92] Flatcar Update Engine starting Feb 13 15:24:43.485421 update_engine[1488]: I20250213 15:24:43.485391 1488 update_check_scheduler.cc:74] Next update check in 11m13s Feb 13 15:24:43.486593 systemd[1]: Started update-engine.service - Update Engine. Feb 13 15:24:43.489670 kernel: EXT4-fs (vda9): resized filesystem to 1864699 Feb 13 15:24:43.490159 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Feb 13 15:24:43.490197 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Feb 13 15:24:43.491686 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Feb 13 15:24:43.491707 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Feb 13 15:24:43.508882 systemd[1]: Started locksmithd.service - Cluster reboot manager. 
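For scale, the ext4 resize logged above grows /dev/vda9 from 553472 to 1864699 blocks of 4 KiB each: 553472 × 4096 ≈ 2.27 GB (about 2.1 GiB) before, and 1864699 × 4096 ≈ 7.64 GB (about 7.1 GiB) after. In other words, the filesystem (mounted on /, as the resize2fs output below notes) is being expanded on first boot to fill the enlarged partition.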
Feb 13 15:24:43.579300 systemd-logind[1487]: Watching system buttons on /dev/input/event1 (Power Button) Feb 13 15:24:43.579589 systemd-logind[1487]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard) Feb 13 15:24:43.581489 extend-filesystems[1505]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required Feb 13 15:24:43.581489 extend-filesystems[1505]: old_desc_blocks = 1, new_desc_blocks = 1 Feb 13 15:24:43.581489 extend-filesystems[1505]: The filesystem on /dev/vda9 is now 1864699 (4k) blocks long. Feb 13 15:24:43.596221 extend-filesystems[1477]: Resized filesystem in /dev/vda9 Feb 13 15:24:43.583269 systemd[1]: extend-filesystems.service: Deactivated successfully. Feb 13 15:24:43.583504 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Feb 13 15:24:43.583858 systemd-logind[1487]: New seat seat0. Feb 13 15:24:43.588014 systemd[1]: Started systemd-logind.service - User Login Management. Feb 13 15:24:43.599431 bash[1524]: Updated "/home/core/.ssh/authorized_keys" Feb 13 15:24:43.599893 locksmithd[1525]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Feb 13 15:24:43.602409 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Feb 13 15:24:43.605003 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met. Feb 13 15:24:43.795676 sshd_keygen[1494]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Feb 13 15:24:43.848971 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Feb 13 15:24:43.865132 systemd[1]: Starting issuegen.service - Generate /run/issue... Feb 13 15:24:43.872883 systemd[1]: issuegen.service: Deactivated successfully. Feb 13 15:24:43.873135 systemd[1]: Finished issuegen.service - Generate /run/issue. Feb 13 15:24:43.880980 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Feb 13 15:24:43.926661 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Feb 13 15:24:43.939106 systemd[1]: Started getty@tty1.service - Getty on tty1. Feb 13 15:24:43.940650 containerd[1497]: time="2025-02-13T15:24:43.940570087Z" level=info msg="starting containerd" revision=9b2ad7760328148397346d10c7b2004271249db4 version=v1.7.23 Feb 13 15:24:43.942979 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0. Feb 13 15:24:43.944833 systemd[1]: Reached target getty.target - Login Prompts. Feb 13 15:24:43.956756 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Feb 13 15:24:43.959876 systemd[1]: Started sshd@0-10.0.0.50:22-10.0.0.1:49954.service - OpenSSH per-connection server daemon (10.0.0.1:49954). Feb 13 15:24:43.969539 containerd[1497]: time="2025-02-13T15:24:43.969325904Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 Feb 13 15:24:43.971318 containerd[1497]: time="2025-02-13T15:24:43.971257927Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.71-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Feb 13 15:24:43.971318 containerd[1497]: time="2025-02-13T15:24:43.971304364Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." 
type=io.containerd.event.v1 Feb 13 15:24:43.971318 containerd[1497]: time="2025-02-13T15:24:43.971325464Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 Feb 13 15:24:43.971572 containerd[1497]: time="2025-02-13T15:24:43.971547390Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1 Feb 13 15:24:43.971572 containerd[1497]: time="2025-02-13T15:24:43.971568840Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1 Feb 13 15:24:43.971704 containerd[1497]: time="2025-02-13T15:24:43.971676512Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1 Feb 13 15:24:43.971704 containerd[1497]: time="2025-02-13T15:24:43.971694516Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 Feb 13 15:24:43.971935 containerd[1497]: time="2025-02-13T15:24:43.971909068Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Feb 13 15:24:43.971935 containerd[1497]: time="2025-02-13T15:24:43.971927974Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 Feb 13 15:24:43.971985 containerd[1497]: time="2025-02-13T15:24:43.971941389Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1 Feb 13 15:24:43.971985 containerd[1497]: time="2025-02-13T15:24:43.971953552Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 Feb 13 15:24:43.972078 containerd[1497]: time="2025-02-13T15:24:43.972060202Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 Feb 13 15:24:43.972362 containerd[1497]: time="2025-02-13T15:24:43.972331501Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 Feb 13 15:24:43.972517 containerd[1497]: time="2025-02-13T15:24:43.972488224Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Feb 13 15:24:43.972517 containerd[1497]: time="2025-02-13T15:24:43.972512169Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 Feb 13 15:24:43.972706 containerd[1497]: time="2025-02-13T15:24:43.972677419Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." 
type=io.containerd.metadata.v1 Feb 13 15:24:43.972781 containerd[1497]: time="2025-02-13T15:24:43.972762058Z" level=info msg="metadata content store policy set" policy=shared Feb 13 15:24:44.056013 sshd[1558]: Accepted publickey for core from 10.0.0.1 port 49954 ssh2: RSA SHA256:LuUo/H8l6HTiusWzlGV41X7Ei0zLr8Ig/b+bY+ekGqE Feb 13 15:24:44.058056 sshd-session[1558]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 15:24:44.061363 containerd[1497]: time="2025-02-13T15:24:44.061302904Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 Feb 13 15:24:44.061462 containerd[1497]: time="2025-02-13T15:24:44.061373316Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 Feb 13 15:24:44.061462 containerd[1497]: time="2025-02-13T15:24:44.061390308Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1 Feb 13 15:24:44.061462 containerd[1497]: time="2025-02-13T15:24:44.061407039Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1 Feb 13 15:24:44.061462 containerd[1497]: time="2025-02-13T15:24:44.061420384Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 Feb 13 15:24:44.061627 containerd[1497]: time="2025-02-13T15:24:44.061603277Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 Feb 13 15:24:44.061997 containerd[1497]: time="2025-02-13T15:24:44.061955257Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 Feb 13 15:24:44.062245 containerd[1497]: time="2025-02-13T15:24:44.062213752Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2 Feb 13 15:24:44.062245 containerd[1497]: time="2025-02-13T15:24:44.062239119Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1 Feb 13 15:24:44.062310 containerd[1497]: time="2025-02-13T15:24:44.062259347Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1 Feb 13 15:24:44.062310 containerd[1497]: time="2025-02-13T15:24:44.062279756Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 Feb 13 15:24:44.062310 containerd[1497]: time="2025-02-13T15:24:44.062297128Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 Feb 13 15:24:44.062400 containerd[1497]: time="2025-02-13T15:24:44.062314501Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 Feb 13 15:24:44.062400 containerd[1497]: time="2025-02-13T15:24:44.062333957Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 Feb 13 15:24:44.062400 containerd[1497]: time="2025-02-13T15:24:44.062352181Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 Feb 13 15:24:44.062400 containerd[1497]: time="2025-02-13T15:24:44.062369163Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." 
type=io.containerd.service.v1 Feb 13 15:24:44.062400 containerd[1497]: time="2025-02-13T15:24:44.062386215Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 Feb 13 15:24:44.062523 containerd[1497]: time="2025-02-13T15:24:44.062401774Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 Feb 13 15:24:44.062523 containerd[1497]: time="2025-02-13T15:24:44.062438573Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 Feb 13 15:24:44.062523 containerd[1497]: time="2025-02-13T15:24:44.062459423Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 Feb 13 15:24:44.062523 containerd[1497]: time="2025-02-13T15:24:44.062477006Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 Feb 13 15:24:44.062523 containerd[1497]: time="2025-02-13T15:24:44.062494368Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 Feb 13 15:24:44.062523 containerd[1497]: time="2025-02-13T15:24:44.062512121Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 Feb 13 15:24:44.062850 containerd[1497]: time="2025-02-13T15:24:44.062530416Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 Feb 13 15:24:44.062850 containerd[1497]: time="2025-02-13T15:24:44.062546165Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 Feb 13 15:24:44.062850 containerd[1497]: time="2025-02-13T15:24:44.062564239Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 Feb 13 15:24:44.062850 containerd[1497]: time="2025-02-13T15:24:44.062582774Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1 Feb 13 15:24:44.062850 containerd[1497]: time="2025-02-13T15:24:44.062602912Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1 Feb 13 15:24:44.062850 containerd[1497]: time="2025-02-13T15:24:44.062618591Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 Feb 13 15:24:44.062850 containerd[1497]: time="2025-02-13T15:24:44.062718288Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1 Feb 13 15:24:44.062850 containerd[1497]: time="2025-02-13T15:24:44.062740109Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 Feb 13 15:24:44.062850 containerd[1497]: time="2025-02-13T15:24:44.062758473Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1 Feb 13 15:24:44.062850 containerd[1497]: time="2025-02-13T15:24:44.062787077Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1 Feb 13 15:24:44.062850 containerd[1497]: time="2025-02-13T15:24:44.062805431Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 Feb 13 15:24:44.062850 containerd[1497]: time="2025-02-13T15:24:44.062819348Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." 
type=io.containerd.internal.v1 Feb 13 15:24:44.063144 containerd[1497]: time="2025-02-13T15:24:44.062887545Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 Feb 13 15:24:44.063144 containerd[1497]: time="2025-02-13T15:24:44.062912683Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1 Feb 13 15:24:44.063144 containerd[1497]: time="2025-02-13T15:24:44.062927581Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Feb 13 15:24:44.063144 containerd[1497]: time="2025-02-13T15:24:44.062944282Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1 Feb 13 15:24:44.063144 containerd[1497]: time="2025-02-13T15:24:44.062957326Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 Feb 13 15:24:44.063144 containerd[1497]: time="2025-02-13T15:24:44.062976132Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1 Feb 13 15:24:44.063144 containerd[1497]: time="2025-02-13T15:24:44.062990979Z" level=info msg="NRI interface is disabled by configuration." Feb 13 15:24:44.063144 containerd[1497]: time="2025-02-13T15:24:44.063004515Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1 Feb 13 15:24:44.063475 containerd[1497]: time="2025-02-13T15:24:44.063415866Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false 
UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Feb 13 15:24:44.063475 containerd[1497]: time="2025-02-13T15:24:44.063480798Z" level=info msg="Connect containerd service" Feb 13 15:24:44.063793 containerd[1497]: time="2025-02-13T15:24:44.063530341Z" level=info msg="using legacy CRI server" Feb 13 15:24:44.063793 containerd[1497]: time="2025-02-13T15:24:44.063540179Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Feb 13 15:24:44.063793 containerd[1497]: time="2025-02-13T15:24:44.063687055Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Feb 13 15:24:44.065671 containerd[1497]: time="2025-02-13T15:24:44.065600283Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Feb 13 15:24:44.065860 containerd[1497]: time="2025-02-13T15:24:44.065793265Z" level=info msg="Start subscribing containerd event" Feb 13 15:24:44.065961 containerd[1497]: time="2025-02-13T15:24:44.065880849Z" level=info msg="Start recovering state" Feb 13 15:24:44.066028 containerd[1497]: time="2025-02-13T15:24:44.065987960Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Feb 13 15:24:44.066085 containerd[1497]: time="2025-02-13T15:24:44.065991276Z" level=info msg="Start event monitor" Feb 13 15:24:44.066085 containerd[1497]: time="2025-02-13T15:24:44.066046259Z" level=info msg=serving... address=/run/containerd/containerd.sock Feb 13 15:24:44.066085 containerd[1497]: time="2025-02-13T15:24:44.066054835Z" level=info msg="Start snapshots syncer" Feb 13 15:24:44.066085 containerd[1497]: time="2025-02-13T15:24:44.066079371Z" level=info msg="Start cni network conf syncer for default" Feb 13 15:24:44.066211 containerd[1497]: time="2025-02-13T15:24:44.066090182Z" level=info msg="Start streaming server" Feb 13 15:24:44.066211 containerd[1497]: time="2025-02-13T15:24:44.066199867Z" level=info msg="containerd successfully booted in 0.126639s" Feb 13 15:24:44.067088 systemd[1]: Started containerd.service - containerd container runtime. Feb 13 15:24:44.072059 systemd-logind[1487]: New session 1 of user core. Feb 13 15:24:44.072484 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Feb 13 15:24:44.086285 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Feb 13 15:24:44.088827 systemd-networkd[1419]: eth0: Gained IPv6LL Feb 13 15:24:44.092135 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Feb 13 15:24:44.094093 systemd[1]: Reached target network-online.target - Network is Online. Feb 13 15:24:44.105954 systemd[1]: Starting coreos-metadata.service - QEMU metadata agent... Feb 13 15:24:44.109503 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... 
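Two details in the containerd startup above are worth noting. First, the dumped CRI configuration runs runc with SystemdCgroup:true, which matches the systemd cgroup driver the kubelet reports later in this log. Second, the "no network config found in /etc/cni/net.d" error is expected at this stage: containerd looks for a CNI conflist that nothing has installed yet, and on this node the Cilium pod admitted further below will eventually provide one. As a hedged illustration of what containerd is looking for, and not what Cilium actually writes, a minimal conflist dropped into /etc/cni/net.d/ has this shape:

  {
    "cniVersion": "0.4.0",
    "name": "examplenet",
    "plugins": [
      {
        "type": "bridge",
        "bridge": "cni0",
        "isGateway": true,
        "ipMasq": true,
        "ipam": { "type": "host-local", "subnet": "192.168.1.0/24" }
      },
      { "type": "portmap", "capabilities": { "portMappings": true } }
    ]
  }

The subnet here is only an example, chosen to mirror the 192.168.1.0/24 pod CIDR the kubelet is later told to use.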
Feb 13 15:24:44.112287 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Feb 13 15:24:44.116231 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Feb 13 15:24:44.125035 systemd[1]: Starting user@500.service - User Manager for UID 500... Feb 13 15:24:44.135939 (systemd)[1572]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Feb 13 15:24:44.139961 systemd[1]: coreos-metadata.service: Deactivated successfully. Feb 13 15:24:44.140255 systemd[1]: Finished coreos-metadata.service - QEMU metadata agent. Feb 13 15:24:44.142091 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Feb 13 15:24:44.145024 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Feb 13 15:24:44.261149 systemd[1572]: Queued start job for default target default.target. Feb 13 15:24:44.270005 systemd[1572]: Created slice app.slice - User Application Slice. Feb 13 15:24:44.270033 systemd[1572]: Reached target paths.target - Paths. Feb 13 15:24:44.270046 systemd[1572]: Reached target timers.target - Timers. Feb 13 15:24:44.271717 systemd[1572]: Starting dbus.socket - D-Bus User Message Bus Socket... Feb 13 15:24:44.285532 systemd[1572]: Listening on dbus.socket - D-Bus User Message Bus Socket. Feb 13 15:24:44.285718 systemd[1572]: Reached target sockets.target - Sockets. Feb 13 15:24:44.285743 systemd[1572]: Reached target basic.target - Basic System. Feb 13 15:24:44.285792 systemd[1572]: Reached target default.target - Main User Target. Feb 13 15:24:44.285835 systemd[1572]: Startup finished in 136ms. Feb 13 15:24:44.285946 systemd[1]: Started user@500.service - User Manager for UID 500. Feb 13 15:24:44.288965 systemd[1]: Started session-1.scope - Session 1 of User core. Feb 13 15:24:44.342823 systemd[1]: Started sshd@1-10.0.0.50:22-10.0.0.1:49962.service - OpenSSH per-connection server daemon (10.0.0.1:49962). Feb 13 15:24:44.396717 sshd[1592]: Accepted publickey for core from 10.0.0.1 port 49962 ssh2: RSA SHA256:LuUo/H8l6HTiusWzlGV41X7Ei0zLr8Ig/b+bY+ekGqE Feb 13 15:24:44.398258 sshd-session[1592]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 15:24:44.403434 systemd-logind[1487]: New session 2 of user core. Feb 13 15:24:44.416844 systemd[1]: Started session-2.scope - Session 2 of User core. Feb 13 15:24:44.478870 sshd[1594]: Connection closed by 10.0.0.1 port 49962 Feb 13 15:24:44.479693 sshd-session[1592]: pam_unix(sshd:session): session closed for user core Feb 13 15:24:44.525910 systemd[1]: sshd@1-10.0.0.50:22-10.0.0.1:49962.service: Deactivated successfully. Feb 13 15:24:44.527782 systemd[1]: session-2.scope: Deactivated successfully. Feb 13 15:24:44.530027 systemd-logind[1487]: Session 2 logged out. Waiting for processes to exit. Feb 13 15:24:44.534914 systemd[1]: Started sshd@2-10.0.0.50:22-10.0.0.1:51128.service - OpenSSH per-connection server daemon (10.0.0.1:51128). Feb 13 15:24:44.537392 systemd-logind[1487]: Removed session 2. Feb 13 15:24:44.572783 sshd[1599]: Accepted publickey for core from 10.0.0.1 port 51128 ssh2: RSA SHA256:LuUo/H8l6HTiusWzlGV41X7Ei0zLr8Ig/b+bY+ekGqE Feb 13 15:24:44.574627 sshd-session[1599]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 15:24:44.579299 systemd-logind[1487]: New session 3 of user core. Feb 13 15:24:44.597900 systemd[1]: Started session-3.scope - Session 3 of User core. 
Feb 13 15:24:44.673013 sshd[1601]: Connection closed by 10.0.0.1 port 51128 Feb 13 15:24:44.673366 sshd-session[1599]: pam_unix(sshd:session): session closed for user core Feb 13 15:24:44.677645 systemd[1]: sshd@2-10.0.0.50:22-10.0.0.1:51128.service: Deactivated successfully. Feb 13 15:24:44.682577 systemd[1]: session-3.scope: Deactivated successfully. Feb 13 15:24:44.683281 systemd-logind[1487]: Session 3 logged out. Waiting for processes to exit. Feb 13 15:24:44.684147 systemd-logind[1487]: Removed session 3. Feb 13 15:24:45.244010 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Feb 13 15:24:45.245735 systemd[1]: Reached target multi-user.target - Multi-User System. Feb 13 15:24:45.247100 systemd[1]: Startup finished in 723ms (kernel) + 5.314s (initrd) + 4.331s (userspace) = 10.369s. Feb 13 15:24:45.249254 (kubelet)[1610]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Feb 13 15:24:45.889928 kubelet[1610]: E0213 15:24:45.889799 1610 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Feb 13 15:24:45.894277 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Feb 13 15:24:45.894479 systemd[1]: kubelet.service: Failed with result 'exit-code'. Feb 13 15:24:45.894885 systemd[1]: kubelet.service: Consumed 1.652s CPU time. Feb 13 15:24:54.686152 systemd[1]: Started sshd@3-10.0.0.50:22-10.0.0.1:59524.service - OpenSSH per-connection server daemon (10.0.0.1:59524). Feb 13 15:24:54.726101 sshd[1624]: Accepted publickey for core from 10.0.0.1 port 59524 ssh2: RSA SHA256:LuUo/H8l6HTiusWzlGV41X7Ei0zLr8Ig/b+bY+ekGqE Feb 13 15:24:54.727536 sshd-session[1624]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 15:24:54.731348 systemd-logind[1487]: New session 4 of user core. Feb 13 15:24:54.741767 systemd[1]: Started session-4.scope - Session 4 of User core. Feb 13 15:24:54.794517 sshd[1626]: Connection closed by 10.0.0.1 port 59524 Feb 13 15:24:54.794926 sshd-session[1624]: pam_unix(sshd:session): session closed for user core Feb 13 15:24:54.806668 systemd[1]: sshd@3-10.0.0.50:22-10.0.0.1:59524.service: Deactivated successfully. Feb 13 15:24:54.808735 systemd[1]: session-4.scope: Deactivated successfully. Feb 13 15:24:54.810350 systemd-logind[1487]: Session 4 logged out. Waiting for processes to exit. Feb 13 15:24:54.811686 systemd[1]: Started sshd@4-10.0.0.50:22-10.0.0.1:59532.service - OpenSSH per-connection server daemon (10.0.0.1:59532). Feb 13 15:24:54.812433 systemd-logind[1487]: Removed session 4. Feb 13 15:24:54.853546 sshd[1631]: Accepted publickey for core from 10.0.0.1 port 59532 ssh2: RSA SHA256:LuUo/H8l6HTiusWzlGV41X7Ei0zLr8Ig/b+bY+ekGqE Feb 13 15:24:54.855358 sshd-session[1631]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 15:24:54.859466 systemd-logind[1487]: New session 5 of user core. Feb 13 15:24:54.874790 systemd[1]: Started session-5.scope - Session 5 of User core. 
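The kubelet crash above (run.go:74) is simply the service starting before anything has written /var/lib/kubelet/config.yaml; systemd keeps restarting it, and a later attempt further below comes up once the file is in place. As a hedged sketch of what such a config file contains once provisioning drops it in (the field names are real KubeletConfiguration fields, but the values here are illustrative rather than recovered from this node):

  apiVersion: kubelet.config.k8s.io/v1beta1
  kind: KubeletConfiguration
  cgroupDriver: systemd
  staticPodPath: /etc/kubernetes/manifests
  evictionHard:
    memory.available: "100Mi"
    nodefs.available: "10%"

The cgroup driver, static-pod path, and eviction thresholds chosen here deliberately echo the node configuration the restarted kubelet dumps later in this log.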
Feb 13 15:24:54.923418 sshd[1633]: Connection closed by 10.0.0.1 port 59532 Feb 13 15:24:54.923828 sshd-session[1631]: pam_unix(sshd:session): session closed for user core Feb 13 15:24:54.934628 systemd[1]: sshd@4-10.0.0.50:22-10.0.0.1:59532.service: Deactivated successfully. Feb 13 15:24:54.936323 systemd[1]: session-5.scope: Deactivated successfully. Feb 13 15:24:54.938125 systemd-logind[1487]: Session 5 logged out. Waiting for processes to exit. Feb 13 15:24:54.951984 systemd[1]: Started sshd@5-10.0.0.50:22-10.0.0.1:59536.service - OpenSSH per-connection server daemon (10.0.0.1:59536). Feb 13 15:24:54.952889 systemd-logind[1487]: Removed session 5. Feb 13 15:24:54.988417 sshd[1638]: Accepted publickey for core from 10.0.0.1 port 59536 ssh2: RSA SHA256:LuUo/H8l6HTiusWzlGV41X7Ei0zLr8Ig/b+bY+ekGqE Feb 13 15:24:54.990069 sshd-session[1638]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 15:24:54.993898 systemd-logind[1487]: New session 6 of user core. Feb 13 15:24:55.009796 systemd[1]: Started session-6.scope - Session 6 of User core. Feb 13 15:24:55.063934 sshd[1640]: Connection closed by 10.0.0.1 port 59536 Feb 13 15:24:55.064277 sshd-session[1638]: pam_unix(sshd:session): session closed for user core Feb 13 15:24:55.072577 systemd[1]: sshd@5-10.0.0.50:22-10.0.0.1:59536.service: Deactivated successfully. Feb 13 15:24:55.074360 systemd[1]: session-6.scope: Deactivated successfully. Feb 13 15:24:55.076219 systemd-logind[1487]: Session 6 logged out. Waiting for processes to exit. Feb 13 15:24:55.077445 systemd[1]: Started sshd@6-10.0.0.50:22-10.0.0.1:59550.service - OpenSSH per-connection server daemon (10.0.0.1:59550). Feb 13 15:24:55.078409 systemd-logind[1487]: Removed session 6. Feb 13 15:24:55.129650 sshd[1645]: Accepted publickey for core from 10.0.0.1 port 59550 ssh2: RSA SHA256:LuUo/H8l6HTiusWzlGV41X7Ei0zLr8Ig/b+bY+ekGqE Feb 13 15:24:55.131119 sshd-session[1645]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 15:24:55.134893 systemd-logind[1487]: New session 7 of user core. Feb 13 15:24:55.148782 systemd[1]: Started session-7.scope - Session 7 of User core. Feb 13 15:24:55.209587 sudo[1648]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Feb 13 15:24:55.210028 sudo[1648]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Feb 13 15:24:55.226427 sudo[1648]: pam_unix(sudo:session): session closed for user root Feb 13 15:24:55.228104 sshd[1647]: Connection closed by 10.0.0.1 port 59550 Feb 13 15:24:55.228465 sshd-session[1645]: pam_unix(sshd:session): session closed for user core Feb 13 15:24:55.248194 systemd[1]: sshd@6-10.0.0.50:22-10.0.0.1:59550.service: Deactivated successfully. Feb 13 15:24:55.249892 systemd[1]: session-7.scope: Deactivated successfully. Feb 13 15:24:55.251769 systemd-logind[1487]: Session 7 logged out. Waiting for processes to exit. Feb 13 15:24:55.266206 systemd[1]: Started sshd@7-10.0.0.50:22-10.0.0.1:59552.service - OpenSSH per-connection server daemon (10.0.0.1:59552). Feb 13 15:24:55.267169 systemd-logind[1487]: Removed session 7. Feb 13 15:24:55.304188 sshd[1653]: Accepted publickey for core from 10.0.0.1 port 59552 ssh2: RSA SHA256:LuUo/H8l6HTiusWzlGV41X7Ei0zLr8Ig/b+bY+ekGqE Feb 13 15:24:55.305682 sshd-session[1653]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 15:24:55.309487 systemd-logind[1487]: New session 8 of user core. 
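The sudo entries in this stretch ("core : PWD=/home/core ; USER=root ; COMMAND=...") show the core user escalating without a password prompt. On Flatcar this is normally granted by a sudoers drop-in; assuming the stock arrangement, which is not verifiable from this log alone, it amounts to a single line such as:

  # e.g. /etc/sudoers.d/core (assumed location and content)
  core ALL=(ALL) NOPASSWD: ALL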
Feb 13 15:24:55.324858 systemd[1]: Started session-8.scope - Session 8 of User core. Feb 13 15:24:55.378915 sudo[1657]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Feb 13 15:24:55.379245 sudo[1657]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Feb 13 15:24:55.383087 sudo[1657]: pam_unix(sudo:session): session closed for user root Feb 13 15:24:55.389401 sudo[1656]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules Feb 13 15:24:55.389783 sudo[1656]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Feb 13 15:24:55.409892 systemd[1]: Starting audit-rules.service - Load Audit Rules... Feb 13 15:24:55.438807 augenrules[1679]: No rules Feb 13 15:24:55.440709 systemd[1]: audit-rules.service: Deactivated successfully. Feb 13 15:24:55.440964 systemd[1]: Finished audit-rules.service - Load Audit Rules. Feb 13 15:24:55.442146 sudo[1656]: pam_unix(sudo:session): session closed for user root Feb 13 15:24:55.443594 sshd[1655]: Connection closed by 10.0.0.1 port 59552 Feb 13 15:24:55.443958 sshd-session[1653]: pam_unix(sshd:session): session closed for user core Feb 13 15:24:55.455206 systemd[1]: sshd@7-10.0.0.50:22-10.0.0.1:59552.service: Deactivated successfully. Feb 13 15:24:55.456817 systemd[1]: session-8.scope: Deactivated successfully. Feb 13 15:24:55.458298 systemd-logind[1487]: Session 8 logged out. Waiting for processes to exit. Feb 13 15:24:55.459467 systemd[1]: Started sshd@8-10.0.0.50:22-10.0.0.1:59554.service - OpenSSH per-connection server daemon (10.0.0.1:59554). Feb 13 15:24:55.460201 systemd-logind[1487]: Removed session 8. Feb 13 15:24:55.499804 sshd[1687]: Accepted publickey for core from 10.0.0.1 port 59554 ssh2: RSA SHA256:LuUo/H8l6HTiusWzlGV41X7Ei0zLr8Ig/b+bY+ekGqE Feb 13 15:24:55.501174 sshd-session[1687]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 15:24:55.504938 systemd-logind[1487]: New session 9 of user core. Feb 13 15:24:55.513755 systemd[1]: Started session-9.scope - Session 9 of User core. Feb 13 15:24:55.565808 sudo[1690]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Feb 13 15:24:55.566217 sudo[1690]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Feb 13 15:24:55.589983 systemd[1]: Starting coreos-metadata.service - QEMU metadata agent... Feb 13 15:24:55.608545 systemd[1]: coreos-metadata.service: Deactivated successfully. Feb 13 15:24:55.608827 systemd[1]: Finished coreos-metadata.service - QEMU metadata agent. Feb 13 15:24:56.144809 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Feb 13 15:24:56.157868 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Feb 13 15:24:56.355073 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Feb 13 15:24:56.355171 systemd[1]: kubelet.service: Failed with result 'signal'. Feb 13 15:24:56.355475 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Feb 13 15:24:56.357591 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Feb 13 15:24:56.376857 systemd[1]: Reloading requested from client PID 1744 ('systemctl') (unit session-9.scope)... Feb 13 15:24:56.376872 systemd[1]: Reloading... Feb 13 15:24:56.460664 zram_generator::config[1785]: No configuration found. 
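The augenrules "No rules" message above follows directly from the earlier sudo rm of /etc/audit/rules.d/80-selinux.rules and 99-default.rules: augenrules assembles the loaded rule set from /etc/audit/rules.d/*.rules, so with those files removed the restarted audit-rules service has nothing to load. For reference only, and not something this host configures, a drop-in in that directory uses ordinary auditctl syntax, for example:

  # /etc/audit/rules.d/10-example.rules (hypothetical)
  -w /etc/kubernetes/ -p wa -k kube-config
  -a always,exit -F arch=b64 -S execve -F euid=0 -k root-exec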
Feb 13 15:24:56.899544 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Feb 13 15:24:56.976353 systemd[1]: Reloading finished in 599 ms. Feb 13 15:24:57.033054 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Feb 13 15:24:57.033171 systemd[1]: kubelet.service: Failed with result 'signal'. Feb 13 15:24:57.033515 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Feb 13 15:24:57.036296 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Feb 13 15:24:57.190321 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Feb 13 15:24:57.194875 (kubelet)[1831]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Feb 13 15:24:57.232482 kubelet[1831]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Feb 13 15:24:57.232482 kubelet[1831]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Feb 13 15:24:57.232482 kubelet[1831]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Feb 13 15:24:57.233451 kubelet[1831]: I0213 15:24:57.233401 1831 server.go:205] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Feb 13 15:24:57.428367 kubelet[1831]: I0213 15:24:57.428329 1831 server.go:484] "Kubelet version" kubeletVersion="v1.30.1" Feb 13 15:24:57.428367 kubelet[1831]: I0213 15:24:57.428355 1831 server.go:486] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Feb 13 15:24:57.428574 kubelet[1831]: I0213 15:24:57.428563 1831 server.go:927] "Client rotation is on, will bootstrap in background" Feb 13 15:24:57.446103 kubelet[1831]: I0213 15:24:57.445688 1831 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Feb 13 15:24:57.460372 kubelet[1831]: I0213 15:24:57.460330 1831 server.go:742] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Feb 13 15:24:57.461419 kubelet[1831]: I0213 15:24:57.461378 1831 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Feb 13 15:24:57.461610 kubelet[1831]: I0213 15:24:57.461410 1831 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"10.0.0.50","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null} Feb 13 15:24:57.461722 kubelet[1831]: I0213 15:24:57.461626 1831 topology_manager.go:138] "Creating topology manager with none policy" Feb 13 15:24:57.461722 kubelet[1831]: I0213 15:24:57.461648 1831 container_manager_linux.go:301] "Creating device plugin manager" Feb 13 15:24:57.461821 kubelet[1831]: I0213 15:24:57.461796 1831 state_mem.go:36] "Initialized new in-memory state store" Feb 13 15:24:57.462426 kubelet[1831]: I0213 15:24:57.462400 1831 kubelet.go:400] "Attempting to sync node with API server" Feb 13 15:24:57.462426 kubelet[1831]: I0213 15:24:57.462419 1831 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests" Feb 13 15:24:57.462467 kubelet[1831]: I0213 15:24:57.462446 1831 kubelet.go:312] "Adding apiserver pod source" Feb 13 15:24:57.462467 kubelet[1831]: I0213 15:24:57.462462 1831 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Feb 13 15:24:57.462902 kubelet[1831]: E0213 15:24:57.462675 1831 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 15:24:57.462902 kubelet[1831]: E0213 15:24:57.462738 1831 file.go:98] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 15:24:57.465748 kubelet[1831]: I0213 15:24:57.465711 1831 kuberuntime_manager.go:261] "Container runtime initialized" containerRuntime="containerd" version="v1.7.23" apiVersion="v1" Feb 13 15:24:57.466753 kubelet[1831]: W0213 15:24:57.466729 1831 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes "10.0.0.50" is forbidden: User "system:anonymous" cannot list resource "nodes" in API group "" at the cluster 
scope Feb 13 15:24:57.466809 kubelet[1831]: E0213 15:24:57.466769 1831 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes "10.0.0.50" is forbidden: User "system:anonymous" cannot list resource "nodes" in API group "" at the cluster scope Feb 13 15:24:57.466809 kubelet[1831]: W0213 15:24:57.466764 1831 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:anonymous" cannot list resource "services" in API group "" at the cluster scope Feb 13 15:24:57.466858 kubelet[1831]: E0213 15:24:57.466810 1831 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:anonymous" cannot list resource "services" in API group "" at the cluster scope Feb 13 15:24:57.467058 kubelet[1831]: I0213 15:24:57.467027 1831 kubelet.go:815] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Feb 13 15:24:57.467110 kubelet[1831]: W0213 15:24:57.467091 1831 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Feb 13 15:24:57.467766 kubelet[1831]: I0213 15:24:57.467749 1831 server.go:1264] "Started kubelet" Feb 13 15:24:57.469300 kubelet[1831]: I0213 15:24:57.467810 1831 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Feb 13 15:24:57.469300 kubelet[1831]: I0213 15:24:57.468019 1831 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Feb 13 15:24:57.469300 kubelet[1831]: I0213 15:24:57.468437 1831 server.go:227] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Feb 13 15:24:57.469300 kubelet[1831]: I0213 15:24:57.468811 1831 server.go:455] "Adding debug handlers to kubelet server" Feb 13 15:24:57.469300 kubelet[1831]: I0213 15:24:57.469164 1831 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Feb 13 15:24:57.472213 kubelet[1831]: I0213 15:24:57.471742 1831 volume_manager.go:291] "Starting Kubelet Volume Manager" Feb 13 15:24:57.472213 kubelet[1831]: I0213 15:24:57.471822 1831 desired_state_of_world_populator.go:149] "Desired state populator starts to run" Feb 13 15:24:57.472213 kubelet[1831]: I0213 15:24:57.471903 1831 reconciler.go:26] "Reconciler: start to sync state" Feb 13 15:24:57.473940 kubelet[1831]: I0213 15:24:57.473911 1831 factory.go:221] Registration of the systemd container factory successfully Feb 13 15:24:57.474150 kubelet[1831]: I0213 15:24:57.474025 1831 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Feb 13 15:24:57.474734 kubelet[1831]: E0213 15:24:57.474623 1831 kubelet.go:1467] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Feb 13 15:24:57.475107 kubelet[1831]: I0213 15:24:57.475090 1831 factory.go:221] Registration of the containerd container factory successfully Feb 13 15:24:57.482329 kubelet[1831]: E0213 15:24:57.482206 1831 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{10.0.0.50.1823cdecab75909a default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:10.0.0.50,UID:10.0.0.50,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:10.0.0.50,},FirstTimestamp:2025-02-13 15:24:57.467728026 +0000 UTC m=+0.269224092,LastTimestamp:2025-02-13 15:24:57.467728026 +0000 UTC m=+0.269224092,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:10.0.0.50,}" Feb 13 15:24:57.482464 kubelet[1831]: W0213 15:24:57.482406 1831 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:anonymous" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope Feb 13 15:24:57.482464 kubelet[1831]: E0213 15:24:57.482446 1831 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:anonymous" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope Feb 13 15:24:57.482532 kubelet[1831]: E0213 15:24:57.482508 1831 controller.go:145] "Failed to ensure lease exists, will retry" err="leases.coordination.k8s.io \"10.0.0.50\" is forbidden: User \"system:anonymous\" cannot get resource \"leases\" in API group \"coordination.k8s.io\" in the namespace \"kube-node-lease\"" interval="200ms" Feb 13 15:24:57.484789 kubelet[1831]: E0213 15:24:57.484552 1831 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{10.0.0.50.1823cdecabde6d66 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:10.0.0.50,UID:10.0.0.50,APIVersion:,ResourceVersion:,FieldPath:,},Reason:InvalidDiskCapacity,Message:invalid capacity 0 on image filesystem,Source:EventSource{Component:kubelet,Host:10.0.0.50,},FirstTimestamp:2025-02-13 15:24:57.474600294 +0000 UTC m=+0.276096390,LastTimestamp:2025-02-13 15:24:57.474600294 +0000 UTC m=+0.276096390,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:10.0.0.50,}" Feb 13 15:24:57.488281 kubelet[1831]: I0213 15:24:57.488258 1831 cpu_manager.go:214] "Starting CPU manager" policy="none" Feb 13 15:24:57.488281 kubelet[1831]: I0213 15:24:57.488279 1831 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Feb 13 15:24:57.488508 kubelet[1831]: I0213 15:24:57.488309 1831 state_mem.go:36] "Initialized new in-memory state store" Feb 13 15:24:57.491839 kubelet[1831]: E0213 15:24:57.491670 1831 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot 
create resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{10.0.0.50.1823cdecaca698c4 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:10.0.0.50,UID:10.0.0.50,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientMemory,Message:Node 10.0.0.50 status is now: NodeHasSufficientMemory,Source:EventSource{Component:kubelet,Host:10.0.0.50,},FirstTimestamp:2025-02-13 15:24:57.487718596 +0000 UTC m=+0.289214672,LastTimestamp:2025-02-13 15:24:57.487718596 +0000 UTC m=+0.289214672,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:10.0.0.50,}" Feb 13 15:24:57.496089 kubelet[1831]: E0213 15:24:57.495940 1831 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{10.0.0.50.1823cdecaca6ca09 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:10.0.0.50,UID:10.0.0.50,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasNoDiskPressure,Message:Node 10.0.0.50 status is now: NodeHasNoDiskPressure,Source:EventSource{Component:kubelet,Host:10.0.0.50,},FirstTimestamp:2025-02-13 15:24:57.487731209 +0000 UTC m=+0.289227285,LastTimestamp:2025-02-13 15:24:57.487731209 +0000 UTC m=+0.289227285,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:10.0.0.50,}" Feb 13 15:24:57.500053 kubelet[1831]: E0213 15:24:57.499937 1831 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{10.0.0.50.1823cdecaca6dbe2 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:10.0.0.50,UID:10.0.0.50,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientPID,Message:Node 10.0.0.50 status is now: NodeHasSufficientPID,Source:EventSource{Component:kubelet,Host:10.0.0.50,},FirstTimestamp:2025-02-13 15:24:57.487735778 +0000 UTC m=+0.289231854,LastTimestamp:2025-02-13 15:24:57.487735778 +0000 UTC m=+0.289231854,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:10.0.0.50,}" Feb 13 15:24:57.572914 kubelet[1831]: I0213 15:24:57.572837 1831 kubelet_node_status.go:73] "Attempting to register node" node="10.0.0.50" Feb 13 15:24:57.798227 kubelet[1831]: I0213 15:24:57.798109 1831 policy_none.go:49] "None policy: Start" Feb 13 15:24:57.799441 kubelet[1831]: I0213 15:24:57.798822 1831 memory_manager.go:170] "Starting memorymanager" policy="None" Feb 13 15:24:57.799441 kubelet[1831]: I0213 15:24:57.798852 1831 state_mem.go:35] "Initializing new in-memory state store" Feb 13 15:24:57.801105 kubelet[1831]: I0213 15:24:57.801070 1831 kubelet_node_status.go:76] "Successfully registered node" node="10.0.0.50" Feb 13 15:24:57.802818 kubelet[1831]: I0213 15:24:57.802800 1831 kuberuntime_manager.go:1523] "Updating runtime config through cri with podcidr" CIDR="192.168.1.0/24" Feb 13 15:24:57.803117 containerd[1497]: time="2025-02-13T15:24:57.803080011Z" level=info msg="No cni config template is specified, wait for other system 
components to drop the config." Feb 13 15:24:57.803412 kubelet[1831]: I0213 15:24:57.803245 1831 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.1.0/24" Feb 13 15:24:57.807466 sudo[1690]: pam_unix(sudo:session): session closed for user root Feb 13 15:24:57.808788 sshd[1689]: Connection closed by 10.0.0.1 port 59554 Feb 13 15:24:57.809198 sshd-session[1687]: pam_unix(sshd:session): session closed for user core Feb 13 15:24:57.809442 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Feb 13 15:24:57.815149 kubelet[1831]: E0213 15:24:57.812024 1831 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"10.0.0.50\" not found" Feb 13 15:24:57.815249 systemd[1]: sshd@8-10.0.0.50:22-10.0.0.1:59554.service: Deactivated successfully. Feb 13 15:24:57.817413 systemd[1]: session-9.scope: Deactivated successfully. Feb 13 15:24:57.819011 systemd-logind[1487]: Session 9 logged out. Waiting for processes to exit. Feb 13 15:24:57.820861 systemd-logind[1487]: Removed session 9. Feb 13 15:24:57.823187 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Feb 13 15:24:57.826447 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. Feb 13 15:24:57.829284 kubelet[1831]: I0213 15:24:57.829246 1831 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Feb 13 15:24:57.830422 kubelet[1831]: I0213 15:24:57.830399 1831 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Feb 13 15:24:57.830459 kubelet[1831]: I0213 15:24:57.830431 1831 status_manager.go:217] "Starting to sync pod status with apiserver" Feb 13 15:24:57.830459 kubelet[1831]: I0213 15:24:57.830449 1831 kubelet.go:2337] "Starting kubelet main sync loop" Feb 13 15:24:57.830806 kubelet[1831]: E0213 15:24:57.830493 1831 kubelet.go:2361] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Feb 13 15:24:57.831715 kubelet[1831]: I0213 15:24:57.831566 1831 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Feb 13 15:24:57.832998 kubelet[1831]: I0213 15:24:57.832955 1831 container_log_manager.go:186] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Feb 13 15:24:57.833185 kubelet[1831]: I0213 15:24:57.833100 1831 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Feb 13 15:24:57.834730 kubelet[1831]: E0213 15:24:57.834719 1831 eviction_manager.go:282] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"10.0.0.50\" not found" Feb 13 15:24:57.913010 kubelet[1831]: E0213 15:24:57.912970 1831 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"10.0.0.50\" not found" Feb 13 15:24:58.013948 kubelet[1831]: E0213 15:24:58.013892 1831 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"10.0.0.50\" not found" Feb 13 15:24:58.114622 kubelet[1831]: E0213 15:24:58.114503 1831 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"10.0.0.50\" not found" Feb 13 15:24:58.215189 kubelet[1831]: E0213 15:24:58.215125 1831 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"10.0.0.50\" not found" Feb 13 15:24:58.315920 kubelet[1831]: E0213 15:24:58.315808 1831 kubelet_node_status.go:462] "Error 
getting the current node from lister" err="node \"10.0.0.50\" not found" Feb 13 15:24:58.416871 kubelet[1831]: E0213 15:24:58.416708 1831 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"10.0.0.50\" not found" Feb 13 15:24:58.429858 kubelet[1831]: I0213 15:24:58.429792 1831 transport.go:147] "Certificate rotation detected, shutting down client connections to start using new credentials" Feb 13 15:24:58.430004 kubelet[1831]: W0213 15:24:58.429982 1831 reflector.go:470] k8s.io/client-go/informers/factory.go:160: watch of *v1.RuntimeClass ended with: very short watch: k8s.io/client-go/informers/factory.go:160: Unexpected watch close - watch lasted less than a second and no items received Feb 13 15:24:58.463357 kubelet[1831]: E0213 15:24:58.463276 1831 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 15:24:58.517573 kubelet[1831]: E0213 15:24:58.517524 1831 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"10.0.0.50\" not found" Feb 13 15:24:59.463498 kubelet[1831]: I0213 15:24:59.463457 1831 apiserver.go:52] "Watching apiserver" Feb 13 15:24:59.463498 kubelet[1831]: E0213 15:24:59.463472 1831 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 15:24:59.466647 kubelet[1831]: I0213 15:24:59.466593 1831 topology_manager.go:215] "Topology Admit Handler" podUID="46e717eb-f65e-4caa-9018-ab5aeda7cf31" podNamespace="kube-system" podName="cilium-l24f7" Feb 13 15:24:59.467015 kubelet[1831]: I0213 15:24:59.466997 1831 topology_manager.go:215] "Topology Admit Handler" podUID="8e8e505c-ed18-47c1-a830-a52c95d04347" podNamespace="kube-system" podName="kube-proxy-h7mz7" Feb 13 15:24:59.472229 kubelet[1831]: I0213 15:24:59.472211 1831 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world" Feb 13 15:24:59.472575 systemd[1]: Created slice kubepods-burstable-pod46e717eb_f65e_4caa_9018_ab5aeda7cf31.slice - libcontainer container kubepods-burstable-pod46e717eb_f65e_4caa_9018_ab5aeda7cf31.slice. 
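The slice names above (`kubepods-burstable-pod46e717eb_f65e_4caa_9018_ab5aeda7cf31.slice` and friends) follow the kubelet's systemd cgroup-driver convention: the pod's QoS class plus its UID with hyphens swapped for underscores. A minimal sketch of that naming, using the UIDs from the log; `podSliceName` is an illustrative helper, not the kubelet's actual function:

```go
package main

import (
	"fmt"
	"strings"
)

// podSliceName mimics how the kubelet's systemd cgroup driver derives a
// transient slice name from a pod's QoS class and UID: hyphens in the UID
// become underscores and the result nests under the kubepods hierarchy.
func podSliceName(qosClass, podUID string) string {
	escaped := strings.ReplaceAll(podUID, "-", "_")
	if qosClass == "guaranteed" {
		// Guaranteed pods sit directly under kubepods.slice.
		return fmt.Sprintf("kubepods-pod%s.slice", escaped)
	}
	return fmt.Sprintf("kubepods-%s-pod%s.slice", qosClass, escaped)
}

func main() {
	// UIDs taken from the kubelet log above (cilium-l24f7 and kube-proxy-h7mz7).
	fmt.Println(podSliceName("burstable", "46e717eb-f65e-4caa-9018-ab5aeda7cf31"))
	fmt.Println(podSliceName("besteffort", "8e8e505c-ed18-47c1-a830-a52c95d04347"))
}
```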
Feb 13 15:24:59.486268 kubelet[1831]: I0213 15:24:59.486240 1831 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t47n7\" (UniqueName: \"kubernetes.io/projected/46e717eb-f65e-4caa-9018-ab5aeda7cf31-kube-api-access-t47n7\") pod \"cilium-l24f7\" (UID: \"46e717eb-f65e-4caa-9018-ab5aeda7cf31\") " pod="kube-system/cilium-l24f7" Feb 13 15:24:59.486268 kubelet[1831]: I0213 15:24:59.486267 1831 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/8e8e505c-ed18-47c1-a830-a52c95d04347-kube-proxy\") pod \"kube-proxy-h7mz7\" (UID: \"8e8e505c-ed18-47c1-a830-a52c95d04347\") " pod="kube-system/kube-proxy-h7mz7" Feb 13 15:24:59.486361 kubelet[1831]: I0213 15:24:59.486285 1831 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-57bz5\" (UniqueName: \"kubernetes.io/projected/8e8e505c-ed18-47c1-a830-a52c95d04347-kube-api-access-57bz5\") pod \"kube-proxy-h7mz7\" (UID: \"8e8e505c-ed18-47c1-a830-a52c95d04347\") " pod="kube-system/kube-proxy-h7mz7" Feb 13 15:24:59.486361 kubelet[1831]: I0213 15:24:59.486301 1831 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/46e717eb-f65e-4caa-9018-ab5aeda7cf31-cilium-config-path\") pod \"cilium-l24f7\" (UID: \"46e717eb-f65e-4caa-9018-ab5aeda7cf31\") " pod="kube-system/cilium-l24f7" Feb 13 15:24:59.486361 kubelet[1831]: I0213 15:24:59.486316 1831 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/46e717eb-f65e-4caa-9018-ab5aeda7cf31-host-proc-sys-kernel\") pod \"cilium-l24f7\" (UID: \"46e717eb-f65e-4caa-9018-ab5aeda7cf31\") " pod="kube-system/cilium-l24f7" Feb 13 15:24:59.486361 kubelet[1831]: I0213 15:24:59.486330 1831 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/46e717eb-f65e-4caa-9018-ab5aeda7cf31-hubble-tls\") pod \"cilium-l24f7\" (UID: \"46e717eb-f65e-4caa-9018-ab5aeda7cf31\") " pod="kube-system/cilium-l24f7" Feb 13 15:24:59.486361 kubelet[1831]: I0213 15:24:59.486343 1831 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/46e717eb-f65e-4caa-9018-ab5aeda7cf31-hostproc\") pod \"cilium-l24f7\" (UID: \"46e717eb-f65e-4caa-9018-ab5aeda7cf31\") " pod="kube-system/cilium-l24f7" Feb 13 15:24:59.486361 kubelet[1831]: I0213 15:24:59.486357 1831 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/46e717eb-f65e-4caa-9018-ab5aeda7cf31-cni-path\") pod \"cilium-l24f7\" (UID: \"46e717eb-f65e-4caa-9018-ab5aeda7cf31\") " pod="kube-system/cilium-l24f7" Feb 13 15:24:59.486483 kubelet[1831]: I0213 15:24:59.486416 1831 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/46e717eb-f65e-4caa-9018-ab5aeda7cf31-etc-cni-netd\") pod \"cilium-l24f7\" (UID: \"46e717eb-f65e-4caa-9018-ab5aeda7cf31\") " pod="kube-system/cilium-l24f7" Feb 13 15:24:59.486483 kubelet[1831]: I0213 15:24:59.486442 1831 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/46e717eb-f65e-4caa-9018-ab5aeda7cf31-xtables-lock\") pod \"cilium-l24f7\" (UID: \"46e717eb-f65e-4caa-9018-ab5aeda7cf31\") " pod="kube-system/cilium-l24f7" Feb 13 15:24:59.486483 kubelet[1831]: I0213 15:24:59.486463 1831 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/46e717eb-f65e-4caa-9018-ab5aeda7cf31-cilium-run\") pod \"cilium-l24f7\" (UID: \"46e717eb-f65e-4caa-9018-ab5aeda7cf31\") " pod="kube-system/cilium-l24f7" Feb 13 15:24:59.486543 kubelet[1831]: I0213 15:24:59.486489 1831 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/46e717eb-f65e-4caa-9018-ab5aeda7cf31-lib-modules\") pod \"cilium-l24f7\" (UID: \"46e717eb-f65e-4caa-9018-ab5aeda7cf31\") " pod="kube-system/cilium-l24f7" Feb 13 15:24:59.486543 kubelet[1831]: I0213 15:24:59.486508 1831 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/46e717eb-f65e-4caa-9018-ab5aeda7cf31-clustermesh-secrets\") pod \"cilium-l24f7\" (UID: \"46e717eb-f65e-4caa-9018-ab5aeda7cf31\") " pod="kube-system/cilium-l24f7" Feb 13 15:24:59.486543 kubelet[1831]: I0213 15:24:59.486526 1831 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/46e717eb-f65e-4caa-9018-ab5aeda7cf31-host-proc-sys-net\") pod \"cilium-l24f7\" (UID: \"46e717eb-f65e-4caa-9018-ab5aeda7cf31\") " pod="kube-system/cilium-l24f7" Feb 13 15:24:59.486600 kubelet[1831]: I0213 15:24:59.486542 1831 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/46e717eb-f65e-4caa-9018-ab5aeda7cf31-bpf-maps\") pod \"cilium-l24f7\" (UID: \"46e717eb-f65e-4caa-9018-ab5aeda7cf31\") " pod="kube-system/cilium-l24f7" Feb 13 15:24:59.486600 kubelet[1831]: I0213 15:24:59.486560 1831 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/46e717eb-f65e-4caa-9018-ab5aeda7cf31-cilium-cgroup\") pod \"cilium-l24f7\" (UID: \"46e717eb-f65e-4caa-9018-ab5aeda7cf31\") " pod="kube-system/cilium-l24f7" Feb 13 15:24:59.486600 kubelet[1831]: I0213 15:24:59.486582 1831 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/8e8e505c-ed18-47c1-a830-a52c95d04347-xtables-lock\") pod \"kube-proxy-h7mz7\" (UID: \"8e8e505c-ed18-47c1-a830-a52c95d04347\") " pod="kube-system/kube-proxy-h7mz7" Feb 13 15:24:59.486678 kubelet[1831]: I0213 15:24:59.486607 1831 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/8e8e505c-ed18-47c1-a830-a52c95d04347-lib-modules\") pod \"kube-proxy-h7mz7\" (UID: \"8e8e505c-ed18-47c1-a830-a52c95d04347\") " pod="kube-system/kube-proxy-h7mz7" Feb 13 15:24:59.489096 systemd[1]: Created slice kubepods-besteffort-pod8e8e505c_ed18_47c1_a830_a52c95d04347.slice - libcontainer container kubepods-besteffort-pod8e8e505c_ed18_47c1_a830_a52c95d04347.slice. 
Feb 13 15:24:59.790065 kubelet[1831]: E0213 15:24:59.789927 1831 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 15:24:59.790619 containerd[1497]: time="2025-02-13T15:24:59.790564935Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-l24f7,Uid:46e717eb-f65e-4caa-9018-ab5aeda7cf31,Namespace:kube-system,Attempt:0,}" Feb 13 15:24:59.807124 kubelet[1831]: E0213 15:24:59.807101 1831 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 15:24:59.807571 containerd[1497]: time="2025-02-13T15:24:59.807417209Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-h7mz7,Uid:8e8e505c-ed18-47c1-a830-a52c95d04347,Namespace:kube-system,Attempt:0,}" Feb 13 15:25:00.463999 kubelet[1831]: E0213 15:25:00.463944 1831 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 15:25:00.607134 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1665367515.mount: Deactivated successfully. Feb 13 15:25:00.615246 containerd[1497]: time="2025-02-13T15:25:00.615191544Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Feb 13 15:25:00.616916 containerd[1497]: time="2025-02-13T15:25:00.616879910Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=312056" Feb 13 15:25:00.617824 containerd[1497]: time="2025-02-13T15:25:00.617794646Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Feb 13 15:25:00.618900 containerd[1497]: time="2025-02-13T15:25:00.618863791Z" level=info msg="ImageCreate event name:\"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Feb 13 15:25:00.619493 containerd[1497]: time="2025-02-13T15:25:00.619450090Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Feb 13 15:25:00.621702 containerd[1497]: time="2025-02-13T15:25:00.621656638Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Feb 13 15:25:00.622627 containerd[1497]: time="2025-02-13T15:25:00.622591151Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 831.838223ms" Feb 13 15:25:00.623857 containerd[1497]: time="2025-02-13T15:25:00.623823923Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest 
\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 816.335601ms" Feb 13 15:25:00.723137 containerd[1497]: time="2025-02-13T15:25:00.722692837Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 13 15:25:00.723137 containerd[1497]: time="2025-02-13T15:25:00.722753170Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 13 15:25:00.723137 containerd[1497]: time="2025-02-13T15:25:00.722765784Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 15:25:00.723137 containerd[1497]: time="2025-02-13T15:25:00.722856284Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 15:25:00.724219 containerd[1497]: time="2025-02-13T15:25:00.721466297Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 13 15:25:00.724219 containerd[1497]: time="2025-02-13T15:25:00.724114072Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 13 15:25:00.724219 containerd[1497]: time="2025-02-13T15:25:00.724126205Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 15:25:00.724829 containerd[1497]: time="2025-02-13T15:25:00.724760855Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 15:25:00.790804 systemd[1]: Started cri-containerd-666d12fea703e48fbe506ede37b5492267e008c6d44762e93783e0f2e52b0395.scope - libcontainer container 666d12fea703e48fbe506ede37b5492267e008c6d44762e93783e0f2e52b0395. Feb 13 15:25:00.792510 systemd[1]: Started cri-containerd-9a258e350ffbcacf08a5df0f834274f0f7edb1e62699554f9ee2956dfa0ccc6d.scope - libcontainer container 9a258e350ffbcacf08a5df0f834274f0f7edb1e62699554f9ee2956dfa0ccc6d. 
Feb 13 15:25:00.814305 containerd[1497]: time="2025-02-13T15:25:00.814252504Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-h7mz7,Uid:8e8e505c-ed18-47c1-a830-a52c95d04347,Namespace:kube-system,Attempt:0,} returns sandbox id \"9a258e350ffbcacf08a5df0f834274f0f7edb1e62699554f9ee2956dfa0ccc6d\"" Feb 13 15:25:00.814704 containerd[1497]: time="2025-02-13T15:25:00.814513083Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-l24f7,Uid:46e717eb-f65e-4caa-9018-ab5aeda7cf31,Namespace:kube-system,Attempt:0,} returns sandbox id \"666d12fea703e48fbe506ede37b5492267e008c6d44762e93783e0f2e52b0395\"" Feb 13 15:25:00.815972 kubelet[1831]: E0213 15:25:00.815624 1831 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 15:25:00.815972 kubelet[1831]: E0213 15:25:00.815715 1831 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 15:25:00.816889 containerd[1497]: time="2025-02-13T15:25:00.816865815Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\"" Feb 13 15:25:01.464145 kubelet[1831]: E0213 15:25:01.464095 1831 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 15:25:02.464519 kubelet[1831]: E0213 15:25:02.464457 1831 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 15:25:03.465152 kubelet[1831]: E0213 15:25:03.465069 1831 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 15:25:04.374378 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount271782373.mount: Deactivated successfully. 
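The recurring dns.go:153 warning above means the node's resolv.conf lists more nameservers than the kubelet will pass into a pod (the limit is three), so the extras are silently dropped. A small sketch of that trimming logic; the limit of 3 matches the kubelet's behaviour, the function name and the sample server list are illustrative:

```go
package main

import "fmt"

const maxDNSNameservers = 3 // kubelet's per-pod nameserver limit

// trimNameservers keeps at most maxDNSNameservers entries and reports whether
// anything was omitted, mirroring the "Nameserver limits exceeded" warning.
func trimNameservers(servers []string) (kept []string, omitted bool) {
	if len(servers) <= maxDNSNameservers {
		return servers, false
	}
	return servers[:maxDNSNameservers], true
}

func main() {
	// Hypothetical resolv.conf contents with one nameserver too many.
	servers := []string{"1.1.1.1", "1.0.0.1", "8.8.8.8", "8.8.4.4"}
	kept, omitted := trimNameservers(servers)
	fmt.Println(kept, "omitted:", omitted) // [1.1.1.1 1.0.0.1 8.8.8.8] omitted: true
}
```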
Feb 13 15:25:04.466046 kubelet[1831]: E0213 15:25:04.466015 1831 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 15:25:05.466650 kubelet[1831]: E0213 15:25:05.466607 1831 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 15:25:06.466858 kubelet[1831]: E0213 15:25:06.466815 1831 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 15:25:06.881260 containerd[1497]: time="2025-02-13T15:25:06.881109457Z" level=info msg="ImageCreate event name:\"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 15:25:06.881932 containerd[1497]: time="2025-02-13T15:25:06.881856678Z" level=info msg="stop pulling image quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5: active requests=0, bytes read=166730503" Feb 13 15:25:06.883307 containerd[1497]: time="2025-02-13T15:25:06.883262164Z" level=info msg="ImageCreate event name:\"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 15:25:06.885148 containerd[1497]: time="2025-02-13T15:25:06.885099309Z" level=info msg="Pulled image \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" with image id \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\", repo tag \"\", repo digest \"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\", size \"166719855\" in 6.068203958s" Feb 13 15:25:06.885148 containerd[1497]: time="2025-02-13T15:25:06.885139114Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" returns image reference \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\"" Feb 13 15:25:06.886353 containerd[1497]: time="2025-02-13T15:25:06.886309939Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.30.10\"" Feb 13 15:25:06.888032 containerd[1497]: time="2025-02-13T15:25:06.888005960Z" level=info msg="CreateContainer within sandbox \"666d12fea703e48fbe506ede37b5492267e008c6d44762e93783e0f2e52b0395\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Feb 13 15:25:06.906782 containerd[1497]: time="2025-02-13T15:25:06.906732499Z" level=info msg="CreateContainer within sandbox \"666d12fea703e48fbe506ede37b5492267e008c6d44762e93783e0f2e52b0395\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"3da6bef8647538b427de99706f58fd280b7f6286e92341529a7e5afc74194062\"" Feb 13 15:25:06.907547 containerd[1497]: time="2025-02-13T15:25:06.907497243Z" level=info msg="StartContainer for \"3da6bef8647538b427de99706f58fd280b7f6286e92341529a7e5afc74194062\"" Feb 13 15:25:06.937832 systemd[1]: Started cri-containerd-3da6bef8647538b427de99706f58fd280b7f6286e92341529a7e5afc74194062.scope - libcontainer container 3da6bef8647538b427de99706f58fd280b7f6286e92341529a7e5afc74194062. 
Feb 13 15:25:06.965656 containerd[1497]: time="2025-02-13T15:25:06.965578656Z" level=info msg="StartContainer for \"3da6bef8647538b427de99706f58fd280b7f6286e92341529a7e5afc74194062\" returns successfully" Feb 13 15:25:06.975269 systemd[1]: cri-containerd-3da6bef8647538b427de99706f58fd280b7f6286e92341529a7e5afc74194062.scope: Deactivated successfully. Feb 13 15:25:07.392524 containerd[1497]: time="2025-02-13T15:25:07.392459340Z" level=info msg="shim disconnected" id=3da6bef8647538b427de99706f58fd280b7f6286e92341529a7e5afc74194062 namespace=k8s.io Feb 13 15:25:07.392524 containerd[1497]: time="2025-02-13T15:25:07.392517229Z" level=warning msg="cleaning up after shim disconnected" id=3da6bef8647538b427de99706f58fd280b7f6286e92341529a7e5afc74194062 namespace=k8s.io Feb 13 15:25:07.392524 containerd[1497]: time="2025-02-13T15:25:07.392526466Z" level=info msg="cleaning up dead shim" namespace=k8s.io Feb 13 15:25:07.467645 kubelet[1831]: E0213 15:25:07.467577 1831 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 15:25:07.850766 kubelet[1831]: E0213 15:25:07.850418 1831 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 15:25:07.855624 containerd[1497]: time="2025-02-13T15:25:07.855558116Z" level=info msg="CreateContainer within sandbox \"666d12fea703e48fbe506ede37b5492267e008c6d44762e93783e0f2e52b0395\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Feb 13 15:25:07.899055 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-3da6bef8647538b427de99706f58fd280b7f6286e92341529a7e5afc74194062-rootfs.mount: Deactivated successfully. Feb 13 15:25:07.912151 containerd[1497]: time="2025-02-13T15:25:07.912099932Z" level=info msg="CreateContainer within sandbox \"666d12fea703e48fbe506ede37b5492267e008c6d44762e93783e0f2e52b0395\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"3752fe640f247c6b8d8fb8db6c188697c1a1c2aac2212ac4cb8493ade2982b1b\"" Feb 13 15:25:07.912831 containerd[1497]: time="2025-02-13T15:25:07.912788182Z" level=info msg="StartContainer for \"3752fe640f247c6b8d8fb8db6c188697c1a1c2aac2212ac4cb8493ade2982b1b\"" Feb 13 15:25:07.960775 systemd[1]: Started cri-containerd-3752fe640f247c6b8d8fb8db6c188697c1a1c2aac2212ac4cb8493ade2982b1b.scope - libcontainer container 3752fe640f247c6b8d8fb8db6c188697c1a1c2aac2212ac4cb8493ade2982b1b. Feb 13 15:25:08.016107 systemd[1]: systemd-sysctl.service: Deactivated successfully. Feb 13 15:25:08.016630 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Feb 13 15:25:08.016730 systemd[1]: Stopping systemd-sysctl.service - Apply Kernel Variables... Feb 13 15:25:08.024903 containerd[1497]: time="2025-02-13T15:25:08.024852363Z" level=info msg="StartContainer for \"3752fe640f247c6b8d8fb8db6c188697c1a1c2aac2212ac4cb8493ade2982b1b\" returns successfully" Feb 13 15:25:08.026179 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Feb 13 15:25:08.026502 systemd[1]: cri-containerd-3752fe640f247c6b8d8fb8db6c188697c1a1c2aac2212ac4cb8493ade2982b1b.scope: Deactivated successfully. Feb 13 15:25:08.047483 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. 
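The apply-sysctl-overwrites step above is Cilium's init container resetting kernel sysctls on the host, which is also why systemd-sysctl.service is stopped and re-run immediately afterwards. A minimal sketch of writing a sysctl from Go via /proc/sys; the specific key is a common Cilium example, not taken from this log:

```go
package main

import (
	"log"
	"os"
	"path/filepath"
	"strings"
)

// writeSysctl sets a sysctl by writing to /proc/sys, the same mechanism a
// privileged init container such as apply-sysctl-overwrites relies on.
func writeSysctl(key, value string) error {
	path := filepath.Join("/proc/sys", strings.ReplaceAll(key, ".", "/"))
	return os.WriteFile(path, []byte(value), 0o644)
}

func main() {
	// Example only: Cilium commonly relaxes rp_filter for its datapath.
	if err := writeSysctl("net.ipv4.conf.all.rp_filter", "0"); err != nil {
		log.Fatal(err) // needs root and a writable /proc/sys
	}
}
```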
Feb 13 15:25:08.065413 containerd[1497]: time="2025-02-13T15:25:08.065317008Z" level=info msg="shim disconnected" id=3752fe640f247c6b8d8fb8db6c188697c1a1c2aac2212ac4cb8493ade2982b1b namespace=k8s.io Feb 13 15:25:08.065723 containerd[1497]: time="2025-02-13T15:25:08.065406216Z" level=warning msg="cleaning up after shim disconnected" id=3752fe640f247c6b8d8fb8db6c188697c1a1c2aac2212ac4cb8493ade2982b1b namespace=k8s.io Feb 13 15:25:08.065723 containerd[1497]: time="2025-02-13T15:25:08.065576245Z" level=info msg="cleaning up dead shim" namespace=k8s.io Feb 13 15:25:08.468387 kubelet[1831]: E0213 15:25:08.468334 1831 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 15:25:08.853627 kubelet[1831]: E0213 15:25:08.853336 1831 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 15:25:08.855132 containerd[1497]: time="2025-02-13T15:25:08.855099492Z" level=info msg="CreateContainer within sandbox \"666d12fea703e48fbe506ede37b5492267e008c6d44762e93783e0f2e52b0395\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Feb 13 15:25:09.089233 containerd[1497]: time="2025-02-13T15:25:09.089145011Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.30.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 15:25:09.192105 containerd[1497]: time="2025-02-13T15:25:09.191979232Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.30.10: active requests=0, bytes read=29057858" Feb 13 15:25:09.278134 containerd[1497]: time="2025-02-13T15:25:09.278093147Z" level=info msg="ImageCreate event name:\"sha256:a21d1b47e857207628486a387f670f224051a16b74b06a1b76d07a96e738ab54\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 15:25:09.318164 containerd[1497]: time="2025-02-13T15:25:09.318119460Z" level=info msg="CreateContainer within sandbox \"666d12fea703e48fbe506ede37b5492267e008c6d44762e93783e0f2e52b0395\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"55f56551ac0c5e4d1a13fed3c18a9bdf18bc2461ab0297a6aeaa6458de951c85\"" Feb 13 15:25:09.318510 containerd[1497]: time="2025-02-13T15:25:09.318488152Z" level=info msg="StartContainer for \"55f56551ac0c5e4d1a13fed3c18a9bdf18bc2461ab0297a6aeaa6458de951c85\"" Feb 13 15:25:09.319091 containerd[1497]: time="2025-02-13T15:25:09.318921955Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:d112e804e548fce28d9f1e3282c9ce54e374451e6a2c41b1ca9d7fca5d1fcc48\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 15:25:09.319438 containerd[1497]: time="2025-02-13T15:25:09.319402066Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.30.10\" with image id \"sha256:a21d1b47e857207628486a387f670f224051a16b74b06a1b76d07a96e738ab54\", repo tag \"registry.k8s.io/kube-proxy:v1.30.10\", repo digest \"registry.k8s.io/kube-proxy@sha256:d112e804e548fce28d9f1e3282c9ce54e374451e6a2c41b1ca9d7fca5d1fcc48\", size \"29056877\" in 2.433058544s" Feb 13 15:25:09.319438 containerd[1497]: time="2025-02-13T15:25:09.319431210Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.30.10\" returns image reference \"sha256:a21d1b47e857207628486a387f670f224051a16b74b06a1b76d07a96e738ab54\"" Feb 13 15:25:09.321316 containerd[1497]: time="2025-02-13T15:25:09.321290888Z" level=info msg="CreateContainer within sandbox 
\"9a258e350ffbcacf08a5df0f834274f0f7edb1e62699554f9ee2956dfa0ccc6d\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Feb 13 15:25:09.387884 systemd[1]: Started cri-containerd-55f56551ac0c5e4d1a13fed3c18a9bdf18bc2461ab0297a6aeaa6458de951c85.scope - libcontainer container 55f56551ac0c5e4d1a13fed3c18a9bdf18bc2461ab0297a6aeaa6458de951c85. Feb 13 15:25:09.421925 systemd[1]: cri-containerd-55f56551ac0c5e4d1a13fed3c18a9bdf18bc2461ab0297a6aeaa6458de951c85.scope: Deactivated successfully. Feb 13 15:25:09.447280 containerd[1497]: time="2025-02-13T15:25:09.447159516Z" level=info msg="StartContainer for \"55f56551ac0c5e4d1a13fed3c18a9bdf18bc2461ab0297a6aeaa6458de951c85\" returns successfully" Feb 13 15:25:09.462455 containerd[1497]: time="2025-02-13T15:25:09.462391111Z" level=info msg="CreateContainer within sandbox \"9a258e350ffbcacf08a5df0f834274f0f7edb1e62699554f9ee2956dfa0ccc6d\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"a9c09965348b752880f2ae8e5b8d45b5144dd3322a5e8ba2dc32de890571ef52\"" Feb 13 15:25:09.462981 containerd[1497]: time="2025-02-13T15:25:09.462926415Z" level=info msg="StartContainer for \"a9c09965348b752880f2ae8e5b8d45b5144dd3322a5e8ba2dc32de890571ef52\"" Feb 13 15:25:09.469514 kubelet[1831]: E0213 15:25:09.469468 1831 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 15:25:09.513902 systemd[1]: Started cri-containerd-a9c09965348b752880f2ae8e5b8d45b5144dd3322a5e8ba2dc32de890571ef52.scope - libcontainer container a9c09965348b752880f2ae8e5b8d45b5144dd3322a5e8ba2dc32de890571ef52. Feb 13 15:25:09.806726 containerd[1497]: time="2025-02-13T15:25:09.806503715Z" level=info msg="StartContainer for \"a9c09965348b752880f2ae8e5b8d45b5144dd3322a5e8ba2dc32de890571ef52\" returns successfully" Feb 13 15:25:09.808096 containerd[1497]: time="2025-02-13T15:25:09.808005371Z" level=info msg="shim disconnected" id=55f56551ac0c5e4d1a13fed3c18a9bdf18bc2461ab0297a6aeaa6458de951c85 namespace=k8s.io Feb 13 15:25:09.808096 containerd[1497]: time="2025-02-13T15:25:09.808085431Z" level=warning msg="cleaning up after shim disconnected" id=55f56551ac0c5e4d1a13fed3c18a9bdf18bc2461ab0297a6aeaa6458de951c85 namespace=k8s.io Feb 13 15:25:09.808096 containerd[1497]: time="2025-02-13T15:25:09.808097414Z" level=info msg="cleaning up dead shim" namespace=k8s.io Feb 13 15:25:09.826960 containerd[1497]: time="2025-02-13T15:25:09.826887422Z" level=warning msg="cleanup warnings time=\"2025-02-13T15:25:09Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io Feb 13 15:25:09.856368 kubelet[1831]: E0213 15:25:09.856322 1831 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 15:25:09.858244 kubelet[1831]: E0213 15:25:09.858219 1831 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 15:25:09.861474 containerd[1497]: time="2025-02-13T15:25:09.861358638Z" level=info msg="CreateContainer within sandbox \"666d12fea703e48fbe506ede37b5492267e008c6d44762e93783e0f2e52b0395\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Feb 13 15:25:09.879960 containerd[1497]: time="2025-02-13T15:25:09.879914617Z" level=info msg="CreateContainer 
within sandbox \"666d12fea703e48fbe506ede37b5492267e008c6d44762e93783e0f2e52b0395\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"4b998104ff4fcbcc4163d6af0d5472aa226030f691362c846e5b6401c661c833\"" Feb 13 15:25:09.880527 containerd[1497]: time="2025-02-13T15:25:09.880491188Z" level=info msg="StartContainer for \"4b998104ff4fcbcc4163d6af0d5472aa226030f691362c846e5b6401c661c833\"" Feb 13 15:25:09.883454 kubelet[1831]: I0213 15:25:09.883348 1831 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-h7mz7" podStartSLOduration=4.379632784 podStartE2EDuration="12.883325854s" podCreationTimestamp="2025-02-13 15:24:57 +0000 UTC" firstStartedPulling="2025-02-13 15:25:00.816547428 +0000 UTC m=+3.618043504" lastFinishedPulling="2025-02-13 15:25:09.320240498 +0000 UTC m=+12.121736574" observedRunningTime="2025-02-13 15:25:09.869303677 +0000 UTC m=+12.670799753" watchObservedRunningTime="2025-02-13 15:25:09.883325854 +0000 UTC m=+12.684821930" Feb 13 15:25:09.901424 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-55f56551ac0c5e4d1a13fed3c18a9bdf18bc2461ab0297a6aeaa6458de951c85-rootfs.mount: Deactivated successfully. Feb 13 15:25:09.913856 systemd[1]: Started cri-containerd-4b998104ff4fcbcc4163d6af0d5472aa226030f691362c846e5b6401c661c833.scope - libcontainer container 4b998104ff4fcbcc4163d6af0d5472aa226030f691362c846e5b6401c661c833. Feb 13 15:25:09.941452 systemd[1]: cri-containerd-4b998104ff4fcbcc4163d6af0d5472aa226030f691362c846e5b6401c661c833.scope: Deactivated successfully. Feb 13 15:25:09.945365 containerd[1497]: time="2025-02-13T15:25:09.944878808Z" level=info msg="StartContainer for \"4b998104ff4fcbcc4163d6af0d5472aa226030f691362c846e5b6401c661c833\" returns successfully" Feb 13 15:25:09.963031 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-4b998104ff4fcbcc4163d6af0d5472aa226030f691362c846e5b6401c661c833-rootfs.mount: Deactivated successfully. 
Feb 13 15:25:09.967945 containerd[1497]: time="2025-02-13T15:25:09.967881566Z" level=info msg="shim disconnected" id=4b998104ff4fcbcc4163d6af0d5472aa226030f691362c846e5b6401c661c833 namespace=k8s.io Feb 13 15:25:09.967945 containerd[1497]: time="2025-02-13T15:25:09.967943112Z" level=warning msg="cleaning up after shim disconnected" id=4b998104ff4fcbcc4163d6af0d5472aa226030f691362c846e5b6401c661c833 namespace=k8s.io Feb 13 15:25:09.968058 containerd[1497]: time="2025-02-13T15:25:09.967952098Z" level=info msg="cleaning up dead shim" namespace=k8s.io Feb 13 15:25:10.470666 kubelet[1831]: E0213 15:25:10.470586 1831 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 15:25:10.862688 kubelet[1831]: E0213 15:25:10.862423 1831 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 15:25:10.862688 kubelet[1831]: E0213 15:25:10.862431 1831 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 15:25:10.864790 containerd[1497]: time="2025-02-13T15:25:10.864720038Z" level=info msg="CreateContainer within sandbox \"666d12fea703e48fbe506ede37b5492267e008c6d44762e93783e0f2e52b0395\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Feb 13 15:25:10.884013 containerd[1497]: time="2025-02-13T15:25:10.883950112Z" level=info msg="CreateContainer within sandbox \"666d12fea703e48fbe506ede37b5492267e008c6d44762e93783e0f2e52b0395\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"bd39cc69d66a9b33731df2ffa81264ecd39ace3ad255dde54e1e2fa99e070d06\"" Feb 13 15:25:10.884729 containerd[1497]: time="2025-02-13T15:25:10.884683838Z" level=info msg="StartContainer for \"bd39cc69d66a9b33731df2ffa81264ecd39ace3ad255dde54e1e2fa99e070d06\"" Feb 13 15:25:10.923887 systemd[1]: Started cri-containerd-bd39cc69d66a9b33731df2ffa81264ecd39ace3ad255dde54e1e2fa99e070d06.scope - libcontainer container bd39cc69d66a9b33731df2ffa81264ecd39ace3ad255dde54e1e2fa99e070d06. 
Feb 13 15:25:10.957906 containerd[1497]: time="2025-02-13T15:25:10.957843338Z" level=info msg="StartContainer for \"bd39cc69d66a9b33731df2ffa81264ecd39ace3ad255dde54e1e2fa99e070d06\" returns successfully" Feb 13 15:25:11.050126 kubelet[1831]: I0213 15:25:11.050073 1831 kubelet_node_status.go:497] "Fast updating node status as it just became ready" Feb 13 15:25:11.450663 kernel: Initializing XFRM netlink socket Feb 13 15:25:11.471450 kubelet[1831]: E0213 15:25:11.471394 1831 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 15:25:11.868119 kubelet[1831]: E0213 15:25:11.868003 1831 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 15:25:12.472240 kubelet[1831]: E0213 15:25:12.472168 1831 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 15:25:12.869072 kubelet[1831]: E0213 15:25:12.868969 1831 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 15:25:13.187089 systemd-networkd[1419]: cilium_host: Link UP Feb 13 15:25:13.187321 systemd-networkd[1419]: cilium_net: Link UP Feb 13 15:25:13.188082 systemd-networkd[1419]: cilium_net: Gained carrier Feb 13 15:25:13.188281 systemd-networkd[1419]: cilium_host: Gained carrier Feb 13 15:25:13.292842 systemd-networkd[1419]: cilium_vxlan: Link UP Feb 13 15:25:13.292854 systemd-networkd[1419]: cilium_vxlan: Gained carrier Feb 13 15:25:13.472458 kubelet[1831]: E0213 15:25:13.472340 1831 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 15:25:13.496675 kernel: NET: Registered PF_ALG protocol family Feb 13 15:25:13.783875 systemd-networkd[1419]: cilium_net: Gained IPv6LL Feb 13 15:25:13.871355 kubelet[1831]: E0213 15:25:13.871300 1831 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 15:25:13.950061 kubelet[1831]: I0213 15:25:13.949982 1831 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-l24f7" podStartSLOduration=10.880316877 podStartE2EDuration="16.94996315s" podCreationTimestamp="2025-02-13 15:24:57 +0000 UTC" firstStartedPulling="2025-02-13 15:25:00.816485602 +0000 UTC m=+3.617981678" lastFinishedPulling="2025-02-13 15:25:06.886131875 +0000 UTC m=+9.687627951" observedRunningTime="2025-02-13 15:25:11.906302638 +0000 UTC m=+14.707798714" watchObservedRunningTime="2025-02-13 15:25:13.94996315 +0000 UTC m=+16.751459217" Feb 13 15:25:13.950274 kubelet[1831]: I0213 15:25:13.950193 1831 topology_manager.go:215] "Topology Admit Handler" podUID="3fb95972-9333-4709-bfcb-941065d54611" podNamespace="default" podName="nginx-deployment-85f456d6dd-6x4sz" Feb 13 15:25:13.956736 systemd[1]: Created slice kubepods-besteffort-pod3fb95972_9333_4709_bfcb_941065d54611.slice - libcontainer container kubepods-besteffort-pod3fb95972_9333_4709_bfcb_941065d54611.slice. 
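The cilium_host/cilium_net pair brought up by systemd-networkd above is the veth pair Cilium creates for host-to-pod routing, with cilium_vxlan as the overlay device. A rough sketch of creating such a pair with the vishvananda/netlink package, reusing the names from the log; this is an illustration, not Cilium's actual code:

```go
package main

import (
	"log"

	"github.com/vishvananda/netlink"
)

func main() {
	// A veth pair like cilium_host <-> cilium_net (requires CAP_NET_ADMIN).
	veth := &netlink.Veth{
		LinkAttrs: netlink.LinkAttrs{Name: "cilium_host"},
		PeerName:  "cilium_net",
	}
	if err := netlink.LinkAdd(veth); err != nil {
		log.Fatal(err)
	}

	// Bring both ends up, which is what the "gained carrier" messages reflect.
	for _, name := range []string{"cilium_host", "cilium_net"} {
		link, err := netlink.LinkByName(name)
		if err != nil {
			log.Fatal(err)
		}
		if err := netlink.LinkSetUp(link); err != nil {
			log.Fatal(err)
		}
	}
}
```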
Feb 13 15:25:13.976800 systemd-networkd[1419]: cilium_host: Gained IPv6LL Feb 13 15:25:13.980327 kubelet[1831]: I0213 15:25:13.980283 1831 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q2dnz\" (UniqueName: \"kubernetes.io/projected/3fb95972-9333-4709-bfcb-941065d54611-kube-api-access-q2dnz\") pod \"nginx-deployment-85f456d6dd-6x4sz\" (UID: \"3fb95972-9333-4709-bfcb-941065d54611\") " pod="default/nginx-deployment-85f456d6dd-6x4sz" Feb 13 15:25:14.221922 systemd-networkd[1419]: lxc_health: Link UP Feb 13 15:25:14.237540 systemd-networkd[1419]: lxc_health: Gained carrier Feb 13 15:25:14.260240 containerd[1497]: time="2025-02-13T15:25:14.260195547Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-85f456d6dd-6x4sz,Uid:3fb95972-9333-4709-bfcb-941065d54611,Namespace:default,Attempt:0,}" Feb 13 15:25:14.571151 kubelet[1831]: E0213 15:25:14.570991 1831 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 15:25:14.581889 systemd-networkd[1419]: lxc377ea214dc15: Link UP Feb 13 15:25:14.592790 kernel: eth0: renamed from tmpd4271 Feb 13 15:25:14.605054 systemd-networkd[1419]: lxc377ea214dc15: Gained carrier Feb 13 15:25:15.127919 systemd-networkd[1419]: cilium_vxlan: Gained IPv6LL Feb 13 15:25:15.571874 kubelet[1831]: E0213 15:25:15.571712 1831 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 15:25:15.575769 systemd-networkd[1419]: lxc_health: Gained IPv6LL Feb 13 15:25:15.792070 kubelet[1831]: E0213 15:25:15.792031 1831 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 15:25:15.895843 systemd-networkd[1419]: lxc377ea214dc15: Gained IPv6LL Feb 13 15:25:16.572328 kubelet[1831]: E0213 15:25:16.572219 1831 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 15:25:17.462570 kubelet[1831]: E0213 15:25:17.462503 1831 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 15:25:17.573528 kubelet[1831]: E0213 15:25:17.573466 1831 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 15:25:18.184579 containerd[1497]: time="2025-02-13T15:25:18.184440721Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 13 15:25:18.184579 containerd[1497]: time="2025-02-13T15:25:18.184525063Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 13 15:25:18.184579 containerd[1497]: time="2025-02-13T15:25:18.184543437Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 15:25:18.185059 containerd[1497]: time="2025-02-13T15:25:18.184629262Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 15:25:18.213786 systemd[1]: Started cri-containerd-d42715f1c667e271d55cf6b68469f431a3c7b9a6a9f6afb8787ba409bcb17d75.scope - libcontainer container d42715f1c667e271d55cf6b68469f431a3c7b9a6a9f6afb8787ba409bcb17d75. 
Feb 13 15:25:18.225280 systemd-resolved[1335]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Feb 13 15:25:18.247882 containerd[1497]: time="2025-02-13T15:25:18.247814936Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-85f456d6dd-6x4sz,Uid:3fb95972-9333-4709-bfcb-941065d54611,Namespace:default,Attempt:0,} returns sandbox id \"d42715f1c667e271d55cf6b68469f431a3c7b9a6a9f6afb8787ba409bcb17d75\"" Feb 13 15:25:18.249386 containerd[1497]: time="2025-02-13T15:25:18.249339126Z" level=info msg="PullImage \"ghcr.io/flatcar/nginx:latest\"" Feb 13 15:25:18.574005 kubelet[1831]: E0213 15:25:18.573834 1831 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 15:25:19.574078 kubelet[1831]: E0213 15:25:19.574005 1831 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 15:25:20.312201 kubelet[1831]: I0213 15:25:20.312141 1831 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Feb 13 15:25:20.313436 kubelet[1831]: E0213 15:25:20.313055 1831 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 15:25:20.574651 kubelet[1831]: E0213 15:25:20.574521 1831 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 15:25:20.884475 kubelet[1831]: E0213 15:25:20.884283 1831 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 15:25:21.479146 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3970093624.mount: Deactivated successfully. 
Feb 13 15:25:21.575458 kubelet[1831]: E0213 15:25:21.575367 1831 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 15:25:22.576003 kubelet[1831]: E0213 15:25:22.575909 1831 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 15:25:23.427174 containerd[1497]: time="2025-02-13T15:25:23.427084732Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/nginx:latest\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 15:25:23.427841 containerd[1497]: time="2025-02-13T15:25:23.427751783Z" level=info msg="stop pulling image ghcr.io/flatcar/nginx:latest: active requests=0, bytes read=73054493" Feb 13 15:25:23.433021 containerd[1497]: time="2025-02-13T15:25:23.432961436Z" level=info msg="ImageCreate event name:\"sha256:fe94eb5f0c9c8d0ca277aa8cd5940f1faf5970175bf373932babc578545deda8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 15:25:23.436290 containerd[1497]: time="2025-02-13T15:25:23.436244258Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/nginx@sha256:d9bc3da999da9f147f1277c7b18292486847e8f39f95fcf81d914d0c22815faf\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 15:25:23.437605 containerd[1497]: time="2025-02-13T15:25:23.437532632Z" level=info msg="Pulled image \"ghcr.io/flatcar/nginx:latest\" with image id \"sha256:fe94eb5f0c9c8d0ca277aa8cd5940f1faf5970175bf373932babc578545deda8\", repo tag \"ghcr.io/flatcar/nginx:latest\", repo digest \"ghcr.io/flatcar/nginx@sha256:d9bc3da999da9f147f1277c7b18292486847e8f39f95fcf81d914d0c22815faf\", size \"73054371\" in 5.188143821s" Feb 13 15:25:23.437605 containerd[1497]: time="2025-02-13T15:25:23.437598077Z" level=info msg="PullImage \"ghcr.io/flatcar/nginx:latest\" returns image reference \"sha256:fe94eb5f0c9c8d0ca277aa8cd5940f1faf5970175bf373932babc578545deda8\"" Feb 13 15:25:23.443453 containerd[1497]: time="2025-02-13T15:25:23.443402984Z" level=info msg="CreateContainer within sandbox \"d42715f1c667e271d55cf6b68469f431a3c7b9a6a9f6afb8787ba409bcb17d75\" for container &ContainerMetadata{Name:nginx,Attempt:0,}" Feb 13 15:25:23.461606 containerd[1497]: time="2025-02-13T15:25:23.461537554Z" level=info msg="CreateContainer within sandbox \"d42715f1c667e271d55cf6b68469f431a3c7b9a6a9f6afb8787ba409bcb17d75\" for &ContainerMetadata{Name:nginx,Attempt:0,} returns container id \"8c7b4cb9875985b7b271154e9d49ff76f43a6212fc46dcd339b2350daba21e74\"" Feb 13 15:25:23.462212 containerd[1497]: time="2025-02-13T15:25:23.462179937Z" level=info msg="StartContainer for \"8c7b4cb9875985b7b271154e9d49ff76f43a6212fc46dcd339b2350daba21e74\"" Feb 13 15:25:23.504804 systemd[1]: Started cri-containerd-8c7b4cb9875985b7b271154e9d49ff76f43a6212fc46dcd339b2350daba21e74.scope - libcontainer container 8c7b4cb9875985b7b271154e9d49ff76f43a6212fc46dcd339b2350daba21e74. 
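The nginx container above follows containerd's usual create-then-start path: CreateContainer within the existing sandbox, then StartContainer, with the task placed in its own cri-containerd scope. Outside the CRI, the equivalent flow with the containerd Go client looks roughly like this; the container ID, namespace, and stdio handling are illustrative:

```go
package main

import (
	"context"
	"log"

	"github.com/containerd/containerd"
	"github.com/containerd/containerd/cio"
	"github.com/containerd/containerd/namespaces"
	"github.com/containerd/containerd/oci"
)

func main() {
	client, err := containerd.New("/run/containerd/containerd.sock")
	if err != nil {
		log.Fatal(err)
	}
	defer client.Close()
	ctx := namespaces.WithNamespace(context.Background(), "demo")

	image, err := client.Pull(ctx, "ghcr.io/flatcar/nginx:latest", containerd.WithPullUnpack)
	if err != nil {
		log.Fatal(err)
	}

	// Create the container metadata plus a fresh snapshot, then a running task.
	container, err := client.NewContainer(ctx, "nginx-demo",
		containerd.WithNewSnapshot("nginx-demo-snap", image),
		containerd.WithNewSpec(oci.WithImageConfig(image)),
	)
	if err != nil {
		log.Fatal(err)
	}
	task, err := container.NewTask(ctx, cio.NewCreator(cio.WithStdio))
	if err != nil {
		log.Fatal(err)
	}
	if err := task.Start(ctx); err != nil {
		log.Fatal(err)
	}
	log.Printf("started task with pid %d", task.Pid())
}
```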
Feb 13 15:25:23.535708 containerd[1497]: time="2025-02-13T15:25:23.535662609Z" level=info msg="StartContainer for \"8c7b4cb9875985b7b271154e9d49ff76f43a6212fc46dcd339b2350daba21e74\" returns successfully" Feb 13 15:25:23.576745 kubelet[1831]: E0213 15:25:23.576706 1831 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 15:25:23.981820 kubelet[1831]: I0213 15:25:23.981735 1831 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="default/nginx-deployment-85f456d6dd-6x4sz" podStartSLOduration=5.789017691 podStartE2EDuration="10.981713525s" podCreationTimestamp="2025-02-13 15:25:13 +0000 UTC" firstStartedPulling="2025-02-13 15:25:18.249031948 +0000 UTC m=+21.050528024" lastFinishedPulling="2025-02-13 15:25:23.441727782 +0000 UTC m=+26.243223858" observedRunningTime="2025-02-13 15:25:23.981599558 +0000 UTC m=+26.783095634" watchObservedRunningTime="2025-02-13 15:25:23.981713525 +0000 UTC m=+26.783209601" Feb 13 15:25:24.577276 kubelet[1831]: E0213 15:25:24.577213 1831 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 15:25:25.578109 kubelet[1831]: E0213 15:25:25.578042 1831 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 15:25:26.579261 kubelet[1831]: E0213 15:25:26.579188 1831 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 15:25:26.681302 kubelet[1831]: I0213 15:25:26.681248 1831 topology_manager.go:215] "Topology Admit Handler" podUID="ccafe36c-ac2e-4750-bc65-40cb23b683bd" podNamespace="default" podName="nfs-server-provisioner-0" Feb 13 15:25:26.681653 kubelet[1831]: I0213 15:25:26.681612 1831 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"data\" (UniqueName: \"kubernetes.io/empty-dir/ccafe36c-ac2e-4750-bc65-40cb23b683bd-data\") pod \"nfs-server-provisioner-0\" (UID: \"ccafe36c-ac2e-4750-bc65-40cb23b683bd\") " pod="default/nfs-server-provisioner-0" Feb 13 15:25:26.681689 kubelet[1831]: I0213 15:25:26.681664 1831 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-npqbg\" (UniqueName: \"kubernetes.io/projected/ccafe36c-ac2e-4750-bc65-40cb23b683bd-kube-api-access-npqbg\") pod \"nfs-server-provisioner-0\" (UID: \"ccafe36c-ac2e-4750-bc65-40cb23b683bd\") " pod="default/nfs-server-provisioner-0" Feb 13 15:25:26.687932 systemd[1]: Created slice kubepods-besteffort-podccafe36c_ac2e_4750_bc65_40cb23b683bd.slice - libcontainer container kubepods-besteffort-podccafe36c_ac2e_4750_bc65_40cb23b683bd.slice. Feb 13 15:25:26.991130 containerd[1497]: time="2025-02-13T15:25:26.990971093Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nfs-server-provisioner-0,Uid:ccafe36c-ac2e-4750-bc65-40cb23b683bd,Namespace:default,Attempt:0,}" Feb 13 15:25:27.038466 systemd-networkd[1419]: lxc898cdd7bd0bc: Link UP Feb 13 15:25:27.047665 kernel: eth0: renamed from tmpde3e8 Feb 13 15:25:27.054384 systemd-networkd[1419]: lxc898cdd7bd0bc: Gained carrier Feb 13 15:25:27.340958 containerd[1497]: time="2025-02-13T15:25:27.340693595Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 13 15:25:27.341406 containerd[1497]: time="2025-02-13T15:25:27.341349871Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 13 15:25:27.341462 containerd[1497]: time="2025-02-13T15:25:27.341398984Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 15:25:27.341577 containerd[1497]: time="2025-02-13T15:25:27.341524743Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 15:25:27.365779 systemd[1]: Started cri-containerd-de3e88c5d02e103c33890203cb1e31454d6e6c116e968f245153f20bf6eef517.scope - libcontainer container de3e88c5d02e103c33890203cb1e31454d6e6c116e968f245153f20bf6eef517. Feb 13 15:25:27.381310 systemd-resolved[1335]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Feb 13 15:25:27.408538 containerd[1497]: time="2025-02-13T15:25:27.408490625Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nfs-server-provisioner-0,Uid:ccafe36c-ac2e-4750-bc65-40cb23b683bd,Namespace:default,Attempt:0,} returns sandbox id \"de3e88c5d02e103c33890203cb1e31454d6e6c116e968f245153f20bf6eef517\"" Feb 13 15:25:27.410386 containerd[1497]: time="2025-02-13T15:25:27.410340947Z" level=info msg="PullImage \"registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8\"" Feb 13 15:25:27.579616 kubelet[1831]: E0213 15:25:27.579549 1831 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 15:25:28.573102 systemd-networkd[1419]: lxc898cdd7bd0bc: Gained IPv6LL Feb 13 15:25:28.580477 kubelet[1831]: E0213 15:25:28.580417 1831 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 15:25:28.881850 update_engine[1488]: I20250213 15:25:28.880724 1488 update_attempter.cc:509] Updating boot flags... Feb 13 15:25:29.121695 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 37 scanned by (udev-worker) (3047) Feb 13 15:25:29.204683 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 37 scanned by (udev-worker) (3047) Feb 13 15:25:29.580777 kubelet[1831]: E0213 15:25:29.580615 1831 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 15:25:30.582080 kubelet[1831]: E0213 15:25:30.582037 1831 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 15:25:30.725062 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1386763239.mount: Deactivated successfully. 
Feb 13 15:25:31.582877 kubelet[1831]: E0213 15:25:31.582790 1831 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 15:25:32.587802 kubelet[1831]: E0213 15:25:32.587734 1831 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 15:25:33.588591 kubelet[1831]: E0213 15:25:33.588544 1831 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 15:25:34.589388 kubelet[1831]: E0213 15:25:34.589268 1831 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 15:25:35.589681 kubelet[1831]: E0213 15:25:35.589617 1831 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 15:25:36.589856 kubelet[1831]: E0213 15:25:36.589786 1831 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 15:25:37.032597 containerd[1497]: time="2025-02-13T15:25:37.032509498Z" level=info msg="ImageCreate event name:\"registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 15:25:37.033490 containerd[1497]: time="2025-02-13T15:25:37.033453240Z" level=info msg="stop pulling image registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8: active requests=0, bytes read=91039406" Feb 13 15:25:37.035229 containerd[1497]: time="2025-02-13T15:25:37.035175550Z" level=info msg="ImageCreate event name:\"sha256:fd0b16f70b66b72bcb2f91d556fa33eba02729c44ffc5f2c16130e7f9fbed3c4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 15:25:37.038100 containerd[1497]: time="2025-02-13T15:25:37.038000732Z" level=info msg="ImageCreate event name:\"registry.k8s.io/sig-storage/nfs-provisioner@sha256:c825f3d5e28bde099bd7a3daace28772d412c9157ad47fa752a9ad0baafc118d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 15:25:37.039176 containerd[1497]: time="2025-02-13T15:25:37.039136906Z" level=info msg="Pulled image \"registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8\" with image id \"sha256:fd0b16f70b66b72bcb2f91d556fa33eba02729c44ffc5f2c16130e7f9fbed3c4\", repo tag \"registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8\", repo digest \"registry.k8s.io/sig-storage/nfs-provisioner@sha256:c825f3d5e28bde099bd7a3daace28772d412c9157ad47fa752a9ad0baafc118d\", size \"91036984\" in 9.628764239s" Feb 13 15:25:37.039176 containerd[1497]: time="2025-02-13T15:25:37.039172734Z" level=info msg="PullImage \"registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8\" returns image reference \"sha256:fd0b16f70b66b72bcb2f91d556fa33eba02729c44ffc5f2c16130e7f9fbed3c4\"" Feb 13 15:25:37.042094 containerd[1497]: time="2025-02-13T15:25:37.042059361Z" level=info msg="CreateContainer within sandbox \"de3e88c5d02e103c33890203cb1e31454d6e6c116e968f245153f20bf6eef517\" for container &ContainerMetadata{Name:nfs-server-provisioner,Attempt:0,}" Feb 13 15:25:37.054413 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount536253768.mount: Deactivated successfully. 
Feb 13 15:25:37.062025 containerd[1497]: time="2025-02-13T15:25:37.061988342Z" level=info msg="CreateContainer within sandbox \"de3e88c5d02e103c33890203cb1e31454d6e6c116e968f245153f20bf6eef517\" for &ContainerMetadata{Name:nfs-server-provisioner,Attempt:0,} returns container id \"1bd334dedd57c4d2668e086e9fc45c255480b6e061f23435759fc46ee5b7544a\"" Feb 13 15:25:37.062693 containerd[1497]: time="2025-02-13T15:25:37.062644871Z" level=info msg="StartContainer for \"1bd334dedd57c4d2668e086e9fc45c255480b6e061f23435759fc46ee5b7544a\"" Feb 13 15:25:37.147923 systemd[1]: Started cri-containerd-1bd334dedd57c4d2668e086e9fc45c255480b6e061f23435759fc46ee5b7544a.scope - libcontainer container 1bd334dedd57c4d2668e086e9fc45c255480b6e061f23435759fc46ee5b7544a. Feb 13 15:25:37.184224 containerd[1497]: time="2025-02-13T15:25:37.184170269Z" level=info msg="StartContainer for \"1bd334dedd57c4d2668e086e9fc45c255480b6e061f23435759fc46ee5b7544a\" returns successfully" Feb 13 15:25:37.462749 kubelet[1831]: E0213 15:25:37.462713 1831 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 15:25:37.590470 kubelet[1831]: E0213 15:25:37.590378 1831 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 15:25:38.060575 kubelet[1831]: I0213 15:25:38.060491 1831 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="default/nfs-server-provisioner-0" podStartSLOduration=2.430335596 podStartE2EDuration="12.060471116s" podCreationTimestamp="2025-02-13 15:25:26 +0000 UTC" firstStartedPulling="2025-02-13 15:25:27.410095571 +0000 UTC m=+30.211591647" lastFinishedPulling="2025-02-13 15:25:37.040231101 +0000 UTC m=+39.841727167" observedRunningTime="2025-02-13 15:25:38.060082043 +0000 UTC m=+40.861578119" watchObservedRunningTime="2025-02-13 15:25:38.060471116 +0000 UTC m=+40.861967192" Feb 13 15:25:38.591301 kubelet[1831]: E0213 15:25:38.591248 1831 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 15:25:39.591863 kubelet[1831]: E0213 15:25:39.591806 1831 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 15:25:40.592539 kubelet[1831]: E0213 15:25:40.592448 1831 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 15:25:41.592781 kubelet[1831]: E0213 15:25:41.592587 1831 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 15:25:42.593460 kubelet[1831]: E0213 15:25:42.593403 1831 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 15:25:43.594503 kubelet[1831]: E0213 15:25:43.594417 1831 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 15:25:44.595464 kubelet[1831]: E0213 15:25:44.595379 1831 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 15:25:45.596259 kubelet[1831]: E0213 15:25:45.596202 1831 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 15:25:46.596562 kubelet[1831]: E0213 15:25:46.596489 1831 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" 
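The "Unable to read config path ... /etc/kubernetes/manifests" error repeated throughout is the kubelet's static-pod file source noticing that the manifest directory does not exist on this node and skipping it. A tiny standalone sketch of that check, not the kubelet's code:

```go
package main

import (
	"fmt"
	"os"
)

// checkManifestDir mirrors the kubelet file source's behaviour: a missing
// static-pod directory is not fatal, it is simply skipped with a warning.
func checkManifestDir(path string) {
	if _, err := os.Stat(path); os.IsNotExist(err) {
		fmt.Printf("Unable to read config path %q: path does not exist, ignoring\n", path)
		return
	}
	fmt.Printf("watching %q for static pod manifests\n", path)
}

func main() {
	checkManifestDir("/etc/kubernetes/manifests")
}
```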
Feb 13 15:25:47.049059 kubelet[1831]: I0213 15:25:47.048998 1831 topology_manager.go:215] "Topology Admit Handler" podUID="1f568a67-0bbc-464c-897a-d53f48f21088" podNamespace="default" podName="test-pod-1" Feb 13 15:25:47.055855 systemd[1]: Created slice kubepods-besteffort-pod1f568a67_0bbc_464c_897a_d53f48f21088.slice - libcontainer container kubepods-besteffort-pod1f568a67_0bbc_464c_897a_d53f48f21088.slice. Feb 13 15:25:47.170036 kubelet[1831]: I0213 15:25:47.169954 1831 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-9d6551ff-8f6d-42c9-91f7-8eb78251f415\" (UniqueName: \"kubernetes.io/nfs/1f568a67-0bbc-464c-897a-d53f48f21088-pvc-9d6551ff-8f6d-42c9-91f7-8eb78251f415\") pod \"test-pod-1\" (UID: \"1f568a67-0bbc-464c-897a-d53f48f21088\") " pod="default/test-pod-1" Feb 13 15:25:47.170036 kubelet[1831]: I0213 15:25:47.170020 1831 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bg7lk\" (UniqueName: \"kubernetes.io/projected/1f568a67-0bbc-464c-897a-d53f48f21088-kube-api-access-bg7lk\") pod \"test-pod-1\" (UID: \"1f568a67-0bbc-464c-897a-d53f48f21088\") " pod="default/test-pod-1" Feb 13 15:25:47.301673 kernel: FS-Cache: Loaded Feb 13 15:25:47.370907 kernel: RPC: Registered named UNIX socket transport module. Feb 13 15:25:47.371052 kernel: RPC: Registered udp transport module. Feb 13 15:25:47.371075 kernel: RPC: Registered tcp transport module. Feb 13 15:25:47.371107 kernel: RPC: Registered tcp-with-tls transport module. Feb 13 15:25:47.372377 kernel: RPC: Registered tcp NFSv4.1 backchannel transport module. Feb 13 15:25:47.597404 kubelet[1831]: E0213 15:25:47.597345 1831 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 15:25:47.660990 kernel: NFS: Registering the id_resolver key type Feb 13 15:25:47.661142 kernel: Key type id_resolver registered Feb 13 15:25:47.661169 kernel: Key type id_legacy registered Feb 13 15:25:47.688473 nfsidmap[3223]: nss_getpwnam: name 'root@nfs-server-provisioner.default.svc.cluster.local' does not map into domain 'localdomain' Feb 13 15:25:47.693645 nfsidmap[3226]: nss_name_to_gid: name 'root@nfs-server-provisioner.default.svc.cluster.local' does not map into domain 'localdomain' Feb 13 15:25:47.959243 containerd[1497]: time="2025-02-13T15:25:47.959197067Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:test-pod-1,Uid:1f568a67-0bbc-464c-897a-d53f48f21088,Namespace:default,Attempt:0,}" Feb 13 15:25:47.993199 systemd-networkd[1419]: lxc433dcba3a1ca: Link UP Feb 13 15:25:48.000663 kernel: eth0: renamed from tmp59a31 Feb 13 15:25:48.014666 systemd-networkd[1419]: lxc433dcba3a1ca: Gained carrier Feb 13 15:25:48.308192 containerd[1497]: time="2025-02-13T15:25:48.308030770Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 13 15:25:48.308192 containerd[1497]: time="2025-02-13T15:25:48.308082928Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 13 15:25:48.308192 containerd[1497]: time="2025-02-13T15:25:48.308093007Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 15:25:48.308428 containerd[1497]: time="2025-02-13T15:25:48.308158631Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 15:25:48.332770 systemd[1]: Started cri-containerd-59a3188ccfb4702c8e5491e2bf0c9e2269371342f45396a42c43b1bd76018c0c.scope - libcontainer container 59a3188ccfb4702c8e5491e2bf0c9e2269371342f45396a42c43b1bd76018c0c. Feb 13 15:25:48.343464 systemd-resolved[1335]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Feb 13 15:25:48.366119 containerd[1497]: time="2025-02-13T15:25:48.366083850Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:test-pod-1,Uid:1f568a67-0bbc-464c-897a-d53f48f21088,Namespace:default,Attempt:0,} returns sandbox id \"59a3188ccfb4702c8e5491e2bf0c9e2269371342f45396a42c43b1bd76018c0c\"" Feb 13 15:25:48.367544 containerd[1497]: time="2025-02-13T15:25:48.367526303Z" level=info msg="PullImage \"ghcr.io/flatcar/nginx:latest\"" Feb 13 15:25:48.597681 kubelet[1831]: E0213 15:25:48.597510 1831 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 15:25:48.942894 containerd[1497]: time="2025-02-13T15:25:48.942697695Z" level=info msg="ImageUpdate event name:\"ghcr.io/flatcar/nginx:latest\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 15:25:48.943515 containerd[1497]: time="2025-02-13T15:25:48.943424532Z" level=info msg="stop pulling image ghcr.io/flatcar/nginx:latest: active requests=0, bytes read=61" Feb 13 15:25:48.946362 containerd[1497]: time="2025-02-13T15:25:48.946313647Z" level=info msg="Pulled image \"ghcr.io/flatcar/nginx:latest\" with image id \"sha256:fe94eb5f0c9c8d0ca277aa8cd5940f1faf5970175bf373932babc578545deda8\", repo tag \"ghcr.io/flatcar/nginx:latest\", repo digest \"ghcr.io/flatcar/nginx@sha256:d9bc3da999da9f147f1277c7b18292486847e8f39f95fcf81d914d0c22815faf\", size \"73054371\" in 578.761516ms" Feb 13 15:25:48.946362 containerd[1497]: time="2025-02-13T15:25:48.946355596Z" level=info msg="PullImage \"ghcr.io/flatcar/nginx:latest\" returns image reference \"sha256:fe94eb5f0c9c8d0ca277aa8cd5940f1faf5970175bf373932babc578545deda8\"" Feb 13 15:25:48.948557 containerd[1497]: time="2025-02-13T15:25:48.948504469Z" level=info msg="CreateContainer within sandbox \"59a3188ccfb4702c8e5491e2bf0c9e2269371342f45396a42c43b1bd76018c0c\" for container &ContainerMetadata{Name:test,Attempt:0,}" Feb 13 15:25:48.965007 containerd[1497]: time="2025-02-13T15:25:48.964959303Z" level=info msg="CreateContainer within sandbox \"59a3188ccfb4702c8e5491e2bf0c9e2269371342f45396a42c43b1bd76018c0c\" for &ContainerMetadata{Name:test,Attempt:0,} returns container id \"27c1774b0a289128945aae8c5af61a422e9a88989786080efea51d7ec09b86dc\"" Feb 13 15:25:48.965704 containerd[1497]: time="2025-02-13T15:25:48.965673056Z" level=info msg="StartContainer for \"27c1774b0a289128945aae8c5af61a422e9a88989786080efea51d7ec09b86dc\"" Feb 13 15:25:48.997766 systemd[1]: Started cri-containerd-27c1774b0a289128945aae8c5af61a422e9a88989786080efea51d7ec09b86dc.scope - libcontainer container 27c1774b0a289128945aae8c5af61a422e9a88989786080efea51d7ec09b86dc. 
Feb 13 15:25:49.046066 containerd[1497]: time="2025-02-13T15:25:49.046010631Z" level=info msg="StartContainer for \"27c1774b0a289128945aae8c5af61a422e9a88989786080efea51d7ec09b86dc\" returns successfully" Feb 13 15:25:49.066531 kubelet[1831]: I0213 15:25:49.066460 1831 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="default/test-pod-1" podStartSLOduration=22.48658387 podStartE2EDuration="23.066438161s" podCreationTimestamp="2025-02-13 15:25:26 +0000 UTC" firstStartedPulling="2025-02-13 15:25:48.367321508 +0000 UTC m=+51.168817584" lastFinishedPulling="2025-02-13 15:25:48.947175799 +0000 UTC m=+51.748671875" observedRunningTime="2025-02-13 15:25:49.066206906 +0000 UTC m=+51.867702992" watchObservedRunningTime="2025-02-13 15:25:49.066438161 +0000 UTC m=+51.867934227" Feb 13 15:25:49.282442 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1860493949.mount: Deactivated successfully. Feb 13 15:25:49.597824 kubelet[1831]: E0213 15:25:49.597688 1831 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 15:25:49.687898 systemd-networkd[1419]: lxc433dcba3a1ca: Gained IPv6LL Feb 13 15:25:50.598179 kubelet[1831]: E0213 15:25:50.598118 1831 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 15:25:51.598575 kubelet[1831]: E0213 15:25:51.598513 1831 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 15:25:52.598996 kubelet[1831]: E0213 15:25:52.598920 1831 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 15:25:53.599585 kubelet[1831]: E0213 15:25:53.599503 1831 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 15:25:54.600665 kubelet[1831]: E0213 15:25:54.600590 1831 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 15:25:54.969876 containerd[1497]: time="2025-02-13T15:25:54.969823556Z" level=error msg="failed to reload cni configuration after receiving fs change event(REMOVE \"/etc/cni/net.d/05-cilium.conf\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Feb 13 15:25:54.977739 containerd[1497]: time="2025-02-13T15:25:54.977698245Z" level=info msg="StopContainer for \"bd39cc69d66a9b33731df2ffa81264ecd39ace3ad255dde54e1e2fa99e070d06\" with timeout 2 (s)" Feb 13 15:25:54.978016 containerd[1497]: time="2025-02-13T15:25:54.977974394Z" level=info msg="Stop container \"bd39cc69d66a9b33731df2ffa81264ecd39ace3ad255dde54e1e2fa99e070d06\" with signal terminated" Feb 13 15:25:54.985281 systemd-networkd[1419]: lxc_health: Link DOWN Feb 13 15:25:54.985289 systemd-networkd[1419]: lxc_health: Lost carrier Feb 13 15:25:55.012986 systemd[1]: cri-containerd-bd39cc69d66a9b33731df2ffa81264ecd39ace3ad255dde54e1e2fa99e070d06.scope: Deactivated successfully. Feb 13 15:25:55.013286 systemd[1]: cri-containerd-bd39cc69d66a9b33731df2ffa81264ecd39ace3ad255dde54e1e2fa99e070d06.scope: Consumed 7.221s CPU time. Feb 13 15:25:55.031797 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-bd39cc69d66a9b33731df2ffa81264ecd39ace3ad255dde54e1e2fa99e070d06-rootfs.mount: Deactivated successfully. 
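The pod_startup_latency_tracker entry logged at 15:25:49.066 above reports podStartSLOduration=22.48658387 and podStartE2EDuration=23.066438161s for default/test-pod-1. Those figures are consistent with the end-to-end time (watchObservedRunningTime minus podCreationTimestamp) less the image-pull window (lastFinishedPulling minus firstStartedPulling); the quick check below only re-derives the logged values and is an inference from them, not a statement of kubelet's exact bookkeeping.

    # Timestamps quoted in the entry, reduced to seconds past 15:25:00 UTC for simplicity.
    created       = 26.0           # podCreationTimestamp   2025-02-13 15:25:26
    pull_started  = 48.367321508   # firstStartedPulling
    pull_finished = 48.947175799   # lastFinishedPulling
    running_seen  = 49.066438161   # watchObservedRunningTime

    e2e  = running_seen - created        # end-to-end startup
    pull = pull_finished - pull_started  # time spent pulling ghcr.io/flatcar/nginx:latest
    slo  = e2e - pull                    # startup time excluding the pull

    print(f"E2E  = {e2e:.9f}s")    # 23.066438161  -> podStartE2EDuration
    print(f"pull = {pull:.9f}s")   #  0.579854291  (containerd itself reported 578.761516ms)
    print(f"SLO  = {slo:.9f}s")    # 22.486583870  -> podStartSLOduration
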
Feb 13 15:25:55.143832 containerd[1497]: time="2025-02-13T15:25:55.143761035Z" level=info msg="shim disconnected" id=bd39cc69d66a9b33731df2ffa81264ecd39ace3ad255dde54e1e2fa99e070d06 namespace=k8s.io Feb 13 15:25:55.143832 containerd[1497]: time="2025-02-13T15:25:55.143827219Z" level=warning msg="cleaning up after shim disconnected" id=bd39cc69d66a9b33731df2ffa81264ecd39ace3ad255dde54e1e2fa99e070d06 namespace=k8s.io Feb 13 15:25:55.143832 containerd[1497]: time="2025-02-13T15:25:55.143836066Z" level=info msg="cleaning up dead shim" namespace=k8s.io Feb 13 15:25:55.165050 containerd[1497]: time="2025-02-13T15:25:55.164992250Z" level=info msg="StopContainer for \"bd39cc69d66a9b33731df2ffa81264ecd39ace3ad255dde54e1e2fa99e070d06\" returns successfully" Feb 13 15:25:55.165733 containerd[1497]: time="2025-02-13T15:25:55.165707324Z" level=info msg="StopPodSandbox for \"666d12fea703e48fbe506ede37b5492267e008c6d44762e93783e0f2e52b0395\"" Feb 13 15:25:55.165815 containerd[1497]: time="2025-02-13T15:25:55.165760373Z" level=info msg="Container to stop \"55f56551ac0c5e4d1a13fed3c18a9bdf18bc2461ab0297a6aeaa6458de951c85\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Feb 13 15:25:55.165858 containerd[1497]: time="2025-02-13T15:25:55.165816269Z" level=info msg="Container to stop \"bd39cc69d66a9b33731df2ffa81264ecd39ace3ad255dde54e1e2fa99e070d06\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Feb 13 15:25:55.165858 containerd[1497]: time="2025-02-13T15:25:55.165833341Z" level=info msg="Container to stop \"3da6bef8647538b427de99706f58fd280b7f6286e92341529a7e5afc74194062\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Feb 13 15:25:55.165858 containerd[1497]: time="2025-02-13T15:25:55.165846436Z" level=info msg="Container to stop \"3752fe640f247c6b8d8fb8db6c188697c1a1c2aac2212ac4cb8493ade2982b1b\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Feb 13 15:25:55.166068 containerd[1497]: time="2025-02-13T15:25:55.165859500Z" level=info msg="Container to stop \"4b998104ff4fcbcc4163d6af0d5472aa226030f691362c846e5b6401c661c833\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Feb 13 15:25:55.168122 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-666d12fea703e48fbe506ede37b5492267e008c6d44762e93783e0f2e52b0395-shm.mount: Deactivated successfully. Feb 13 15:25:55.173034 systemd[1]: cri-containerd-666d12fea703e48fbe506ede37b5492267e008c6d44762e93783e0f2e52b0395.scope: Deactivated successfully. Feb 13 15:25:55.192600 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-666d12fea703e48fbe506ede37b5492267e008c6d44762e93783e0f2e52b0395-rootfs.mount: Deactivated successfully. 
Feb 13 15:25:55.197296 containerd[1497]: time="2025-02-13T15:25:55.197212945Z" level=info msg="shim disconnected" id=666d12fea703e48fbe506ede37b5492267e008c6d44762e93783e0f2e52b0395 namespace=k8s.io Feb 13 15:25:55.197296 containerd[1497]: time="2025-02-13T15:25:55.197285322Z" level=warning msg="cleaning up after shim disconnected" id=666d12fea703e48fbe506ede37b5492267e008c6d44762e93783e0f2e52b0395 namespace=k8s.io Feb 13 15:25:55.197296 containerd[1497]: time="2025-02-13T15:25:55.197294309Z" level=info msg="cleaning up dead shim" namespace=k8s.io Feb 13 15:25:55.210544 containerd[1497]: time="2025-02-13T15:25:55.210483682Z" level=info msg="TearDown network for sandbox \"666d12fea703e48fbe506ede37b5492267e008c6d44762e93783e0f2e52b0395\" successfully" Feb 13 15:25:55.210544 containerd[1497]: time="2025-02-13T15:25:55.210533365Z" level=info msg="StopPodSandbox for \"666d12fea703e48fbe506ede37b5492267e008c6d44762e93783e0f2e52b0395\" returns successfully" Feb 13 15:25:55.320519 kubelet[1831]: I0213 15:25:55.320350 1831 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/46e717eb-f65e-4caa-9018-ab5aeda7cf31-host-proc-sys-kernel\") pod \"46e717eb-f65e-4caa-9018-ab5aeda7cf31\" (UID: \"46e717eb-f65e-4caa-9018-ab5aeda7cf31\") " Feb 13 15:25:55.320519 kubelet[1831]: I0213 15:25:55.320413 1831 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/46e717eb-f65e-4caa-9018-ab5aeda7cf31-cilium-run\") pod \"46e717eb-f65e-4caa-9018-ab5aeda7cf31\" (UID: \"46e717eb-f65e-4caa-9018-ab5aeda7cf31\") " Feb 13 15:25:55.320519 kubelet[1831]: I0213 15:25:55.320454 1831 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/46e717eb-f65e-4caa-9018-ab5aeda7cf31-clustermesh-secrets\") pod \"46e717eb-f65e-4caa-9018-ab5aeda7cf31\" (UID: \"46e717eb-f65e-4caa-9018-ab5aeda7cf31\") " Feb 13 15:25:55.320519 kubelet[1831]: I0213 15:25:55.320482 1831 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"kube-api-access-t47n7\" (UniqueName: \"kubernetes.io/projected/46e717eb-f65e-4caa-9018-ab5aeda7cf31-kube-api-access-t47n7\") pod \"46e717eb-f65e-4caa-9018-ab5aeda7cf31\" (UID: \"46e717eb-f65e-4caa-9018-ab5aeda7cf31\") " Feb 13 15:25:55.320519 kubelet[1831]: I0213 15:25:55.320502 1831 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/46e717eb-f65e-4caa-9018-ab5aeda7cf31-hostproc\") pod \"46e717eb-f65e-4caa-9018-ab5aeda7cf31\" (UID: \"46e717eb-f65e-4caa-9018-ab5aeda7cf31\") " Feb 13 15:25:55.320519 kubelet[1831]: I0213 15:25:55.320521 1831 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/46e717eb-f65e-4caa-9018-ab5aeda7cf31-xtables-lock\") pod \"46e717eb-f65e-4caa-9018-ab5aeda7cf31\" (UID: \"46e717eb-f65e-4caa-9018-ab5aeda7cf31\") " Feb 13 15:25:55.320893 kubelet[1831]: I0213 15:25:55.320542 1831 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/46e717eb-f65e-4caa-9018-ab5aeda7cf31-lib-modules\") pod \"46e717eb-f65e-4caa-9018-ab5aeda7cf31\" (UID: \"46e717eb-f65e-4caa-9018-ab5aeda7cf31\") " Feb 13 15:25:55.320893 kubelet[1831]: I0213 15:25:55.320526 1831 operation_generator.go:887] UnmountVolume.TearDown succeeded 
for volume "kubernetes.io/host-path/46e717eb-f65e-4caa-9018-ab5aeda7cf31-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "46e717eb-f65e-4caa-9018-ab5aeda7cf31" (UID: "46e717eb-f65e-4caa-9018-ab5aeda7cf31"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 13 15:25:55.320893 kubelet[1831]: I0213 15:25:55.320572 1831 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/46e717eb-f65e-4caa-9018-ab5aeda7cf31-hostproc" (OuterVolumeSpecName: "hostproc") pod "46e717eb-f65e-4caa-9018-ab5aeda7cf31" (UID: "46e717eb-f65e-4caa-9018-ab5aeda7cf31"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 13 15:25:55.320893 kubelet[1831]: I0213 15:25:55.320566 1831 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/46e717eb-f65e-4caa-9018-ab5aeda7cf31-cilium-config-path\") pod \"46e717eb-f65e-4caa-9018-ab5aeda7cf31\" (UID: \"46e717eb-f65e-4caa-9018-ab5aeda7cf31\") " Feb 13 15:25:55.320893 kubelet[1831]: I0213 15:25:55.320679 1831 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/46e717eb-f65e-4caa-9018-ab5aeda7cf31-hubble-tls\") pod \"46e717eb-f65e-4caa-9018-ab5aeda7cf31\" (UID: \"46e717eb-f65e-4caa-9018-ab5aeda7cf31\") " Feb 13 15:25:55.320893 kubelet[1831]: I0213 15:25:55.320705 1831 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/46e717eb-f65e-4caa-9018-ab5aeda7cf31-etc-cni-netd\") pod \"46e717eb-f65e-4caa-9018-ab5aeda7cf31\" (UID: \"46e717eb-f65e-4caa-9018-ab5aeda7cf31\") " Feb 13 15:25:55.321085 kubelet[1831]: I0213 15:25:55.320729 1831 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/46e717eb-f65e-4caa-9018-ab5aeda7cf31-bpf-maps\") pod \"46e717eb-f65e-4caa-9018-ab5aeda7cf31\" (UID: \"46e717eb-f65e-4caa-9018-ab5aeda7cf31\") " Feb 13 15:25:55.321085 kubelet[1831]: I0213 15:25:55.320750 1831 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/46e717eb-f65e-4caa-9018-ab5aeda7cf31-cni-path\") pod \"46e717eb-f65e-4caa-9018-ab5aeda7cf31\" (UID: \"46e717eb-f65e-4caa-9018-ab5aeda7cf31\") " Feb 13 15:25:55.321085 kubelet[1831]: I0213 15:25:55.320769 1831 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/46e717eb-f65e-4caa-9018-ab5aeda7cf31-host-proc-sys-net\") pod \"46e717eb-f65e-4caa-9018-ab5aeda7cf31\" (UID: \"46e717eb-f65e-4caa-9018-ab5aeda7cf31\") " Feb 13 15:25:55.321085 kubelet[1831]: I0213 15:25:55.320788 1831 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/46e717eb-f65e-4caa-9018-ab5aeda7cf31-cilium-cgroup\") pod \"46e717eb-f65e-4caa-9018-ab5aeda7cf31\" (UID: \"46e717eb-f65e-4caa-9018-ab5aeda7cf31\") " Feb 13 15:25:55.321085 kubelet[1831]: I0213 15:25:55.320839 1831 reconciler_common.go:289] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/46e717eb-f65e-4caa-9018-ab5aeda7cf31-cilium-run\") on node \"10.0.0.50\" DevicePath \"\"" Feb 13 15:25:55.321085 kubelet[1831]: I0213 15:25:55.320855 1831 reconciler_common.go:289] "Volume detached for volume \"hostproc\" 
(UniqueName: \"kubernetes.io/host-path/46e717eb-f65e-4caa-9018-ab5aeda7cf31-hostproc\") on node \"10.0.0.50\" DevicePath \"\"" Feb 13 15:25:55.321280 kubelet[1831]: I0213 15:25:55.320882 1831 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/46e717eb-f65e-4caa-9018-ab5aeda7cf31-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "46e717eb-f65e-4caa-9018-ab5aeda7cf31" (UID: "46e717eb-f65e-4caa-9018-ab5aeda7cf31"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 13 15:25:55.321280 kubelet[1831]: I0213 15:25:55.320526 1831 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/46e717eb-f65e-4caa-9018-ab5aeda7cf31-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "46e717eb-f65e-4caa-9018-ab5aeda7cf31" (UID: "46e717eb-f65e-4caa-9018-ab5aeda7cf31"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 13 15:25:55.325387 kubelet[1831]: I0213 15:25:55.324819 1831 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/46e717eb-f65e-4caa-9018-ab5aeda7cf31-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "46e717eb-f65e-4caa-9018-ab5aeda7cf31" (UID: "46e717eb-f65e-4caa-9018-ab5aeda7cf31"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 13 15:25:55.325387 kubelet[1831]: I0213 15:25:55.324877 1831 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/46e717eb-f65e-4caa-9018-ab5aeda7cf31-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "46e717eb-f65e-4caa-9018-ab5aeda7cf31" (UID: "46e717eb-f65e-4caa-9018-ab5aeda7cf31"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 13 15:25:55.325387 kubelet[1831]: I0213 15:25:55.324896 1831 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/46e717eb-f65e-4caa-9018-ab5aeda7cf31-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "46e717eb-f65e-4caa-9018-ab5aeda7cf31" (UID: "46e717eb-f65e-4caa-9018-ab5aeda7cf31"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 13 15:25:55.325387 kubelet[1831]: I0213 15:25:55.324899 1831 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/46e717eb-f65e-4caa-9018-ab5aeda7cf31-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "46e717eb-f65e-4caa-9018-ab5aeda7cf31" (UID: "46e717eb-f65e-4caa-9018-ab5aeda7cf31"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 13 15:25:55.325387 kubelet[1831]: I0213 15:25:55.324926 1831 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/46e717eb-f65e-4caa-9018-ab5aeda7cf31-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "46e717eb-f65e-4caa-9018-ab5aeda7cf31" (UID: "46e717eb-f65e-4caa-9018-ab5aeda7cf31"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 13 15:25:55.325590 kubelet[1831]: I0213 15:25:55.324910 1831 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/46e717eb-f65e-4caa-9018-ab5aeda7cf31-cni-path" (OuterVolumeSpecName: "cni-path") pod "46e717eb-f65e-4caa-9018-ab5aeda7cf31" (UID: "46e717eb-f65e-4caa-9018-ab5aeda7cf31"). 
InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 13 15:25:55.325590 kubelet[1831]: I0213 15:25:55.324961 1831 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/46e717eb-f65e-4caa-9018-ab5aeda7cf31-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "46e717eb-f65e-4caa-9018-ab5aeda7cf31" (UID: "46e717eb-f65e-4caa-9018-ab5aeda7cf31"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 13 15:25:55.325440 systemd[1]: var-lib-kubelet-pods-46e717eb\x2df65e\x2d4caa\x2d9018\x2dab5aeda7cf31-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. Feb 13 15:25:55.325772 kubelet[1831]: I0213 15:25:55.325694 1831 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/46e717eb-f65e-4caa-9018-ab5aeda7cf31-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "46e717eb-f65e-4caa-9018-ab5aeda7cf31" (UID: "46e717eb-f65e-4caa-9018-ab5aeda7cf31"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 13 15:25:55.326426 kubelet[1831]: I0213 15:25:55.326398 1831 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/46e717eb-f65e-4caa-9018-ab5aeda7cf31-kube-api-access-t47n7" (OuterVolumeSpecName: "kube-api-access-t47n7") pod "46e717eb-f65e-4caa-9018-ab5aeda7cf31" (UID: "46e717eb-f65e-4caa-9018-ab5aeda7cf31"). InnerVolumeSpecName "kube-api-access-t47n7". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 13 15:25:55.326490 kubelet[1831]: I0213 15:25:55.326445 1831 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/46e717eb-f65e-4caa-9018-ab5aeda7cf31-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "46e717eb-f65e-4caa-9018-ab5aeda7cf31" (UID: "46e717eb-f65e-4caa-9018-ab5aeda7cf31"). InnerVolumeSpecName "hubble-tls". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 13 15:25:55.421730 kubelet[1831]: I0213 15:25:55.421682 1831 reconciler_common.go:289] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/46e717eb-f65e-4caa-9018-ab5aeda7cf31-cilium-config-path\") on node \"10.0.0.50\" DevicePath \"\"" Feb 13 15:25:55.421730 kubelet[1831]: I0213 15:25:55.421718 1831 reconciler_common.go:289] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/46e717eb-f65e-4caa-9018-ab5aeda7cf31-hubble-tls\") on node \"10.0.0.50\" DevicePath \"\"" Feb 13 15:25:55.421730 kubelet[1831]: I0213 15:25:55.421731 1831 reconciler_common.go:289] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/46e717eb-f65e-4caa-9018-ab5aeda7cf31-etc-cni-netd\") on node \"10.0.0.50\" DevicePath \"\"" Feb 13 15:25:55.421730 kubelet[1831]: I0213 15:25:55.421741 1831 reconciler_common.go:289] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/46e717eb-f65e-4caa-9018-ab5aeda7cf31-bpf-maps\") on node \"10.0.0.50\" DevicePath \"\"" Feb 13 15:25:55.421945 kubelet[1831]: I0213 15:25:55.421751 1831 reconciler_common.go:289] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/46e717eb-f65e-4caa-9018-ab5aeda7cf31-cni-path\") on node \"10.0.0.50\" DevicePath \"\"" Feb 13 15:25:55.421945 kubelet[1831]: I0213 15:25:55.421761 1831 reconciler_common.go:289] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/46e717eb-f65e-4caa-9018-ab5aeda7cf31-host-proc-sys-net\") on node \"10.0.0.50\" DevicePath \"\"" Feb 13 15:25:55.421945 kubelet[1831]: I0213 15:25:55.421770 1831 reconciler_common.go:289] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/46e717eb-f65e-4caa-9018-ab5aeda7cf31-cilium-cgroup\") on node \"10.0.0.50\" DevicePath \"\"" Feb 13 15:25:55.421945 kubelet[1831]: I0213 15:25:55.421780 1831 reconciler_common.go:289] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/46e717eb-f65e-4caa-9018-ab5aeda7cf31-host-proc-sys-kernel\") on node \"10.0.0.50\" DevicePath \"\"" Feb 13 15:25:55.421945 kubelet[1831]: I0213 15:25:55.421790 1831 reconciler_common.go:289] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/46e717eb-f65e-4caa-9018-ab5aeda7cf31-clustermesh-secrets\") on node \"10.0.0.50\" DevicePath \"\"" Feb 13 15:25:55.421945 kubelet[1831]: I0213 15:25:55.421799 1831 reconciler_common.go:289] "Volume detached for volume \"kube-api-access-t47n7\" (UniqueName: \"kubernetes.io/projected/46e717eb-f65e-4caa-9018-ab5aeda7cf31-kube-api-access-t47n7\") on node \"10.0.0.50\" DevicePath \"\"" Feb 13 15:25:55.421945 kubelet[1831]: I0213 15:25:55.421811 1831 reconciler_common.go:289] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/46e717eb-f65e-4caa-9018-ab5aeda7cf31-xtables-lock\") on node \"10.0.0.50\" DevicePath \"\"" Feb 13 15:25:55.421945 kubelet[1831]: I0213 15:25:55.421820 1831 reconciler_common.go:289] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/46e717eb-f65e-4caa-9018-ab5aeda7cf31-lib-modules\") on node \"10.0.0.50\" DevicePath \"\"" Feb 13 15:25:55.601090 kubelet[1831]: E0213 15:25:55.600928 1831 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 15:25:55.837967 systemd[1]: Removed slice 
kubepods-burstable-pod46e717eb_f65e_4caa_9018_ab5aeda7cf31.slice - libcontainer container kubepods-burstable-pod46e717eb_f65e_4caa_9018_ab5aeda7cf31.slice. Feb 13 15:25:55.838070 systemd[1]: kubepods-burstable-pod46e717eb_f65e_4caa_9018_ab5aeda7cf31.slice: Consumed 7.335s CPU time. Feb 13 15:25:55.957176 systemd[1]: var-lib-kubelet-pods-46e717eb\x2df65e\x2d4caa\x2d9018\x2dab5aeda7cf31-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dt47n7.mount: Deactivated successfully. Feb 13 15:25:55.957291 systemd[1]: var-lib-kubelet-pods-46e717eb\x2df65e\x2d4caa\x2d9018\x2dab5aeda7cf31-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. Feb 13 15:25:56.072498 kubelet[1831]: I0213 15:25:56.072458 1831 scope.go:117] "RemoveContainer" containerID="bd39cc69d66a9b33731df2ffa81264ecd39ace3ad255dde54e1e2fa99e070d06" Feb 13 15:25:56.073628 containerd[1497]: time="2025-02-13T15:25:56.073582159Z" level=info msg="RemoveContainer for \"bd39cc69d66a9b33731df2ffa81264ecd39ace3ad255dde54e1e2fa99e070d06\"" Feb 13 15:25:56.077407 containerd[1497]: time="2025-02-13T15:25:56.077381617Z" level=info msg="RemoveContainer for \"bd39cc69d66a9b33731df2ffa81264ecd39ace3ad255dde54e1e2fa99e070d06\" returns successfully" Feb 13 15:25:56.077626 kubelet[1831]: I0213 15:25:56.077608 1831 scope.go:117] "RemoveContainer" containerID="4b998104ff4fcbcc4163d6af0d5472aa226030f691362c846e5b6401c661c833" Feb 13 15:25:56.078812 containerd[1497]: time="2025-02-13T15:25:56.078783511Z" level=info msg="RemoveContainer for \"4b998104ff4fcbcc4163d6af0d5472aa226030f691362c846e5b6401c661c833\"" Feb 13 15:25:56.082128 containerd[1497]: time="2025-02-13T15:25:56.082095293Z" level=info msg="RemoveContainer for \"4b998104ff4fcbcc4163d6af0d5472aa226030f691362c846e5b6401c661c833\" returns successfully" Feb 13 15:25:56.082256 kubelet[1831]: I0213 15:25:56.082239 1831 scope.go:117] "RemoveContainer" containerID="55f56551ac0c5e4d1a13fed3c18a9bdf18bc2461ab0297a6aeaa6458de951c85" Feb 13 15:25:56.083288 containerd[1497]: time="2025-02-13T15:25:56.083269038Z" level=info msg="RemoveContainer for \"55f56551ac0c5e4d1a13fed3c18a9bdf18bc2461ab0297a6aeaa6458de951c85\"" Feb 13 15:25:56.086237 containerd[1497]: time="2025-02-13T15:25:56.086205385Z" level=info msg="RemoveContainer for \"55f56551ac0c5e4d1a13fed3c18a9bdf18bc2461ab0297a6aeaa6458de951c85\" returns successfully" Feb 13 15:25:56.086451 kubelet[1831]: I0213 15:25:56.086428 1831 scope.go:117] "RemoveContainer" containerID="3752fe640f247c6b8d8fb8db6c188697c1a1c2aac2212ac4cb8493ade2982b1b" Feb 13 15:25:56.087421 containerd[1497]: time="2025-02-13T15:25:56.087396974Z" level=info msg="RemoveContainer for \"3752fe640f247c6b8d8fb8db6c188697c1a1c2aac2212ac4cb8493ade2982b1b\"" Feb 13 15:25:56.091070 containerd[1497]: time="2025-02-13T15:25:56.091015613Z" level=info msg="RemoveContainer for \"3752fe640f247c6b8d8fb8db6c188697c1a1c2aac2212ac4cb8493ade2982b1b\" returns successfully" Feb 13 15:25:56.091280 kubelet[1831]: I0213 15:25:56.091248 1831 scope.go:117] "RemoveContainer" containerID="3da6bef8647538b427de99706f58fd280b7f6286e92341529a7e5afc74194062" Feb 13 15:25:56.092440 containerd[1497]: time="2025-02-13T15:25:56.092416836Z" level=info msg="RemoveContainer for \"3da6bef8647538b427de99706f58fd280b7f6286e92341529a7e5afc74194062\"" Feb 13 15:25:56.095964 containerd[1497]: time="2025-02-13T15:25:56.095932621Z" level=info msg="RemoveContainer for \"3da6bef8647538b427de99706f58fd280b7f6286e92341529a7e5afc74194062\" returns successfully" Feb 13 15:25:56.096155 kubelet[1831]: I0213 
15:25:56.096087 1831 scope.go:117] "RemoveContainer" containerID="bd39cc69d66a9b33731df2ffa81264ecd39ace3ad255dde54e1e2fa99e070d06" Feb 13 15:25:56.096383 containerd[1497]: time="2025-02-13T15:25:56.096332061Z" level=error msg="ContainerStatus for \"bd39cc69d66a9b33731df2ffa81264ecd39ace3ad255dde54e1e2fa99e070d06\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"bd39cc69d66a9b33731df2ffa81264ecd39ace3ad255dde54e1e2fa99e070d06\": not found" Feb 13 15:25:56.096542 kubelet[1831]: E0213 15:25:56.096499 1831 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"bd39cc69d66a9b33731df2ffa81264ecd39ace3ad255dde54e1e2fa99e070d06\": not found" containerID="bd39cc69d66a9b33731df2ffa81264ecd39ace3ad255dde54e1e2fa99e070d06" Feb 13 15:25:56.096672 kubelet[1831]: I0213 15:25:56.096539 1831 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"bd39cc69d66a9b33731df2ffa81264ecd39ace3ad255dde54e1e2fa99e070d06"} err="failed to get container status \"bd39cc69d66a9b33731df2ffa81264ecd39ace3ad255dde54e1e2fa99e070d06\": rpc error: code = NotFound desc = an error occurred when try to find container \"bd39cc69d66a9b33731df2ffa81264ecd39ace3ad255dde54e1e2fa99e070d06\": not found" Feb 13 15:25:56.096672 kubelet[1831]: I0213 15:25:56.096659 1831 scope.go:117] "RemoveContainer" containerID="4b998104ff4fcbcc4163d6af0d5472aa226030f691362c846e5b6401c661c833" Feb 13 15:25:56.096888 containerd[1497]: time="2025-02-13T15:25:56.096834996Z" level=error msg="ContainerStatus for \"4b998104ff4fcbcc4163d6af0d5472aa226030f691362c846e5b6401c661c833\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"4b998104ff4fcbcc4163d6af0d5472aa226030f691362c846e5b6401c661c833\": not found" Feb 13 15:25:56.097042 kubelet[1831]: E0213 15:25:56.097004 1831 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"4b998104ff4fcbcc4163d6af0d5472aa226030f691362c846e5b6401c661c833\": not found" containerID="4b998104ff4fcbcc4163d6af0d5472aa226030f691362c846e5b6401c661c833" Feb 13 15:25:56.097146 kubelet[1831]: I0213 15:25:56.097042 1831 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"4b998104ff4fcbcc4163d6af0d5472aa226030f691362c846e5b6401c661c833"} err="failed to get container status \"4b998104ff4fcbcc4163d6af0d5472aa226030f691362c846e5b6401c661c833\": rpc error: code = NotFound desc = an error occurred when try to find container \"4b998104ff4fcbcc4163d6af0d5472aa226030f691362c846e5b6401c661c833\": not found" Feb 13 15:25:56.097146 kubelet[1831]: I0213 15:25:56.097078 1831 scope.go:117] "RemoveContainer" containerID="55f56551ac0c5e4d1a13fed3c18a9bdf18bc2461ab0297a6aeaa6458de951c85" Feb 13 15:25:56.097332 containerd[1497]: time="2025-02-13T15:25:56.097306272Z" level=error msg="ContainerStatus for \"55f56551ac0c5e4d1a13fed3c18a9bdf18bc2461ab0297a6aeaa6458de951c85\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"55f56551ac0c5e4d1a13fed3c18a9bdf18bc2461ab0297a6aeaa6458de951c85\": not found" Feb 13 15:25:56.097459 kubelet[1831]: E0213 15:25:56.097435 1831 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container 
\"55f56551ac0c5e4d1a13fed3c18a9bdf18bc2461ab0297a6aeaa6458de951c85\": not found" containerID="55f56551ac0c5e4d1a13fed3c18a9bdf18bc2461ab0297a6aeaa6458de951c85" Feb 13 15:25:56.097459 kubelet[1831]: I0213 15:25:56.097457 1831 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"55f56551ac0c5e4d1a13fed3c18a9bdf18bc2461ab0297a6aeaa6458de951c85"} err="failed to get container status \"55f56551ac0c5e4d1a13fed3c18a9bdf18bc2461ab0297a6aeaa6458de951c85\": rpc error: code = NotFound desc = an error occurred when try to find container \"55f56551ac0c5e4d1a13fed3c18a9bdf18bc2461ab0297a6aeaa6458de951c85\": not found" Feb 13 15:25:56.097532 kubelet[1831]: I0213 15:25:56.097472 1831 scope.go:117] "RemoveContainer" containerID="3752fe640f247c6b8d8fb8db6c188697c1a1c2aac2212ac4cb8493ade2982b1b" Feb 13 15:25:56.097721 containerd[1497]: time="2025-02-13T15:25:56.097685564Z" level=error msg="ContainerStatus for \"3752fe640f247c6b8d8fb8db6c188697c1a1c2aac2212ac4cb8493ade2982b1b\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"3752fe640f247c6b8d8fb8db6c188697c1a1c2aac2212ac4cb8493ade2982b1b\": not found" Feb 13 15:25:56.097838 kubelet[1831]: E0213 15:25:56.097818 1831 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"3752fe640f247c6b8d8fb8db6c188697c1a1c2aac2212ac4cb8493ade2982b1b\": not found" containerID="3752fe640f247c6b8d8fb8db6c188697c1a1c2aac2212ac4cb8493ade2982b1b" Feb 13 15:25:56.097868 kubelet[1831]: I0213 15:25:56.097836 1831 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"3752fe640f247c6b8d8fb8db6c188697c1a1c2aac2212ac4cb8493ade2982b1b"} err="failed to get container status \"3752fe640f247c6b8d8fb8db6c188697c1a1c2aac2212ac4cb8493ade2982b1b\": rpc error: code = NotFound desc = an error occurred when try to find container \"3752fe640f247c6b8d8fb8db6c188697c1a1c2aac2212ac4cb8493ade2982b1b\": not found" Feb 13 15:25:56.097868 kubelet[1831]: I0213 15:25:56.097849 1831 scope.go:117] "RemoveContainer" containerID="3da6bef8647538b427de99706f58fd280b7f6286e92341529a7e5afc74194062" Feb 13 15:25:56.098007 containerd[1497]: time="2025-02-13T15:25:56.097975319Z" level=error msg="ContainerStatus for \"3da6bef8647538b427de99706f58fd280b7f6286e92341529a7e5afc74194062\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"3da6bef8647538b427de99706f58fd280b7f6286e92341529a7e5afc74194062\": not found" Feb 13 15:25:56.098109 kubelet[1831]: E0213 15:25:56.098080 1831 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"3da6bef8647538b427de99706f58fd280b7f6286e92341529a7e5afc74194062\": not found" containerID="3da6bef8647538b427de99706f58fd280b7f6286e92341529a7e5afc74194062" Feb 13 15:25:56.098178 kubelet[1831]: I0213 15:25:56.098122 1831 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"3da6bef8647538b427de99706f58fd280b7f6286e92341529a7e5afc74194062"} err="failed to get container status \"3da6bef8647538b427de99706f58fd280b7f6286e92341529a7e5afc74194062\": rpc error: code = NotFound desc = an error occurred when try to find container \"3da6bef8647538b427de99706f58fd280b7f6286e92341529a7e5afc74194062\": not found" Feb 13 15:25:56.602139 kubelet[1831]: E0213 15:25:56.602062 1831 
file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 15:25:57.463374 kubelet[1831]: E0213 15:25:57.463310 1831 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 15:25:57.474854 containerd[1497]: time="2025-02-13T15:25:57.474813692Z" level=info msg="StopPodSandbox for \"666d12fea703e48fbe506ede37b5492267e008c6d44762e93783e0f2e52b0395\"" Feb 13 15:25:57.475239 containerd[1497]: time="2025-02-13T15:25:57.474918739Z" level=info msg="TearDown network for sandbox \"666d12fea703e48fbe506ede37b5492267e008c6d44762e93783e0f2e52b0395\" successfully" Feb 13 15:25:57.475239 containerd[1497]: time="2025-02-13T15:25:57.474935401Z" level=info msg="StopPodSandbox for \"666d12fea703e48fbe506ede37b5492267e008c6d44762e93783e0f2e52b0395\" returns successfully" Feb 13 15:25:57.475359 containerd[1497]: time="2025-02-13T15:25:57.475326035Z" level=info msg="RemovePodSandbox for \"666d12fea703e48fbe506ede37b5492267e008c6d44762e93783e0f2e52b0395\"" Feb 13 15:25:57.475390 containerd[1497]: time="2025-02-13T15:25:57.475368905Z" level=info msg="Forcibly stopping sandbox \"666d12fea703e48fbe506ede37b5492267e008c6d44762e93783e0f2e52b0395\"" Feb 13 15:25:57.475475 containerd[1497]: time="2025-02-13T15:25:57.475434498Z" level=info msg="TearDown network for sandbox \"666d12fea703e48fbe506ede37b5492267e008c6d44762e93783e0f2e52b0395\" successfully" Feb 13 15:25:57.479215 containerd[1497]: time="2025-02-13T15:25:57.479177279Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"666d12fea703e48fbe506ede37b5492267e008c6d44762e93783e0f2e52b0395\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Feb 13 15:25:57.479280 containerd[1497]: time="2025-02-13T15:25:57.479222495Z" level=info msg="RemovePodSandbox \"666d12fea703e48fbe506ede37b5492267e008c6d44762e93783e0f2e52b0395\" returns successfully" Feb 13 15:25:57.498590 kubelet[1831]: I0213 15:25:57.498551 1831 topology_manager.go:215] "Topology Admit Handler" podUID="c74297cd-eebb-4ef9-98c2-e37708e32c91" podNamespace="kube-system" podName="cilium-operator-599987898-mnkp4" Feb 13 15:25:57.498724 kubelet[1831]: E0213 15:25:57.498605 1831 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="46e717eb-f65e-4caa-9018-ab5aeda7cf31" containerName="apply-sysctl-overwrites" Feb 13 15:25:57.498724 kubelet[1831]: E0213 15:25:57.498614 1831 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="46e717eb-f65e-4caa-9018-ab5aeda7cf31" containerName="clean-cilium-state" Feb 13 15:25:57.498724 kubelet[1831]: E0213 15:25:57.498620 1831 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="46e717eb-f65e-4caa-9018-ab5aeda7cf31" containerName="mount-cgroup" Feb 13 15:25:57.498724 kubelet[1831]: E0213 15:25:57.498625 1831 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="46e717eb-f65e-4caa-9018-ab5aeda7cf31" containerName="cilium-agent" Feb 13 15:25:57.498724 kubelet[1831]: E0213 15:25:57.498651 1831 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="46e717eb-f65e-4caa-9018-ab5aeda7cf31" containerName="mount-bpf-fs" Feb 13 15:25:57.498724 kubelet[1831]: I0213 15:25:57.498686 1831 memory_manager.go:354] "RemoveStaleState removing state" podUID="46e717eb-f65e-4caa-9018-ab5aeda7cf31" containerName="cilium-agent" Feb 13 15:25:57.500745 kubelet[1831]: W0213 15:25:57.500724 1831 reflector.go:547] object-"kube-system"/"cilium-config": failed to list *v1.ConfigMap: configmaps "cilium-config" is forbidden: User "system:node:10.0.0.50" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node '10.0.0.50' and this object Feb 13 15:25:57.500789 kubelet[1831]: E0213 15:25:57.500753 1831 reflector.go:150] object-"kube-system"/"cilium-config": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "cilium-config" is forbidden: User "system:node:10.0.0.50" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node '10.0.0.50' and this object Feb 13 15:25:57.504551 systemd[1]: Created slice kubepods-besteffort-podc74297cd_eebb_4ef9_98c2_e37708e32c91.slice - libcontainer container kubepods-besteffort-podc74297cd_eebb_4ef9_98c2_e37708e32c91.slice. Feb 13 15:25:57.513770 kubelet[1831]: I0213 15:25:57.513726 1831 topology_manager.go:215] "Topology Admit Handler" podUID="866bf28f-f169-4154-9464-d36a576f6d6a" podNamespace="kube-system" podName="cilium-x7k8c" Feb 13 15:25:57.520065 systemd[1]: Created slice kubepods-burstable-pod866bf28f_f169_4154_9464_d36a576f6d6a.slice - libcontainer container kubepods-burstable-pod866bf28f_f169_4154_9464_d36a576f6d6a.slice. 
Feb 13 15:25:57.602618 kubelet[1831]: E0213 15:25:57.602537 1831 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 15:25:57.634953 kubelet[1831]: I0213 15:25:57.634856 1831 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/866bf28f-f169-4154-9464-d36a576f6d6a-bpf-maps\") pod \"cilium-x7k8c\" (UID: \"866bf28f-f169-4154-9464-d36a576f6d6a\") " pod="kube-system/cilium-x7k8c" Feb 13 15:25:57.634953 kubelet[1831]: I0213 15:25:57.634915 1831 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/866bf28f-f169-4154-9464-d36a576f6d6a-etc-cni-netd\") pod \"cilium-x7k8c\" (UID: \"866bf28f-f169-4154-9464-d36a576f6d6a\") " pod="kube-system/cilium-x7k8c" Feb 13 15:25:57.634953 kubelet[1831]: I0213 15:25:57.634939 1831 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/866bf28f-f169-4154-9464-d36a576f6d6a-clustermesh-secrets\") pod \"cilium-x7k8c\" (UID: \"866bf28f-f169-4154-9464-d36a576f6d6a\") " pod="kube-system/cilium-x7k8c" Feb 13 15:25:57.634953 kubelet[1831]: I0213 15:25:57.634962 1831 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/866bf28f-f169-4154-9464-d36a576f6d6a-cilium-ipsec-secrets\") pod \"cilium-x7k8c\" (UID: \"866bf28f-f169-4154-9464-d36a576f6d6a\") " pod="kube-system/cilium-x7k8c" Feb 13 15:25:57.635178 kubelet[1831]: I0213 15:25:57.634985 1831 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/c74297cd-eebb-4ef9-98c2-e37708e32c91-cilium-config-path\") pod \"cilium-operator-599987898-mnkp4\" (UID: \"c74297cd-eebb-4ef9-98c2-e37708e32c91\") " pod="kube-system/cilium-operator-599987898-mnkp4" Feb 13 15:25:57.636090 kubelet[1831]: I0213 15:25:57.636022 1831 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/866bf28f-f169-4154-9464-d36a576f6d6a-cilium-run\") pod \"cilium-x7k8c\" (UID: \"866bf28f-f169-4154-9464-d36a576f6d6a\") " pod="kube-system/cilium-x7k8c" Feb 13 15:25:57.636290 kubelet[1831]: I0213 15:25:57.636265 1831 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/866bf28f-f169-4154-9464-d36a576f6d6a-cilium-cgroup\") pod \"cilium-x7k8c\" (UID: \"866bf28f-f169-4154-9464-d36a576f6d6a\") " pod="kube-system/cilium-x7k8c" Feb 13 15:25:57.636335 kubelet[1831]: I0213 15:25:57.636301 1831 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/866bf28f-f169-4154-9464-d36a576f6d6a-cni-path\") pod \"cilium-x7k8c\" (UID: \"866bf28f-f169-4154-9464-d36a576f6d6a\") " pod="kube-system/cilium-x7k8c" Feb 13 15:25:57.636335 kubelet[1831]: I0213 15:25:57.636322 1831 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/866bf28f-f169-4154-9464-d36a576f6d6a-cilium-config-path\") pod \"cilium-x7k8c\" (UID: \"866bf28f-f169-4154-9464-d36a576f6d6a\") 
" pod="kube-system/cilium-x7k8c" Feb 13 15:25:57.636377 kubelet[1831]: I0213 15:25:57.636340 1831 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/866bf28f-f169-4154-9464-d36a576f6d6a-host-proc-sys-net\") pod \"cilium-x7k8c\" (UID: \"866bf28f-f169-4154-9464-d36a576f6d6a\") " pod="kube-system/cilium-x7k8c" Feb 13 15:25:57.636377 kubelet[1831]: I0213 15:25:57.636364 1831 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c7qbd\" (UniqueName: \"kubernetes.io/projected/c74297cd-eebb-4ef9-98c2-e37708e32c91-kube-api-access-c7qbd\") pod \"cilium-operator-599987898-mnkp4\" (UID: \"c74297cd-eebb-4ef9-98c2-e37708e32c91\") " pod="kube-system/cilium-operator-599987898-mnkp4" Feb 13 15:25:57.636430 kubelet[1831]: I0213 15:25:57.636394 1831 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/866bf28f-f169-4154-9464-d36a576f6d6a-host-proc-sys-kernel\") pod \"cilium-x7k8c\" (UID: \"866bf28f-f169-4154-9464-d36a576f6d6a\") " pod="kube-system/cilium-x7k8c" Feb 13 15:25:57.636430 kubelet[1831]: I0213 15:25:57.636413 1831 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/866bf28f-f169-4154-9464-d36a576f6d6a-hubble-tls\") pod \"cilium-x7k8c\" (UID: \"866bf28f-f169-4154-9464-d36a576f6d6a\") " pod="kube-system/cilium-x7k8c" Feb 13 15:25:57.636473 kubelet[1831]: I0213 15:25:57.636436 1831 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/866bf28f-f169-4154-9464-d36a576f6d6a-hostproc\") pod \"cilium-x7k8c\" (UID: \"866bf28f-f169-4154-9464-d36a576f6d6a\") " pod="kube-system/cilium-x7k8c" Feb 13 15:25:57.636473 kubelet[1831]: I0213 15:25:57.636454 1831 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/866bf28f-f169-4154-9464-d36a576f6d6a-lib-modules\") pod \"cilium-x7k8c\" (UID: \"866bf28f-f169-4154-9464-d36a576f6d6a\") " pod="kube-system/cilium-x7k8c" Feb 13 15:25:57.636525 kubelet[1831]: I0213 15:25:57.636472 1831 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/866bf28f-f169-4154-9464-d36a576f6d6a-xtables-lock\") pod \"cilium-x7k8c\" (UID: \"866bf28f-f169-4154-9464-d36a576f6d6a\") " pod="kube-system/cilium-x7k8c" Feb 13 15:25:57.636525 kubelet[1831]: I0213 15:25:57.636493 1831 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-729qq\" (UniqueName: \"kubernetes.io/projected/866bf28f-f169-4154-9464-d36a576f6d6a-kube-api-access-729qq\") pod \"cilium-x7k8c\" (UID: \"866bf28f-f169-4154-9464-d36a576f6d6a\") " pod="kube-system/cilium-x7k8c" Feb 13 15:25:57.834500 kubelet[1831]: I0213 15:25:57.834392 1831 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="46e717eb-f65e-4caa-9018-ab5aeda7cf31" path="/var/lib/kubelet/pods/46e717eb-f65e-4caa-9018-ab5aeda7cf31/volumes" Feb 13 15:25:57.847587 kubelet[1831]: E0213 15:25:57.847538 1831 kubelet.go:2900] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: 
cni plugin not initialized" Feb 13 15:25:58.602748 kubelet[1831]: E0213 15:25:58.602678 1831 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 15:25:58.738537 kubelet[1831]: E0213 15:25:58.738478 1831 configmap.go:199] Couldn't get configMap kube-system/cilium-config: failed to sync configmap cache: timed out waiting for the condition Feb 13 15:25:58.738711 kubelet[1831]: E0213 15:25:58.738626 1831 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/866bf28f-f169-4154-9464-d36a576f6d6a-cilium-config-path podName:866bf28f-f169-4154-9464-d36a576f6d6a nodeName:}" failed. No retries permitted until 2025-02-13 15:25:59.238585626 +0000 UTC m=+62.040081702 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "cilium-config-path" (UniqueName: "kubernetes.io/configmap/866bf28f-f169-4154-9464-d36a576f6d6a-cilium-config-path") pod "cilium-x7k8c" (UID: "866bf28f-f169-4154-9464-d36a576f6d6a") : failed to sync configmap cache: timed out waiting for the condition Feb 13 15:25:58.738711 kubelet[1831]: E0213 15:25:58.738478 1831 configmap.go:199] Couldn't get configMap kube-system/cilium-config: failed to sync configmap cache: timed out waiting for the condition Feb 13 15:25:58.738711 kubelet[1831]: E0213 15:25:58.738699 1831 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/c74297cd-eebb-4ef9-98c2-e37708e32c91-cilium-config-path podName:c74297cd-eebb-4ef9-98c2-e37708e32c91 nodeName:}" failed. No retries permitted until 2025-02-13 15:25:59.238687047 +0000 UTC m=+62.040183123 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "cilium-config-path" (UniqueName: "kubernetes.io/configmap/c74297cd-eebb-4ef9-98c2-e37708e32c91-cilium-config-path") pod "cilium-operator-599987898-mnkp4" (UID: "c74297cd-eebb-4ef9-98c2-e37708e32c91") : failed to sync configmap cache: timed out waiting for the condition Feb 13 15:25:58.891623 kubelet[1831]: I0213 15:25:58.891465 1831 setters.go:580] "Node became not ready" node="10.0.0.50" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-02-13T15:25:58Z","lastTransitionTime":"2025-02-13T15:25:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"} Feb 13 15:25:59.307417 kubelet[1831]: E0213 15:25:59.307368 1831 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 15:25:59.308319 containerd[1497]: time="2025-02-13T15:25:59.307964584Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-599987898-mnkp4,Uid:c74297cd-eebb-4ef9-98c2-e37708e32c91,Namespace:kube-system,Attempt:0,}" Feb 13 15:25:59.328726 containerd[1497]: time="2025-02-13T15:25:59.328597517Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 13 15:25:59.328726 containerd[1497]: time="2025-02-13T15:25:59.328676365Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 13 15:25:59.328726 containerd[1497]: time="2025-02-13T15:25:59.328687886Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 15:25:59.328898 containerd[1497]: time="2025-02-13T15:25:59.328763849Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 15:25:59.333501 kubelet[1831]: E0213 15:25:59.333029 1831 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 15:25:59.334910 containerd[1497]: time="2025-02-13T15:25:59.334874647Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-x7k8c,Uid:866bf28f-f169-4154-9464-d36a576f6d6a,Namespace:kube-system,Attempt:0,}" Feb 13 15:25:59.353797 systemd[1]: Started cri-containerd-35724d563294e5b300c1fb1673041138f1c02c754815c8dbda909c9e0a0298e0.scope - libcontainer container 35724d563294e5b300c1fb1673041138f1c02c754815c8dbda909c9e0a0298e0. Feb 13 15:25:59.362808 containerd[1497]: time="2025-02-13T15:25:59.362455970Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 13 15:25:59.362808 containerd[1497]: time="2025-02-13T15:25:59.362535530Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 13 15:25:59.362808 containerd[1497]: time="2025-02-13T15:25:59.362570766Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 15:25:59.362808 containerd[1497]: time="2025-02-13T15:25:59.362687144Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 15:25:59.385965 systemd[1]: Started cri-containerd-f37d0650023b89e1bdb410d4dbf50cf6937699f354a477b89cd8999aa1939553.scope - libcontainer container f37d0650023b89e1bdb410d4dbf50cf6937699f354a477b89cd8999aa1939553. 
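The nestedpendingoperations entries at 15:25:58.738 above defer the cilium-config-path mounts with "No retries permitted until 2025-02-13 15:25:59.238585626 ... (durationBeforeRetry 500ms)"; the cutoff is simply the failure time plus the logged 500ms backoff, and both sandboxes were indeed created at roughly 15:25:59.3, once that window had opened (presumably after the configmap cache synced). A small hedged check of the arithmetic, using only the values quoted in the log (truncated to microseconds):

    from datetime import datetime, timedelta, timezone

    retry_at = datetime(2025, 2, 13, 15, 25, 59, 238585, tzinfo=timezone.utc)  # "No retries permitted until ..."
    backoff  = timedelta(milliseconds=500)                                     # durationBeforeRetry 500ms

    print((retry_at - backoff).time())  # 15:25:58.738585 -- when the MountVolume.SetUp failure was recorded
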
Feb 13 15:25:59.398604 containerd[1497]: time="2025-02-13T15:25:59.398519225Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-599987898-mnkp4,Uid:c74297cd-eebb-4ef9-98c2-e37708e32c91,Namespace:kube-system,Attempt:0,} returns sandbox id \"35724d563294e5b300c1fb1673041138f1c02c754815c8dbda909c9e0a0298e0\"" Feb 13 15:25:59.399272 kubelet[1831]: E0213 15:25:59.399196 1831 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 15:25:59.399941 containerd[1497]: time="2025-02-13T15:25:59.399919945Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\"" Feb 13 15:25:59.410362 containerd[1497]: time="2025-02-13T15:25:59.410310231Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-x7k8c,Uid:866bf28f-f169-4154-9464-d36a576f6d6a,Namespace:kube-system,Attempt:0,} returns sandbox id \"f37d0650023b89e1bdb410d4dbf50cf6937699f354a477b89cd8999aa1939553\"" Feb 13 15:25:59.411423 kubelet[1831]: E0213 15:25:59.410987 1831 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 15:25:59.412918 containerd[1497]: time="2025-02-13T15:25:59.412882613Z" level=info msg="CreateContainer within sandbox \"f37d0650023b89e1bdb410d4dbf50cf6937699f354a477b89cd8999aa1939553\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Feb 13 15:25:59.426669 containerd[1497]: time="2025-02-13T15:25:59.426604415Z" level=info msg="CreateContainer within sandbox \"f37d0650023b89e1bdb410d4dbf50cf6937699f354a477b89cd8999aa1939553\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"2703c3565fcb622485d79f3bdd27e23bd5080397f98d119220df584e65af9261\"" Feb 13 15:25:59.427189 containerd[1497]: time="2025-02-13T15:25:59.427156272Z" level=info msg="StartContainer for \"2703c3565fcb622485d79f3bdd27e23bd5080397f98d119220df584e65af9261\"" Feb 13 15:25:59.452834 systemd[1]: Started cri-containerd-2703c3565fcb622485d79f3bdd27e23bd5080397f98d119220df584e65af9261.scope - libcontainer container 2703c3565fcb622485d79f3bdd27e23bd5080397f98d119220df584e65af9261. Feb 13 15:25:59.482777 containerd[1497]: time="2025-02-13T15:25:59.482727107Z" level=info msg="StartContainer for \"2703c3565fcb622485d79f3bdd27e23bd5080397f98d119220df584e65af9261\" returns successfully" Feb 13 15:25:59.492202 systemd[1]: cri-containerd-2703c3565fcb622485d79f3bdd27e23bd5080397f98d119220df584e65af9261.scope: Deactivated successfully. 
Feb 13 15:25:59.526982 containerd[1497]: time="2025-02-13T15:25:59.526907037Z" level=info msg="shim disconnected" id=2703c3565fcb622485d79f3bdd27e23bd5080397f98d119220df584e65af9261 namespace=k8s.io Feb 13 15:25:59.527199 containerd[1497]: time="2025-02-13T15:25:59.526986115Z" level=warning msg="cleaning up after shim disconnected" id=2703c3565fcb622485d79f3bdd27e23bd5080397f98d119220df584e65af9261 namespace=k8s.io Feb 13 15:25:59.527199 containerd[1497]: time="2025-02-13T15:25:59.527000482Z" level=info msg="cleaning up dead shim" namespace=k8s.io Feb 13 15:25:59.603326 kubelet[1831]: E0213 15:25:59.603117 1831 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 15:26:00.082348 kubelet[1831]: E0213 15:26:00.082313 1831 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 15:26:00.084192 containerd[1497]: time="2025-02-13T15:26:00.084146626Z" level=info msg="CreateContainer within sandbox \"f37d0650023b89e1bdb410d4dbf50cf6937699f354a477b89cd8999aa1939553\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Feb 13 15:26:00.096955 containerd[1497]: time="2025-02-13T15:26:00.096905207Z" level=info msg="CreateContainer within sandbox \"f37d0650023b89e1bdb410d4dbf50cf6937699f354a477b89cd8999aa1939553\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"dfaf03e9ae6f48bd7cde0de254d9c3e807fa764882eec0996c55075cca91d47b\"" Feb 13 15:26:00.097418 containerd[1497]: time="2025-02-13T15:26:00.097369228Z" level=info msg="StartContainer for \"dfaf03e9ae6f48bd7cde0de254d9c3e807fa764882eec0996c55075cca91d47b\"" Feb 13 15:26:00.132818 systemd[1]: Started cri-containerd-dfaf03e9ae6f48bd7cde0de254d9c3e807fa764882eec0996c55075cca91d47b.scope - libcontainer container dfaf03e9ae6f48bd7cde0de254d9c3e807fa764882eec0996c55075cca91d47b. Feb 13 15:26:00.171447 containerd[1497]: time="2025-02-13T15:26:00.171359205Z" level=info msg="StartContainer for \"dfaf03e9ae6f48bd7cde0de254d9c3e807fa764882eec0996c55075cca91d47b\" returns successfully" Feb 13 15:26:00.182309 systemd[1]: cri-containerd-dfaf03e9ae6f48bd7cde0de254d9c3e807fa764882eec0996c55075cca91d47b.scope: Deactivated successfully. 
Feb 13 15:26:00.383154 containerd[1497]: time="2025-02-13T15:26:00.382976783Z" level=info msg="shim disconnected" id=dfaf03e9ae6f48bd7cde0de254d9c3e807fa764882eec0996c55075cca91d47b namespace=k8s.io Feb 13 15:26:00.383154 containerd[1497]: time="2025-02-13T15:26:00.383052936Z" level=warning msg="cleaning up after shim disconnected" id=dfaf03e9ae6f48bd7cde0de254d9c3e807fa764882eec0996c55075cca91d47b namespace=k8s.io Feb 13 15:26:00.383154 containerd[1497]: time="2025-02-13T15:26:00.383061812Z" level=info msg="cleaning up dead shim" namespace=k8s.io Feb 13 15:26:00.604388 kubelet[1831]: E0213 15:26:00.604323 1831 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 15:26:01.085846 kubelet[1831]: E0213 15:26:01.085815 1831 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 15:26:01.087652 containerd[1497]: time="2025-02-13T15:26:01.087590520Z" level=info msg="CreateContainer within sandbox \"f37d0650023b89e1bdb410d4dbf50cf6937699f354a477b89cd8999aa1939553\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Feb 13 15:26:01.104147 containerd[1497]: time="2025-02-13T15:26:01.104072641Z" level=info msg="CreateContainer within sandbox \"f37d0650023b89e1bdb410d4dbf50cf6937699f354a477b89cd8999aa1939553\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"8b7e7b8fd97bc7a3ff948356c286a4bdfb4edb77bb3009780b3734470d9dcab9\"" Feb 13 15:26:01.104773 containerd[1497]: time="2025-02-13T15:26:01.104727541Z" level=info msg="StartContainer for \"8b7e7b8fd97bc7a3ff948356c286a4bdfb4edb77bb3009780b3734470d9dcab9\"" Feb 13 15:26:01.137786 systemd[1]: Started cri-containerd-8b7e7b8fd97bc7a3ff948356c286a4bdfb4edb77bb3009780b3734470d9dcab9.scope - libcontainer container 8b7e7b8fd97bc7a3ff948356c286a4bdfb4edb77bb3009780b3734470d9dcab9. Feb 13 15:26:01.172312 containerd[1497]: time="2025-02-13T15:26:01.172266500Z" level=info msg="StartContainer for \"8b7e7b8fd97bc7a3ff948356c286a4bdfb4edb77bb3009780b3734470d9dcab9\" returns successfully" Feb 13 15:26:01.173890 systemd[1]: cri-containerd-8b7e7b8fd97bc7a3ff948356c286a4bdfb4edb77bb3009780b3734470d9dcab9.scope: Deactivated successfully. Feb 13 15:26:01.199410 containerd[1497]: time="2025-02-13T15:26:01.199333779Z" level=info msg="shim disconnected" id=8b7e7b8fd97bc7a3ff948356c286a4bdfb4edb77bb3009780b3734470d9dcab9 namespace=k8s.io Feb 13 15:26:01.199410 containerd[1497]: time="2025-02-13T15:26:01.199402538Z" level=warning msg="cleaning up after shim disconnected" id=8b7e7b8fd97bc7a3ff948356c286a4bdfb4edb77bb3009780b3734470d9dcab9 namespace=k8s.io Feb 13 15:26:01.199410 containerd[1497]: time="2025-02-13T15:26:01.199411766Z" level=info msg="cleaning up dead shim" namespace=k8s.io Feb 13 15:26:01.317475 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-8b7e7b8fd97bc7a3ff948356c286a4bdfb4edb77bb3009780b3734470d9dcab9-rootfs.mount: Deactivated successfully. Feb 13 15:26:01.582506 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1190537333.mount: Deactivated successfully. 
Feb 13 15:26:01.604506 kubelet[1831]: E0213 15:26:01.604459 1831 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 15:26:01.915291 containerd[1497]: time="2025-02-13T15:26:01.915179824Z" level=info msg="ImageCreate event name:\"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 15:26:01.916065 containerd[1497]: time="2025-02-13T15:26:01.916031193Z" level=info msg="stop pulling image quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e: active requests=0, bytes read=18904197" Feb 13 15:26:01.917191 containerd[1497]: time="2025-02-13T15:26:01.917169550Z" level=info msg="ImageCreate event name:\"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 15:26:01.918528 containerd[1497]: time="2025-02-13T15:26:01.918485461Z" level=info msg="Pulled image \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" with image id \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\", repo tag \"\", repo digest \"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\", size \"18897442\" in 2.518539155s" Feb 13 15:26:01.918565 containerd[1497]: time="2025-02-13T15:26:01.918528402Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" returns image reference \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\"" Feb 13 15:26:01.920508 containerd[1497]: time="2025-02-13T15:26:01.920486108Z" level=info msg="CreateContainer within sandbox \"35724d563294e5b300c1fb1673041138f1c02c754815c8dbda909c9e0a0298e0\" for container &ContainerMetadata{Name:cilium-operator,Attempt:0,}" Feb 13 15:26:01.932115 containerd[1497]: time="2025-02-13T15:26:01.932088736Z" level=info msg="CreateContainer within sandbox \"35724d563294e5b300c1fb1673041138f1c02c754815c8dbda909c9e0a0298e0\" for &ContainerMetadata{Name:cilium-operator,Attempt:0,} returns container id \"dc8ca397e698bd3483dda2ec0dc8f6782f0d952ee456c28810ce08457517a482\"" Feb 13 15:26:01.933180 containerd[1497]: time="2025-02-13T15:26:01.932485932Z" level=info msg="StartContainer for \"dc8ca397e698bd3483dda2ec0dc8f6782f0d952ee456c28810ce08457517a482\"" Feb 13 15:26:01.969846 systemd[1]: Started cri-containerd-dc8ca397e698bd3483dda2ec0dc8f6782f0d952ee456c28810ce08457517a482.scope - libcontainer container dc8ca397e698bd3483dda2ec0dc8f6782f0d952ee456c28810ce08457517a482. 
Feb 13 15:26:02.098663 containerd[1497]: time="2025-02-13T15:26:02.097915935Z" level=info msg="StartContainer for \"dc8ca397e698bd3483dda2ec0dc8f6782f0d952ee456c28810ce08457517a482\" returns successfully" Feb 13 15:26:02.101697 kubelet[1831]: E0213 15:26:02.101665 1831 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 15:26:02.105899 containerd[1497]: time="2025-02-13T15:26:02.105853269Z" level=info msg="CreateContainer within sandbox \"f37d0650023b89e1bdb410d4dbf50cf6937699f354a477b89cd8999aa1939553\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Feb 13 15:26:02.108014 kubelet[1831]: E0213 15:26:02.107971 1831 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 15:26:02.123207 containerd[1497]: time="2025-02-13T15:26:02.123144717Z" level=info msg="CreateContainer within sandbox \"f37d0650023b89e1bdb410d4dbf50cf6937699f354a477b89cd8999aa1939553\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"5410868e828b48a1f1a3e99393b1e547ae4165a8dfdc27951e5874334b882990\"" Feb 13 15:26:02.123746 containerd[1497]: time="2025-02-13T15:26:02.123719546Z" level=info msg="StartContainer for \"5410868e828b48a1f1a3e99393b1e547ae4165a8dfdc27951e5874334b882990\"" Feb 13 15:26:02.129265 kubelet[1831]: I0213 15:26:02.127525 1831 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-operator-599987898-mnkp4" podStartSLOduration=2.60796018 podStartE2EDuration="5.127495216s" podCreationTimestamp="2025-02-13 15:25:57 +0000 UTC" firstStartedPulling="2025-02-13 15:25:59.399620554 +0000 UTC m=+62.201116630" lastFinishedPulling="2025-02-13 15:26:01.91915559 +0000 UTC m=+64.720651666" observedRunningTime="2025-02-13 15:26:02.127040282 +0000 UTC m=+64.928536358" watchObservedRunningTime="2025-02-13 15:26:02.127495216 +0000 UTC m=+64.928991312" Feb 13 15:26:02.161823 systemd[1]: Started cri-containerd-5410868e828b48a1f1a3e99393b1e547ae4165a8dfdc27951e5874334b882990.scope - libcontainer container 5410868e828b48a1f1a3e99393b1e547ae4165a8dfdc27951e5874334b882990. Feb 13 15:26:02.220762 systemd[1]: cri-containerd-5410868e828b48a1f1a3e99393b1e547ae4165a8dfdc27951e5874334b882990.scope: Deactivated successfully. 
Feb 13 15:26:02.222103 containerd[1497]: time="2025-02-13T15:26:02.222053982Z" level=info msg="StartContainer for \"5410868e828b48a1f1a3e99393b1e547ae4165a8dfdc27951e5874334b882990\" returns successfully" Feb 13 15:26:02.264802 containerd[1497]: time="2025-02-13T15:26:02.264734975Z" level=info msg="shim disconnected" id=5410868e828b48a1f1a3e99393b1e547ae4165a8dfdc27951e5874334b882990 namespace=k8s.io Feb 13 15:26:02.264802 containerd[1497]: time="2025-02-13T15:26:02.264801981Z" level=warning msg="cleaning up after shim disconnected" id=5410868e828b48a1f1a3e99393b1e547ae4165a8dfdc27951e5874334b882990 namespace=k8s.io Feb 13 15:26:02.265042 containerd[1497]: time="2025-02-13T15:26:02.264812671Z" level=info msg="cleaning up dead shim" namespace=k8s.io Feb 13 15:26:02.604773 kubelet[1831]: E0213 15:26:02.604605 1831 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 15:26:02.849279 kubelet[1831]: E0213 15:26:02.849223 1831 kubelet.go:2900] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Feb 13 15:26:03.110336 kubelet[1831]: E0213 15:26:03.110300 1831 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 15:26:03.110472 kubelet[1831]: E0213 15:26:03.110374 1831 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 15:26:03.112109 containerd[1497]: time="2025-02-13T15:26:03.112080060Z" level=info msg="CreateContainer within sandbox \"f37d0650023b89e1bdb410d4dbf50cf6937699f354a477b89cd8999aa1939553\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Feb 13 15:26:03.407781 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2049670546.mount: Deactivated successfully. Feb 13 15:26:03.411284 containerd[1497]: time="2025-02-13T15:26:03.411241747Z" level=info msg="CreateContainer within sandbox \"f37d0650023b89e1bdb410d4dbf50cf6937699f354a477b89cd8999aa1939553\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"d0187b0c0080a5356873b630043d2b7b3e4f827f03af7c70ed8b7d21668ddcf2\"" Feb 13 15:26:03.411786 containerd[1497]: time="2025-02-13T15:26:03.411760321Z" level=info msg="StartContainer for \"d0187b0c0080a5356873b630043d2b7b3e4f827f03af7c70ed8b7d21668ddcf2\"" Feb 13 15:26:03.441760 systemd[1]: Started cri-containerd-d0187b0c0080a5356873b630043d2b7b3e4f827f03af7c70ed8b7d21668ddcf2.scope - libcontainer container d0187b0c0080a5356873b630043d2b7b3e4f827f03af7c70ed8b7d21668ddcf2. 
Feb 13 15:26:03.470667 containerd[1497]: time="2025-02-13T15:26:03.470621709Z" level=info msg="StartContainer for \"d0187b0c0080a5356873b630043d2b7b3e4f827f03af7c70ed8b7d21668ddcf2\" returns successfully" Feb 13 15:26:03.605409 kubelet[1831]: E0213 15:26:03.605354 1831 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 15:26:03.881666 kernel: alg: No test for seqiv(rfc4106(gcm(aes))) (seqiv(rfc4106-gcm-aesni)) Feb 13 15:26:04.116805 kubelet[1831]: E0213 15:26:04.116766 1831 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 15:26:04.130311 kubelet[1831]: I0213 15:26:04.130234 1831 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-x7k8c" podStartSLOduration=7.130211701 podStartE2EDuration="7.130211701s" podCreationTimestamp="2025-02-13 15:25:57 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-02-13 15:26:04.130083691 +0000 UTC m=+66.931579767" watchObservedRunningTime="2025-02-13 15:26:04.130211701 +0000 UTC m=+66.931707777" Feb 13 15:26:04.606318 kubelet[1831]: E0213 15:26:04.606241 1831 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 15:26:05.334387 kubelet[1831]: E0213 15:26:05.334335 1831 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 15:26:05.607692 kubelet[1831]: E0213 15:26:05.607358 1831 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 15:26:06.607859 kubelet[1831]: E0213 15:26:06.607801 1831 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 15:26:07.017546 systemd-networkd[1419]: lxc_health: Link UP Feb 13 15:26:07.026244 systemd-networkd[1419]: lxc_health: Gained carrier Feb 13 15:26:07.335382 kubelet[1831]: E0213 15:26:07.335206 1831 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 15:26:07.609114 kubelet[1831]: E0213 15:26:07.608968 1831 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 15:26:08.119887 systemd-networkd[1419]: lxc_health: Gained IPv6LL Feb 13 15:26:08.125872 kubelet[1831]: E0213 15:26:08.125810 1831 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 15:26:08.609763 kubelet[1831]: E0213 15:26:08.609693 1831 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 15:26:09.127150 kubelet[1831]: E0213 15:26:09.127114 1831 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 15:26:09.610456 kubelet[1831]: E0213 15:26:09.610379 1831 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 15:26:10.612829 kubelet[1831]: E0213 15:26:10.612740 1831 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 15:26:11.613656 kubelet[1831]: E0213 15:26:11.613561 1831 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 15:26:12.614669 kubelet[1831]: E0213 15:26:12.614568 1831 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 15:26:13.615390 kubelet[1831]: E0213 15:26:13.615313 1831 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 15:26:14.616130 kubelet[1831]: E0213 15:26:14.616079 1831 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 15:26:14.831863 kubelet[1831]: E0213 15:26:14.831808 1831 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"