Jan 30 13:15:32.880492 kernel: Linux version 6.6.74-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 14.2.1_p20241116 p3) 14.2.1 20241116, GNU ld (Gentoo 2.42 p6) 2.42.0) #1 SMP PREEMPT_DYNAMIC Wed Jan 29 09:29:54 -00 2025
Jan 30 13:15:32.880513 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=fe60919b0c6f6abb7495678f87f7024e97a038fc343fa31a123a43ef5f489466
Jan 30 13:15:32.880525 kernel: BIOS-provided physical RAM map:
Jan 30 13:15:32.880532 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009ffff] usable
Jan 30 13:15:32.880558 kernel: BIOS-e820: [mem 0x0000000000100000-0x00000000007fffff] usable
Jan 30 13:15:32.880565 kernel: BIOS-e820: [mem 0x0000000000800000-0x0000000000807fff] ACPI NVS
Jan 30 13:15:32.880572 kernel: BIOS-e820: [mem 0x0000000000808000-0x000000000080afff] usable
Jan 30 13:15:32.880579 kernel: BIOS-e820: [mem 0x000000000080b000-0x000000000080bfff] ACPI NVS
Jan 30 13:15:32.880586 kernel: BIOS-e820: [mem 0x000000000080c000-0x0000000000810fff] usable
Jan 30 13:15:32.880592 kernel: BIOS-e820: [mem 0x0000000000811000-0x00000000008fffff] ACPI NVS
Jan 30 13:15:32.880599 kernel: BIOS-e820: [mem 0x0000000000900000-0x000000009bd3efff] usable
Jan 30 13:15:32.880608 kernel: BIOS-e820: [mem 0x000000009bd3f000-0x000000009bdfffff] reserved
Jan 30 13:15:32.880614 kernel: BIOS-e820: [mem 0x000000009be00000-0x000000009c8ecfff] usable
Jan 30 13:15:32.880621 kernel: BIOS-e820: [mem 0x000000009c8ed000-0x000000009cb6cfff] reserved
Jan 30 13:15:32.880629 kernel: BIOS-e820: [mem 0x000000009cb6d000-0x000000009cb7efff] ACPI data
Jan 30 13:15:32.880636 kernel: BIOS-e820: [mem 0x000000009cb7f000-0x000000009cbfefff] ACPI NVS
Jan 30 13:15:32.880646 kernel: BIOS-e820: [mem 0x000000009cbff000-0x000000009ce91fff] usable
Jan 30 13:15:32.880653 kernel: BIOS-e820: [mem 0x000000009ce92000-0x000000009ce95fff] reserved
Jan 30 13:15:32.880660 kernel: BIOS-e820: [mem 0x000000009ce96000-0x000000009ce97fff] ACPI NVS
Jan 30 13:15:32.880667 kernel: BIOS-e820: [mem 0x000000009ce98000-0x000000009cedbfff] usable
Jan 30 13:15:32.880673 kernel: BIOS-e820: [mem 0x000000009cedc000-0x000000009cf5ffff] reserved
Jan 30 13:15:32.880680 kernel: BIOS-e820: [mem 0x000000009cf60000-0x000000009cffffff] ACPI NVS
Jan 30 13:15:32.880687 kernel: BIOS-e820: [mem 0x00000000e0000000-0x00000000efffffff] reserved
Jan 30 13:15:32.880694 kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved
Jan 30 13:15:32.880701 kernel: BIOS-e820: [mem 0x00000000ffc00000-0x00000000ffffffff] reserved
Jan 30 13:15:32.880708 kernel: BIOS-e820: [mem 0x000000fd00000000-0x000000ffffffffff] reserved
Jan 30 13:15:32.880715 kernel: NX (Execute Disable) protection: active
Jan 30 13:15:32.880724 kernel: APIC: Static calls initialized
Jan 30 13:15:32.880730 kernel: e820: update [mem 0x9b351018-0x9b35ac57] usable ==> usable
Jan 30 13:15:32.880738 kernel: e820: update [mem 0x9b351018-0x9b35ac57] usable ==> usable
Jan 30 13:15:32.880744 kernel: e820: update [mem 0x9b314018-0x9b350e57] usable ==> usable
Jan 30 13:15:32.880751 kernel: e820: update [mem 0x9b314018-0x9b350e57] usable ==> usable
Jan 30 13:15:32.880758 kernel: extended physical RAM map:
Jan 30 13:15:32.880765 kernel: reserve setup_data: [mem 0x0000000000000000-0x000000000009ffff] usable
Jan 30 13:15:32.880772 kernel: reserve setup_data: [mem 0x0000000000100000-0x00000000007fffff] usable
Jan 30 13:15:32.880779 kernel: reserve setup_data: [mem 0x0000000000800000-0x0000000000807fff] ACPI NVS
Jan 30 13:15:32.880786 kernel: reserve setup_data: [mem 0x0000000000808000-0x000000000080afff] usable
Jan 30 13:15:32.880793 kernel: reserve setup_data: [mem 0x000000000080b000-0x000000000080bfff] ACPI NVS
Jan 30 13:15:32.880800 kernel: reserve setup_data: [mem 0x000000000080c000-0x0000000000810fff] usable
Jan 30 13:15:32.880809 kernel: reserve setup_data: [mem 0x0000000000811000-0x00000000008fffff] ACPI NVS
Jan 30 13:15:32.880820 kernel: reserve setup_data: [mem 0x0000000000900000-0x000000009b314017] usable
Jan 30 13:15:32.880827 kernel: reserve setup_data: [mem 0x000000009b314018-0x000000009b350e57] usable
Jan 30 13:15:32.880834 kernel: reserve setup_data: [mem 0x000000009b350e58-0x000000009b351017] usable
Jan 30 13:15:32.880841 kernel: reserve setup_data: [mem 0x000000009b351018-0x000000009b35ac57] usable
Jan 30 13:15:32.880919 kernel: reserve setup_data: [mem 0x000000009b35ac58-0x000000009bd3efff] usable
Jan 30 13:15:32.880931 kernel: reserve setup_data: [mem 0x000000009bd3f000-0x000000009bdfffff] reserved
Jan 30 13:15:32.880938 kernel: reserve setup_data: [mem 0x000000009be00000-0x000000009c8ecfff] usable
Jan 30 13:15:32.880945 kernel: reserve setup_data: [mem 0x000000009c8ed000-0x000000009cb6cfff] reserved
Jan 30 13:15:32.880953 kernel: reserve setup_data: [mem 0x000000009cb6d000-0x000000009cb7efff] ACPI data
Jan 30 13:15:32.880960 kernel: reserve setup_data: [mem 0x000000009cb7f000-0x000000009cbfefff] ACPI NVS
Jan 30 13:15:32.880967 kernel: reserve setup_data: [mem 0x000000009cbff000-0x000000009ce91fff] usable
Jan 30 13:15:32.880975 kernel: reserve setup_data: [mem 0x000000009ce92000-0x000000009ce95fff] reserved
Jan 30 13:15:32.880985 kernel: reserve setup_data: [mem 0x000000009ce96000-0x000000009ce97fff] ACPI NVS
Jan 30 13:15:32.880994 kernel: reserve setup_data: [mem 0x000000009ce98000-0x000000009cedbfff] usable
Jan 30 13:15:32.881007 kernel: reserve setup_data: [mem 0x000000009cedc000-0x000000009cf5ffff] reserved
Jan 30 13:15:32.881017 kernel: reserve setup_data: [mem 0x000000009cf60000-0x000000009cffffff] ACPI NVS
Jan 30 13:15:32.881025 kernel: reserve setup_data: [mem 0x00000000e0000000-0x00000000efffffff] reserved
Jan 30 13:15:32.881032 kernel: reserve setup_data: [mem 0x00000000feffc000-0x00000000feffffff] reserved
Jan 30 13:15:32.881039 kernel: reserve setup_data: [mem 0x00000000ffc00000-0x00000000ffffffff] reserved
Jan 30 13:15:32.881047 kernel: reserve setup_data: [mem 0x000000fd00000000-0x000000ffffffffff] reserved
Jan 30 13:15:32.881054 kernel: efi: EFI v2.7 by EDK II
Jan 30 13:15:32.881061 kernel: efi: SMBIOS=0x9c988000 ACPI=0x9cb7e000 ACPI 2.0=0x9cb7e014 MEMATTR=0x9ba0d198 RNG=0x9cb73018
Jan 30 13:15:32.881070 kernel: random: crng init done
Jan 30 13:15:32.881080 kernel: efi: Remove mem142: MMIO range=[0xffc00000-0xffffffff] (4MB) from e820 map
Jan 30 13:15:32.881090 kernel: e820: remove [mem 0xffc00000-0xffffffff] reserved
Jan 30 13:15:32.881099 kernel: secureboot: Secure boot disabled
Jan 30 13:15:32.881112 kernel: SMBIOS 2.8 present.
Jan 30 13:15:32.881122 kernel: DMI: QEMU Standard PC (Q35 + ICH9, 2009), BIOS unknown 02/02/2022
Jan 30 13:15:32.881129 kernel: Hypervisor detected: KVM
Jan 30 13:15:32.881136 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00
Jan 30 13:15:32.881144 kernel: kvm-clock: using sched offset of 2590284593 cycles
Jan 30 13:15:32.881152 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
Jan 30 13:15:32.881161 kernel: tsc: Detected 2794.748 MHz processor
Jan 30 13:15:32.881172 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
Jan 30 13:15:32.881182 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
Jan 30 13:15:32.881198 kernel: last_pfn = 0x9cedc max_arch_pfn = 0x400000000
Jan 30 13:15:32.881211 kernel: MTRR map: 4 entries (2 fixed + 2 variable; max 18), built from 8 variable MTRRs
Jan 30 13:15:32.881221 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
Jan 30 13:15:32.881231 kernel: Using GB pages for direct mapping
Jan 30 13:15:32.881238 kernel: ACPI: Early table checksum verification disabled
Jan 30 13:15:32.881256 kernel: ACPI: RSDP 0x000000009CB7E014 000024 (v02 BOCHS )
Jan 30 13:15:32.881267 kernel: ACPI: XSDT 0x000000009CB7D0E8 000054 (v01 BOCHS BXPC 00000001 01000013)
Jan 30 13:15:32.881277 kernel: ACPI: FACP 0x000000009CB79000 0000F4 (v03 BOCHS BXPC 00000001 BXPC 00000001)
Jan 30 13:15:32.881287 kernel: ACPI: DSDT 0x000000009CB7A000 0021A8 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jan 30 13:15:32.881297 kernel: ACPI: FACS 0x000000009CBDD000 000040
Jan 30 13:15:32.881310 kernel: ACPI: APIC 0x000000009CB78000 000090 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jan 30 13:15:32.881320 kernel: ACPI: HPET 0x000000009CB77000 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jan 30 13:15:32.881330 kernel: ACPI: MCFG 0x000000009CB76000 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jan 30 13:15:32.881340 kernel: ACPI: WAET 0x000000009CB75000 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jan 30 13:15:32.881350 kernel: ACPI: BGRT 0x000000009CB74000 000038 (v01 INTEL EDK2 00000002 01000013)
Jan 30 13:15:32.881360 kernel: ACPI: Reserving FACP table memory at [mem 0x9cb79000-0x9cb790f3]
Jan 30 13:15:32.881370 kernel: ACPI: Reserving DSDT table memory at [mem 0x9cb7a000-0x9cb7c1a7]
Jan 30 13:15:32.881380 kernel: ACPI: Reserving FACS table memory at [mem 0x9cbdd000-0x9cbdd03f]
Jan 30 13:15:32.881393 kernel: ACPI: Reserving APIC table memory at [mem 0x9cb78000-0x9cb7808f]
Jan 30 13:15:32.881402 kernel: ACPI: Reserving HPET table memory at [mem 0x9cb77000-0x9cb77037]
Jan 30 13:15:32.881412 kernel: ACPI: Reserving MCFG table memory at [mem 0x9cb76000-0x9cb7603b]
Jan 30 13:15:32.881422 kernel: ACPI: Reserving WAET table memory at [mem 0x9cb75000-0x9cb75027]
Jan 30 13:15:32.881432 kernel: ACPI: Reserving BGRT table memory at [mem 0x9cb74000-0x9cb74037]
Jan 30 13:15:32.881446 kernel: No NUMA configuration found
Jan 30 13:15:32.881468 kernel: Faking a node at [mem 0x0000000000000000-0x000000009cedbfff]
Jan 30 13:15:32.881488 kernel: NODE_DATA(0) allocated [mem 0x9ce3a000-0x9ce3ffff]
Jan 30 13:15:32.881505 kernel: Zone ranges:
Jan 30 13:15:32.881533 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff]
Jan 30 13:15:32.881557 kernel: DMA32 [mem 0x0000000001000000-0x000000009cedbfff]
Jan 30 13:15:32.881578 kernel: Normal empty
Jan 30 13:15:32.881599 kernel: Movable zone start for each node
Jan 30 13:15:32.881609 kernel: Early memory node ranges
Jan 30 13:15:32.881619 kernel: node 0: [mem 0x0000000000001000-0x000000000009ffff]
Jan 30 13:15:32.881629 kernel: node 0: [mem 0x0000000000100000-0x00000000007fffff]
Jan 30 13:15:32.881639 kernel: node 0: [mem 0x0000000000808000-0x000000000080afff]
Jan 30 13:15:32.881649 kernel: node 0: [mem 0x000000000080c000-0x0000000000810fff]
Jan 30 13:15:32.881659 kernel: node 0: [mem 0x0000000000900000-0x000000009bd3efff]
Jan 30 13:15:32.881671 kernel: node 0: [mem 0x000000009be00000-0x000000009c8ecfff]
Jan 30 13:15:32.881681 kernel: node 0: [mem 0x000000009cbff000-0x000000009ce91fff]
Jan 30 13:15:32.881691 kernel: node 0: [mem 0x000000009ce98000-0x000000009cedbfff]
Jan 30 13:15:32.881701 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000009cedbfff]
Jan 30 13:15:32.881712 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Jan 30 13:15:32.881722 kernel: On node 0, zone DMA: 96 pages in unavailable ranges
Jan 30 13:15:32.881740 kernel: On node 0, zone DMA: 8 pages in unavailable ranges
Jan 30 13:15:32.881753 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Jan 30 13:15:32.881763 kernel: On node 0, zone DMA: 239 pages in unavailable ranges
Jan 30 13:15:32.881777 kernel: On node 0, zone DMA32: 193 pages in unavailable ranges
Jan 30 13:15:32.881788 kernel: On node 0, zone DMA32: 786 pages in unavailable ranges
Jan 30 13:15:32.881799 kernel: On node 0, zone DMA32: 6 pages in unavailable ranges
Jan 30 13:15:32.881811 kernel: On node 0, zone DMA32: 12580 pages in unavailable ranges
Jan 30 13:15:32.881822 kernel: ACPI: PM-Timer IO Port: 0x608
Jan 30 13:15:32.881832 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1])
Jan 30 13:15:32.881855 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23
Jan 30 13:15:32.881867 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl)
Jan 30 13:15:32.881880 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level)
Jan 30 13:15:32.881890 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Jan 30 13:15:32.881900 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level)
Jan 30 13:15:32.881911 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level)
Jan 30 13:15:32.881921 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Jan 30 13:15:32.881931 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000
Jan 30 13:15:32.881942 kernel: TSC deadline timer available
Jan 30 13:15:32.881952 kernel: smpboot: Allowing 4 CPUs, 0 hotplug CPUs
Jan 30 13:15:32.881962 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write()
Jan 30 13:15:32.881975 kernel: kvm-guest: KVM setup pv remote TLB flush
Jan 30 13:15:32.881985 kernel: kvm-guest: setup PV sched yield
Jan 30 13:15:32.881996 kernel: [mem 0x9d000000-0xdfffffff] available for PCI devices
Jan 30 13:15:32.882006 kernel: Booting paravirtualized kernel on KVM
Jan 30 13:15:32.882016 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Jan 30 13:15:32.882027 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:4 nr_cpu_ids:4 nr_node_ids:1
Jan 30 13:15:32.882037 kernel: percpu: Embedded 58 pages/cpu s197032 r8192 d32344 u524288
Jan 30 13:15:32.882047 kernel: pcpu-alloc: s197032 r8192 d32344 u524288 alloc=1*2097152
Jan 30 13:15:32.882055 kernel: pcpu-alloc: [0] 0 1 2 3
Jan 30 13:15:32.882065 kernel: kvm-guest: PV spinlocks enabled
Jan 30 13:15:32.882073 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear)
Jan 30 13:15:32.882082 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=fe60919b0c6f6abb7495678f87f7024e97a038fc343fa31a123a43ef5f489466
Jan 30 13:15:32.882090 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Jan 30 13:15:32.882098 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Jan 30 13:15:32.882105 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Jan 30 13:15:32.882113 kernel: Fallback order for Node 0: 0
Jan 30 13:15:32.882121 kernel: Built 1 zonelists, mobility grouping on. Total pages: 629460
Jan 30 13:15:32.882128 kernel: Policy zone: DMA32
Jan 30 13:15:32.882138 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Jan 30 13:15:32.882146 kernel: Memory: 2387720K/2565800K available (14336K kernel code, 2301K rwdata, 22800K rodata, 43320K init, 1752K bss, 177824K reserved, 0K cma-reserved)
Jan 30 13:15:32.882154 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1
Jan 30 13:15:32.882162 kernel: ftrace: allocating 37893 entries in 149 pages
Jan 30 13:15:32.882170 kernel: ftrace: allocated 149 pages with 4 groups
Jan 30 13:15:32.882177 kernel: Dynamic Preempt: voluntary
Jan 30 13:15:32.882185 kernel: rcu: Preemptible hierarchical RCU implementation.
Jan 30 13:15:32.882193 kernel: rcu: RCU event tracing is enabled.
Jan 30 13:15:32.882202 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=4.
Jan 30 13:15:32.882213 kernel: Trampoline variant of Tasks RCU enabled.
Jan 30 13:15:32.882222 kernel: Rude variant of Tasks RCU enabled.
Jan 30 13:15:32.882231 kernel: Tracing variant of Tasks RCU enabled.
Jan 30 13:15:32.882248 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Jan 30 13:15:32.882256 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4
Jan 30 13:15:32.882263 kernel: NR_IRQS: 33024, nr_irqs: 456, preallocated irqs: 16
Jan 30 13:15:32.882271 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Jan 30 13:15:32.882279 kernel: Console: colour dummy device 80x25
Jan 30 13:15:32.882286 kernel: printk: console [ttyS0] enabled
Jan 30 13:15:32.882297 kernel: ACPI: Core revision 20230628
Jan 30 13:15:32.882305 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 19112604467 ns
Jan 30 13:15:32.882312 kernel: APIC: Switch to symmetric I/O mode setup
Jan 30 13:15:32.882320 kernel: x2apic enabled
Jan 30 13:15:32.882328 kernel: APIC: Switched APIC routing to: physical x2apic
Jan 30 13:15:32.882336 kernel: kvm-guest: APIC: send_IPI_mask() replaced with kvm_send_ipi_mask()
Jan 30 13:15:32.882343 kernel: kvm-guest: APIC: send_IPI_mask_allbutself() replaced with kvm_send_ipi_mask_allbutself()
Jan 30 13:15:32.882351 kernel: kvm-guest: setup PV IPIs
Jan 30 13:15:32.882359 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1
Jan 30 13:15:32.882369 kernel: tsc: Marking TSC unstable due to TSCs unsynchronized
Jan 30 13:15:32.882377 kernel: Calibrating delay loop (skipped) preset value.. 5589.49 BogoMIPS (lpj=2794748)
Jan 30 13:15:32.882384 kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated
Jan 30 13:15:32.882392 kernel: Last level iTLB entries: 4KB 512, 2MB 255, 4MB 127
Jan 30 13:15:32.882400 kernel: Last level dTLB entries: 4KB 512, 2MB 255, 4MB 127, 1GB 0
Jan 30 13:15:32.882408 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Jan 30 13:15:32.882415 kernel: Spectre V2 : Mitigation: Retpolines
Jan 30 13:15:32.882423 kernel: Spectre V2 : Spectre v2 / SpectreRSB mitigation: Filling RSB on context switch
Jan 30 13:15:32.882431 kernel: Spectre V2 : Spectre v2 / SpectreRSB : Filling RSB on VMEXIT
Jan 30 13:15:32.882441 kernel: Spectre V2 : Enabling Speculation Barrier for firmware calls
Jan 30 13:15:32.882448 kernel: RETBleed: Mitigation: untrained return thunk
Jan 30 13:15:32.882456 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier
Jan 30 13:15:32.882464 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl
Jan 30 13:15:32.882472 kernel: Speculative Return Stack Overflow: IBPB-extending microcode not applied!
Jan 30 13:15:32.882480 kernel: Speculative Return Stack Overflow: WARNING: See https://kernel.org/doc/html/latest/admin-guide/hw-vuln/srso.html for mitigation options.
Jan 30 13:15:32.882488 kernel: Speculative Return Stack Overflow: Vulnerable: Safe RET, no microcode
Jan 30 13:15:32.882496 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Jan 30 13:15:32.882505 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Jan 30 13:15:32.882513 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Jan 30 13:15:32.882521 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256
Jan 30 13:15:32.882529 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'compacted' format.
Jan 30 13:15:32.882537 kernel: Freeing SMP alternatives memory: 32K
Jan 30 13:15:32.882544 kernel: pid_max: default: 32768 minimum: 301
Jan 30 13:15:32.882552 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity
Jan 30 13:15:32.882560 kernel: landlock: Up and running.
Jan 30 13:15:32.882567 kernel: SELinux: Initializing.
Jan 30 13:15:32.882577 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Jan 30 13:15:32.882585 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Jan 30 13:15:32.882593 kernel: smpboot: CPU0: AMD EPYC 7402P 24-Core Processor (family: 0x17, model: 0x31, stepping: 0x0)
Jan 30 13:15:32.882601 kernel: RCU Tasks: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Jan 30 13:15:32.882608 kernel: RCU Tasks Rude: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Jan 30 13:15:32.882616 kernel: RCU Tasks Trace: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Jan 30 13:15:32.882624 kernel: Performance Events: Fam17h+ core perfctr, AMD PMU driver.
Jan 30 13:15:32.882632 kernel: ... version: 0
Jan 30 13:15:32.882639 kernel: ... bit width: 48
Jan 30 13:15:32.882649 kernel: ... generic registers: 6
Jan 30 13:15:32.882657 kernel: ... value mask: 0000ffffffffffff
Jan 30 13:15:32.882664 kernel: ... max period: 00007fffffffffff
Jan 30 13:15:32.882672 kernel: ... fixed-purpose events: 0
Jan 30 13:15:32.882680 kernel: ... event mask: 000000000000003f
Jan 30 13:15:32.882692 kernel: signal: max sigframe size: 1776
Jan 30 13:15:32.882703 kernel: rcu: Hierarchical SRCU implementation.
Jan 30 13:15:32.882711 kernel: rcu: Max phase no-delay instances is 400.
Jan 30 13:15:32.882718 kernel: smp: Bringing up secondary CPUs ...
Jan 30 13:15:32.882729 kernel: smpboot: x86: Booting SMP configuration:
Jan 30 13:15:32.882736 kernel: .... node #0, CPUs: #1 #2 #3
Jan 30 13:15:32.882744 kernel: smp: Brought up 1 node, 4 CPUs
Jan 30 13:15:32.882751 kernel: smpboot: Max logical packages: 1
Jan 30 13:15:32.882759 kernel: smpboot: Total of 4 processors activated (22357.98 BogoMIPS)
Jan 30 13:15:32.882767 kernel: devtmpfs: initialized
Jan 30 13:15:32.882775 kernel: x86/mm: Memory block size: 128MB
Jan 30 13:15:32.882782 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x00800000-0x00807fff] (32768 bytes)
Jan 30 13:15:32.882790 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x0080b000-0x0080bfff] (4096 bytes)
Jan 30 13:15:32.882800 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x00811000-0x008fffff] (978944 bytes)
Jan 30 13:15:32.882808 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x9cb7f000-0x9cbfefff] (524288 bytes)
Jan 30 13:15:32.882816 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x9ce96000-0x9ce97fff] (8192 bytes)
Jan 30 13:15:32.882824 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x9cf60000-0x9cffffff] (655360 bytes)
Jan 30 13:15:32.882831 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Jan 30 13:15:32.882839 kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear)
Jan 30 13:15:32.882877 kernel: pinctrl core: initialized pinctrl subsystem
Jan 30 13:15:32.882885 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Jan 30 13:15:32.882892 kernel: audit: initializing netlink subsys (disabled)
Jan 30 13:15:32.882903 kernel: audit: type=2000 audit(1738242933.534:1): state=initialized audit_enabled=0 res=1
Jan 30 13:15:32.882910 kernel: thermal_sys: Registered thermal governor 'step_wise'
Jan 30 13:15:32.882927 kernel: thermal_sys: Registered thermal governor 'user_space'
Jan 30 13:15:32.882936 kernel: cpuidle: using governor menu
Jan 30 13:15:32.882951 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Jan 30 13:15:32.882959 kernel: dca service started, version 1.12.1
Jan 30 13:15:32.882967 kernel: PCI: MMCONFIG for domain 0000 [bus 00-ff] at [mem 0xe0000000-0xefffffff] (base 0xe0000000)
Jan 30 13:15:32.882974 kernel: PCI: Using configuration type 1 for base access
Jan 30 13:15:32.882982 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
Jan 30 13:15:32.882993 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Jan 30 13:15:32.883001 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page
Jan 30 13:15:32.883008 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Jan 30 13:15:32.883016 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page
Jan 30 13:15:32.883024 kernel: ACPI: Added _OSI(Module Device)
Jan 30 13:15:32.883031 kernel: ACPI: Added _OSI(Processor Device)
Jan 30 13:15:32.883039 kernel: ACPI: Added _OSI(3.0 _SCP Extensions)
Jan 30 13:15:32.883047 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Jan 30 13:15:32.883054 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Jan 30 13:15:32.883064 kernel: ACPI: _OSC evaluation for CPUs failed, trying _PDC
Jan 30 13:15:32.883072 kernel: ACPI: Interpreter enabled
Jan 30 13:15:32.883079 kernel: ACPI: PM: (supports S0 S3 S5)
Jan 30 13:15:32.883087 kernel: ACPI: Using IOAPIC for interrupt routing
Jan 30 13:15:32.883095 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Jan 30 13:15:32.883103 kernel: PCI: Using E820 reservations for host bridge windows
Jan 30 13:15:32.883110 kernel: ACPI: Enabled 2 GPEs in block 00 to 3F
Jan 30 13:15:32.883118 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Jan 30 13:15:32.883318 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
Jan 30 13:15:32.883454 kernel: acpi PNP0A08:00: _OSC: platform does not support [PCIeHotplug LTR]
Jan 30 13:15:32.883576 kernel: acpi PNP0A08:00: _OSC: OS now controls [PME AER PCIeCapability]
Jan 30 13:15:32.883586 kernel: PCI host bridge to bus 0000:00
Jan 30 13:15:32.883710 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window]
Jan 30 13:15:32.883824 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window]
Jan 30 13:15:32.883953 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
Jan 30 13:15:32.884069 kernel: pci_bus 0000:00: root bus resource [mem 0x9d000000-0xdfffffff window]
Jan 30 13:15:32.884194 kernel: pci_bus 0000:00: root bus resource [mem 0xf0000000-0xfebfffff window]
Jan 30 13:15:32.884315 kernel: pci_bus 0000:00: root bus resource [mem 0x380000000000-0x3807ffffffff window]
Jan 30 13:15:32.884428 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Jan 30 13:15:32.884566 kernel: pci 0000:00:00.0: [8086:29c0] type 00 class 0x060000
Jan 30 13:15:32.884703 kernel: pci 0000:00:01.0: [1234:1111] type 00 class 0x030000
Jan 30 13:15:32.884830 kernel: pci 0000:00:01.0: reg 0x10: [mem 0xc0000000-0xc0ffffff pref]
Jan 30 13:15:32.884968 kernel: pci 0000:00:01.0: reg 0x18: [mem 0xc1044000-0xc1044fff]
Jan 30 13:15:32.885122 kernel: pci 0000:00:01.0: reg 0x30: [mem 0xffff0000-0xffffffff pref]
Jan 30 13:15:32.887077 kernel: pci 0000:00:01.0: BAR 0: assigned to efifb
Jan 30 13:15:32.887235 kernel: pci 0000:00:01.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff]
Jan 30 13:15:32.887382 kernel: pci 0000:00:02.0: [1af4:1005] type 00 class 0x00ff00
Jan 30 13:15:32.887508 kernel: pci 0000:00:02.0: reg 0x10: [io 0x6100-0x611f]
Jan 30 13:15:32.887641 kernel: pci 0000:00:02.0: reg 0x14: [mem 0xc1043000-0xc1043fff]
Jan 30 13:15:32.887766 kernel: pci 0000:00:02.0: reg 0x20: [mem 0x380000000000-0x380000003fff 64bit pref]
Jan 30 13:15:32.889342 kernel: pci 0000:00:03.0: [1af4:1001] type 00 class 0x010000
Jan 30 13:15:32.889476 kernel: pci 0000:00:03.0: reg 0x10: [io 0x6000-0x607f]
Jan 30 13:15:32.889599 kernel: pci 0000:00:03.0: reg 0x14: [mem 0xc1042000-0xc1042fff]
Jan 30 13:15:32.889723 kernel: pci 0000:00:03.0: reg 0x20: [mem 0x380000004000-0x380000007fff 64bit pref]
Jan 30 13:15:32.889868 kernel: pci 0000:00:04.0: [1af4:1000] type 00 class 0x020000
Jan 30 13:15:32.890011 kernel: pci 0000:00:04.0: reg 0x10: [io 0x60e0-0x60ff]
Jan 30 13:15:32.890133 kernel: pci 0000:00:04.0: reg 0x14: [mem 0xc1041000-0xc1041fff]
Jan 30 13:15:32.890269 kernel: pci 0000:00:04.0: reg 0x20: [mem 0x380000008000-0x38000000bfff 64bit pref]
Jan 30 13:15:32.890394 kernel: pci 0000:00:04.0: reg 0x30: [mem 0xfffc0000-0xffffffff pref]
Jan 30 13:15:32.890525 kernel: pci 0000:00:1f.0: [8086:2918] type 00 class 0x060100
Jan 30 13:15:32.890658 kernel: pci 0000:00:1f.0: quirk: [io 0x0600-0x067f] claimed by ICH6 ACPI/GPIO/TCO
Jan 30 13:15:32.892071 kernel: pci 0000:00:1f.2: [8086:2922] type 00 class 0x010601
Jan 30 13:15:32.892203 kernel: pci 0000:00:1f.2: reg 0x20: [io 0x60c0-0x60df]
Jan 30 13:15:32.892333 kernel: pci 0000:00:1f.2: reg 0x24: [mem 0xc1040000-0xc1040fff]
Jan 30 13:15:32.892466 kernel: pci 0000:00:1f.3: [8086:2930] type 00 class 0x0c0500
Jan 30 13:15:32.892591 kernel: pci 0000:00:1f.3: reg 0x20: [io 0x6080-0x60bf]
Jan 30 13:15:32.892602 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10
Jan 30 13:15:32.892611 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10
Jan 30 13:15:32.892619 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11
Jan 30 13:15:32.892631 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11
Jan 30 13:15:32.892640 kernel: ACPI: PCI: Interrupt link LNKE configured for IRQ 10
Jan 30 13:15:32.892647 kernel: ACPI: PCI: Interrupt link LNKF configured for IRQ 10
Jan 30 13:15:32.892656 kernel: ACPI: PCI: Interrupt link LNKG configured for IRQ 11
Jan 30 13:15:32.892664 kernel: ACPI: PCI: Interrupt link LNKH configured for IRQ 11
Jan 30 13:15:32.892672 kernel: ACPI: PCI: Interrupt link GSIA configured for IRQ 16
Jan 30 13:15:32.892680 kernel: ACPI: PCI: Interrupt link GSIB configured for IRQ 17
Jan 30 13:15:32.892688 kernel: ACPI: PCI: Interrupt link GSIC configured for IRQ 18
Jan 30 13:15:32.892696 kernel: ACPI: PCI: Interrupt link GSID configured for IRQ 19
Jan 30 13:15:32.892707 kernel: ACPI: PCI: Interrupt link GSIE configured for IRQ 20
Jan 30 13:15:32.892714 kernel: ACPI: PCI: Interrupt link GSIF configured for IRQ 21
Jan 30 13:15:32.892722 kernel: ACPI: PCI: Interrupt link GSIG configured for IRQ 22
Jan 30 13:15:32.892731 kernel: ACPI: PCI: Interrupt link GSIH configured for IRQ 23
Jan 30 13:15:32.892739 kernel: iommu: Default domain type: Translated
Jan 30 13:15:32.892747 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
Jan 30 13:15:32.892755 kernel: efivars: Registered efivars operations
Jan 30 13:15:32.892763 kernel: PCI: Using ACPI for IRQ routing
Jan 30 13:15:32.892772 kernel: PCI: pci_cache_line_size set to 64 bytes
Jan 30 13:15:32.892783 kernel: e820: reserve RAM buffer [mem 0x0080b000-0x008fffff]
Jan 30 13:15:32.892791 kernel: e820: reserve RAM buffer [mem 0x00811000-0x008fffff]
Jan 30 13:15:32.892799 kernel: e820: reserve RAM buffer [mem 0x9b314018-0x9bffffff]
Jan 30 13:15:32.892807 kernel: e820: reserve RAM buffer [mem 0x9b351018-0x9bffffff]
Jan 30 13:15:32.892816 kernel: e820: reserve RAM buffer [mem 0x9bd3f000-0x9bffffff]
Jan 30 13:15:32.892824 kernel: e820: reserve RAM buffer [mem 0x9c8ed000-0x9fffffff]
Jan 30 13:15:32.892832 kernel: e820: reserve RAM buffer [mem 0x9ce92000-0x9fffffff]
Jan 30 13:15:32.892840 kernel: e820: reserve RAM buffer [mem 0x9cedc000-0x9fffffff]
Jan 30 13:15:32.894097 kernel: pci 0000:00:01.0: vgaarb: setting as boot VGA device
Jan 30 13:15:32.894237 kernel: pci 0000:00:01.0: vgaarb: bridge control possible
Jan 30 13:15:32.894387 kernel: pci 0000:00:01.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none
Jan 30 13:15:32.894398 kernel: vgaarb: loaded
Jan 30 13:15:32.894407 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0
Jan 30 13:15:32.894415 kernel: hpet0: 3 comparators, 64-bit 100.000000 MHz counter
Jan 30 13:15:32.894423 kernel: clocksource: Switched to clocksource kvm-clock
Jan 30 13:15:32.894432 kernel: VFS: Disk quotas dquot_6.6.0
Jan 30 13:15:32.894440 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Jan 30 13:15:32.894453 kernel: pnp: PnP ACPI init
Jan 30 13:15:32.894588 kernel: system 00:05: [mem 0xe0000000-0xefffffff window] has been reserved
Jan 30 13:15:32.894602 kernel: pnp: PnP ACPI: found 6 devices
Jan 30 13:15:32.894610 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Jan 30 13:15:32.894619 kernel: NET: Registered PF_INET protocol family
Jan 30 13:15:32.894649 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
Jan 30 13:15:32.894660 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
Jan 30 13:15:32.894668 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Jan 30 13:15:32.894679 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
Jan 30 13:15:32.894688 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear)
Jan 30 13:15:32.894696 kernel: TCP: Hash tables configured (established 32768 bind 32768)
Jan 30 13:15:32.894704 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
Jan 30 13:15:32.894713 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
Jan 30 13:15:32.894721 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Jan 30 13:15:32.894730 kernel: NET: Registered PF_XDP protocol family
Jan 30 13:15:32.894870 kernel: pci 0000:00:04.0: can't claim BAR 6 [mem 0xfffc0000-0xffffffff pref]: no compatible bridge window
Jan 30 13:15:32.894998 kernel: pci 0000:00:04.0: BAR 6: assigned [mem 0x9d000000-0x9d03ffff pref]
Jan 30 13:15:32.895124 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window]
Jan 30 13:15:32.895239 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window]
Jan 30 13:15:32.895361 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window]
Jan 30 13:15:32.895473 kernel: pci_bus 0000:00: resource 7 [mem 0x9d000000-0xdfffffff window]
Jan 30 13:15:32.895583 kernel: pci_bus 0000:00: resource 8 [mem 0xf0000000-0xfebfffff window]
Jan 30 13:15:32.895693 kernel: pci_bus 0000:00: resource 9 [mem 0x380000000000-0x3807ffffffff window]
Jan 30 13:15:32.895704 kernel: PCI: CLS 0 bytes, default 64
Jan 30 13:15:32.895713 kernel: Initialise system trusted keyrings
Jan 30 13:15:32.895725 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
Jan 30 13:15:32.895733 kernel: Key type asymmetric registered
Jan 30 13:15:32.895742 kernel: Asymmetric key parser 'x509' registered
Jan 30 13:15:32.895750 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 251)
Jan 30 13:15:32.895759 kernel: io scheduler mq-deadline registered
Jan 30 13:15:32.895767 kernel: io scheduler kyber registered
Jan 30 13:15:32.895775 kernel: io scheduler bfq registered
Jan 30 13:15:32.895783 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00
Jan 30 13:15:32.895792 kernel: ACPI: \_SB_.GSIG: Enabled at IRQ 22
Jan 30 13:15:32.895803 kernel: ACPI: \_SB_.GSIH: Enabled at IRQ 23
Jan 30 13:15:32.895814 kernel: ACPI: \_SB_.GSIE: Enabled at IRQ 20
Jan 30 13:15:32.895823 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Jan 30 13:15:32.895831 kernel: 00:03: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
Jan 30 13:15:32.895840 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12
Jan 30 13:15:32.895861 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1
Jan 30 13:15:32.895873 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12
Jan 30 13:15:32.896000 kernel: rtc_cmos 00:04: RTC can wake from S4
Jan 30 13:15:32.896013 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0
Jan 30 13:15:32.896128 kernel: rtc_cmos 00:04: registered as rtc0
Jan 30 13:15:32.896251 kernel: rtc_cmos 00:04: setting system clock to 2025-01-30T13:15:32 UTC (1738242932)
Jan 30 13:15:32.896369 kernel: rtc_cmos 00:04: alarms up to one day, y3k, 242 bytes nvram
Jan 30 13:15:32.896380 kernel: amd_pstate: the _CPC object is not present in SBIOS or ACPI disabled
Jan 30 13:15:32.896393 kernel: efifb: probing for efifb
Jan 30 13:15:32.896402 kernel: efifb: framebuffer at 0xc0000000, using 4000k, total 4000k
Jan 30 13:15:32.896410 kernel: efifb: mode is 1280x800x32, linelength=5120, pages=1
Jan 30 13:15:32.896430 kernel: efifb: scrolling: redraw
Jan 30 13:15:32.896439 kernel: efifb: Truecolor: size=8:8:8:8, shift=24:16:8:0
Jan 30 13:15:32.896448 kernel: Console: switching to colour frame buffer device 160x50
Jan 30 13:15:32.896456 kernel: fb0: EFI VGA frame buffer device
Jan 30 13:15:32.896464 kernel: pstore: Using crash dump compression: deflate
Jan 30 13:15:32.896473 kernel: pstore: Registered efi_pstore as persistent store backend
Jan 30 13:15:32.896481 kernel: NET: Registered PF_INET6 protocol family
Jan 30 13:15:32.896492 kernel: Segment Routing with IPv6
Jan 30 13:15:32.896501 kernel: In-situ OAM (IOAM) with IPv6
Jan 30 13:15:32.896509 kernel: NET: Registered PF_PACKET protocol family
Jan 30 13:15:32.896517 kernel: Key type dns_resolver registered
Jan 30 13:15:32.896525 kernel: IPI shorthand broadcast: enabled
Jan 30 13:15:32.896533 kernel: sched_clock: Marking stable (565002974, 155666433)->(769181480, -48512073)
Jan 30 13:15:32.896542 kernel: registered taskstats version 1
Jan 30 13:15:32.896550 kernel: Loading compiled-in X.509 certificates
Jan 30 13:15:32.896558 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.74-flatcar: 7f0738935740330d55027faa5877e7155d5f24f4'
Jan 30 13:15:32.896570 kernel: Key type .fscrypt registered
Jan 30 13:15:32.896584 kernel: Key type fscrypt-provisioning registered
Jan 30 13:15:32.896593 kernel: ima: No TPM chip found, activating TPM-bypass!
Jan 30 13:15:32.896601 kernel: ima: Allocated hash algorithm: sha1 Jan 30 13:15:32.896609 kernel: ima: No architecture policies found Jan 30 13:15:32.896617 kernel: clk: Disabling unused clocks Jan 30 13:15:32.896625 kernel: Freeing unused kernel image (initmem) memory: 43320K Jan 30 13:15:32.896634 kernel: Write protecting the kernel read-only data: 38912k Jan 30 13:15:32.896645 kernel: Freeing unused kernel image (rodata/data gap) memory: 1776K Jan 30 13:15:32.896653 kernel: Run /init as init process Jan 30 13:15:32.896661 kernel: with arguments: Jan 30 13:15:32.896669 kernel: /init Jan 30 13:15:32.896677 kernel: with environment: Jan 30 13:15:32.896686 kernel: HOME=/ Jan 30 13:15:32.896694 kernel: TERM=linux Jan 30 13:15:32.896702 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a Jan 30 13:15:32.896713 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Jan 30 13:15:32.896727 systemd[1]: Detected virtualization kvm. Jan 30 13:15:32.896736 systemd[1]: Detected architecture x86-64. Jan 30 13:15:32.896745 systemd[1]: Running in initrd. Jan 30 13:15:32.896754 systemd[1]: No hostname configured, using default hostname. Jan 30 13:15:32.896762 systemd[1]: Hostname set to . Jan 30 13:15:32.896771 systemd[1]: Initializing machine ID from VM UUID. Jan 30 13:15:32.896780 systemd[1]: Queued start job for default target initrd.target. Jan 30 13:15:32.896789 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jan 30 13:15:32.896801 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. 
Jan 30 13:15:32.896810 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM... Jan 30 13:15:32.896820 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Jan 30 13:15:32.896829 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT... Jan 30 13:15:32.896838 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A... Jan 30 13:15:32.896910 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132... Jan 30 13:15:32.896923 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr... Jan 30 13:15:32.896932 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jan 30 13:15:32.896941 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Jan 30 13:15:32.896950 systemd[1]: Reached target paths.target - Path Units. Jan 30 13:15:32.896959 systemd[1]: Reached target slices.target - Slice Units. Jan 30 13:15:32.896968 systemd[1]: Reached target swap.target - Swaps. Jan 30 13:15:32.896977 systemd[1]: Reached target timers.target - Timer Units. Jan 30 13:15:32.896985 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket. Jan 30 13:15:32.896994 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket. Jan 30 13:15:32.897006 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). Jan 30 13:15:32.897015 systemd[1]: Listening on systemd-journald.socket - Journal Socket. Jan 30 13:15:32.897024 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Jan 30 13:15:32.897033 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Jan 30 13:15:32.897042 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. 
Jan 30 13:15:32.897051 systemd[1]: Reached target sockets.target - Socket Units. Jan 30 13:15:32.897059 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup... Jan 30 13:15:32.897068 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Jan 30 13:15:32.897079 systemd[1]: Finished network-cleanup.service - Network Cleanup. Jan 30 13:15:32.897088 systemd[1]: Starting systemd-fsck-usr.service... Jan 30 13:15:32.897097 systemd[1]: Starting systemd-journald.service - Journal Service... Jan 30 13:15:32.897106 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Jan 30 13:15:32.897115 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jan 30 13:15:32.897123 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup. Jan 30 13:15:32.897132 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Jan 30 13:15:32.897141 systemd[1]: Finished systemd-fsck-usr.service. Jan 30 13:15:32.897153 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Jan 30 13:15:32.897181 systemd-journald[194]: Collecting audit messages is disabled. Jan 30 13:15:32.897208 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jan 30 13:15:32.897217 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Jan 30 13:15:32.897228 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Jan 30 13:15:32.897238 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Jan 30 13:15:32.897256 systemd-journald[194]: Journal started Jan 30 13:15:32.897283 systemd-journald[194]: Runtime Journal (/run/log/journal/09ef0fc140134f1793f2afde0d8d6a49) is 6.0M, max 48.2M, 42.2M free. 
Jan 30 13:15:32.882972 systemd-modules-load[195]: Inserted module 'overlay' Jan 30 13:15:32.901344 systemd[1]: Started systemd-journald.service - Journal Service. Jan 30 13:15:32.903337 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jan 30 13:15:32.914039 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. Jan 30 13:15:32.915370 systemd-modules-load[195]: Inserted module 'br_netfilter' Jan 30 13:15:32.916366 kernel: Bridge firewalling registered Jan 30 13:15:32.917005 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Jan 30 13:15:32.917718 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Jan 30 13:15:32.920111 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jan 30 13:15:32.923709 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... Jan 30 13:15:32.926237 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Jan 30 13:15:32.936724 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Jan 30 13:15:32.940080 dracut-cmdline[224]: dracut-dracut-053 Jan 30 13:15:32.940120 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Jan 30 13:15:32.943986 dracut-cmdline[224]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=fe60919b0c6f6abb7495678f87f7024e97a038fc343fa31a123a43ef5f489466 Jan 30 13:15:32.952973 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... 
Jan 30 13:15:32.984395 systemd-resolved[240]: Positive Trust Anchors: Jan 30 13:15:32.984409 systemd-resolved[240]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Jan 30 13:15:32.984449 systemd-resolved[240]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Jan 30 13:15:32.987563 systemd-resolved[240]: Defaulting to hostname 'linux'. Jan 30 13:15:32.988881 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Jan 30 13:15:32.995770 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Jan 30 13:15:33.031885 kernel: SCSI subsystem initialized Jan 30 13:15:33.040878 kernel: Loading iSCSI transport class v2.0-870. Jan 30 13:15:33.053898 kernel: iscsi: registered transport (tcp) Jan 30 13:15:33.079909 kernel: iscsi: registered transport (qla4xxx) Jan 30 13:15:33.079988 kernel: QLogic iSCSI HBA Driver Jan 30 13:15:33.126092 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. Jan 30 13:15:33.133992 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... Jan 30 13:15:33.157881 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. 
Jan 30 13:15:33.157930 kernel: device-mapper: uevent: version 1.0.3 Jan 30 13:15:33.159733 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com Jan 30 13:15:33.202893 kernel: raid6: avx2x4 gen() 21585 MB/s Jan 30 13:15:33.219886 kernel: raid6: avx2x2 gen() 24685 MB/s Jan 30 13:15:33.236994 kernel: raid6: avx2x1 gen() 25620 MB/s Jan 30 13:15:33.237061 kernel: raid6: using algorithm avx2x1 gen() 25620 MB/s Jan 30 13:15:33.255127 kernel: raid6: .... xor() 15554 MB/s, rmw enabled Jan 30 13:15:33.255200 kernel: raid6: using avx2x2 recovery algorithm Jan 30 13:15:33.275878 kernel: xor: automatically using best checksumming function avx Jan 30 13:15:33.421882 kernel: Btrfs loaded, zoned=no, fsverity=no Jan 30 13:15:33.434402 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. Jan 30 13:15:33.447974 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Jan 30 13:15:33.461454 systemd-udevd[415]: Using default interface naming scheme 'v255'. Jan 30 13:15:33.466221 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Jan 30 13:15:33.475022 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... Jan 30 13:15:33.487511 dracut-pre-trigger[420]: rd.md=0: removing MD RAID activation Jan 30 13:15:33.516737 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. Jan 30 13:15:33.526020 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Jan 30 13:15:33.589084 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Jan 30 13:15:33.601987 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... Jan 30 13:15:33.613066 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. Jan 30 13:15:33.615260 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. 
Jan 30 13:15:33.618386 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Jan 30 13:15:33.619571 systemd[1]: Reached target remote-fs.target - Remote File Systems. Jan 30 13:15:33.632272 kernel: virtio_blk virtio1: 4/0/0 default/read/poll queues Jan 30 13:15:33.667153 kernel: virtio_blk virtio1: [vda] 19775488 512-byte logical blocks (10.1 GB/9.43 GiB) Jan 30 13:15:33.667430 kernel: cryptd: max_cpu_qlen set to 1000 Jan 30 13:15:33.667455 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. Jan 30 13:15:33.667472 kernel: GPT:9289727 != 19775487 Jan 30 13:15:33.667505 kernel: GPT:Alternate GPT header not at the end of the disk. Jan 30 13:15:33.667524 kernel: GPT:9289727 != 19775487 Jan 30 13:15:33.667542 kernel: GPT: Use GNU Parted to correct GPT errors. Jan 30 13:15:33.667562 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Jan 30 13:15:33.667580 kernel: AVX2 version of gcm_enc/dec engaged. Jan 30 13:15:33.667600 kernel: AES CTR mode by8 optimization enabled Jan 30 13:15:33.631033 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... Jan 30 13:15:33.639766 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. Jan 30 13:15:33.664307 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Jan 30 13:15:33.664467 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jan 30 13:15:33.675091 kernel: libata version 3.00 loaded. Jan 30 13:15:33.665895 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Jan 30 13:15:33.667767 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jan 30 13:15:33.668068 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jan 30 13:15:33.672670 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... Jan 30 13:15:33.684264 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... 
Jan 30 13:15:33.700445 kernel: ahci 0000:00:1f.2: version 3.0 Jan 30 13:15:33.721252 kernel: ACPI: \_SB_.GSIA: Enabled at IRQ 16 Jan 30 13:15:33.721275 kernel: BTRFS: device fsid f8084233-4a6f-4e67-af0b-519e43b19e58 devid 1 transid 41 /dev/vda3 scanned by (udev-worker) (462) Jan 30 13:15:33.721287 kernel: ahci 0000:00:1f.2: AHCI 0001.0000 32 slots 6 ports 1.5 Gbps 0x3f impl SATA mode Jan 30 13:15:33.721441 kernel: ahci 0000:00:1f.2: flags: 64bit ncq only Jan 30 13:15:33.721583 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/vda6 scanned by (udev-worker) (464) Jan 30 13:15:33.721596 kernel: scsi host0: ahci Jan 30 13:15:33.721743 kernel: scsi host1: ahci Jan 30 13:15:33.721902 kernel: scsi host2: ahci Jan 30 13:15:33.722052 kernel: scsi host3: ahci Jan 30 13:15:33.722191 kernel: scsi host4: ahci Jan 30 13:15:33.722345 kernel: scsi host5: ahci Jan 30 13:15:33.722503 kernel: ata1: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040100 irq 34 Jan 30 13:15:33.722515 kernel: ata2: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040180 irq 34 Jan 30 13:15:33.722525 kernel: ata3: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040200 irq 34 Jan 30 13:15:33.722536 kernel: ata4: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040280 irq 34 Jan 30 13:15:33.722549 kernel: ata5: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040300 irq 34 Jan 30 13:15:33.722560 kernel: ata6: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040380 irq 34 Jan 30 13:15:33.701978 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT. Jan 30 13:15:33.716696 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM. Jan 30 13:15:33.738706 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A. 
Jan 30 13:15:33.739952 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132. Jan 30 13:15:33.746952 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. Jan 30 13:15:33.756952 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... Jan 30 13:15:33.758074 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jan 30 13:15:33.758127 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jan 30 13:15:33.760506 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... Jan 30 13:15:33.763503 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jan 30 13:15:33.768922 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Jan 30 13:15:33.768945 disk-uuid[565]: Primary Header is updated. Jan 30 13:15:33.768945 disk-uuid[565]: Secondary Entries is updated. Jan 30 13:15:33.768945 disk-uuid[565]: Secondary Header is updated. Jan 30 13:15:33.772883 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Jan 30 13:15:33.781535 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jan 30 13:15:33.792023 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Jan 30 13:15:33.815786 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. 
Jan 30 13:15:34.032955 kernel: ata2: SATA link down (SStatus 0 SControl 300) Jan 30 13:15:34.033030 kernel: ata5: SATA link down (SStatus 0 SControl 300) Jan 30 13:15:34.033042 kernel: ata4: SATA link down (SStatus 0 SControl 300) Jan 30 13:15:34.033873 kernel: ata3: SATA link up 1.5 Gbps (SStatus 113 SControl 300) Jan 30 13:15:34.034883 kernel: ata6: SATA link down (SStatus 0 SControl 300) Jan 30 13:15:34.035879 kernel: ata1: SATA link down (SStatus 0 SControl 300) Jan 30 13:15:34.036872 kernel: ata3.00: ATAPI: QEMU DVD-ROM, 2.5+, max UDMA/100 Jan 30 13:15:34.036889 kernel: ata3.00: applying bridge limits Jan 30 13:15:34.037886 kernel: ata3.00: configured for UDMA/100 Jan 30 13:15:34.039867 kernel: scsi 2:0:0:0: CD-ROM QEMU QEMU DVD-ROM 2.5+ PQ: 0 ANSI: 5 Jan 30 13:15:34.083404 kernel: sr 2:0:0:0: [sr0] scsi3-mmc drive: 4x/4x cd/rw xa/form2 tray Jan 30 13:15:34.096451 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20 Jan 30 13:15:34.096469 kernel: sr 2:0:0:0: Attached scsi CD-ROM sr0 Jan 30 13:15:34.774873 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Jan 30 13:15:34.775329 disk-uuid[567]: The operation has completed successfully. Jan 30 13:15:34.805524 systemd[1]: disk-uuid.service: Deactivated successfully. Jan 30 13:15:34.805644 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. Jan 30 13:15:34.835024 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr... Jan 30 13:15:34.838624 sh[596]: Success Jan 30 13:15:34.850904 kernel: device-mapper: verity: sha256 using implementation "sha256-ni" Jan 30 13:15:34.884425 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr. Jan 30 13:15:34.904515 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr... Jan 30 13:15:34.907386 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr. 
Jan 30 13:15:34.918463 kernel: BTRFS info (device dm-0): first mount of filesystem f8084233-4a6f-4e67-af0b-519e43b19e58 Jan 30 13:15:34.918499 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm Jan 30 13:15:34.918514 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead Jan 30 13:15:34.919500 kernel: BTRFS info (device dm-0): disabling log replay at mount time Jan 30 13:15:34.920253 kernel: BTRFS info (device dm-0): using free space tree Jan 30 13:15:34.925148 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr. Jan 30 13:15:34.927444 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. Jan 30 13:15:34.934979 systemd[1]: Starting ignition-setup.service - Ignition (setup)... Jan 30 13:15:34.936090 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... Jan 30 13:15:34.950287 kernel: BTRFS info (device vda6): first mount of filesystem 8f723f8b-dc93-4eaf-8b2c-0038aa5af52c Jan 30 13:15:34.950335 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Jan 30 13:15:34.950351 kernel: BTRFS info (device vda6): using free space tree Jan 30 13:15:34.953974 kernel: BTRFS info (device vda6): auto enabling async discard Jan 30 13:15:34.963061 systemd[1]: mnt-oem.mount: Deactivated successfully. Jan 30 13:15:34.965873 kernel: BTRFS info (device vda6): last unmount of filesystem 8f723f8b-dc93-4eaf-8b2c-0038aa5af52c Jan 30 13:15:34.976481 systemd[1]: Finished ignition-setup.service - Ignition (setup). Jan 30 13:15:34.986993 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... 
Jan 30 13:15:35.044985 ignition[702]: Ignition 2.20.0 Jan 30 13:15:35.045001 ignition[702]: Stage: fetch-offline Jan 30 13:15:35.045036 ignition[702]: no configs at "/usr/lib/ignition/base.d" Jan 30 13:15:35.045046 ignition[702]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Jan 30 13:15:35.045151 ignition[702]: parsed url from cmdline: "" Jan 30 13:15:35.045156 ignition[702]: no config URL provided Jan 30 13:15:35.045163 ignition[702]: reading system config file "/usr/lib/ignition/user.ign" Jan 30 13:15:35.049594 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Jan 30 13:15:35.045174 ignition[702]: no config at "/usr/lib/ignition/user.ign" Jan 30 13:15:35.045220 ignition[702]: op(1): [started] loading QEMU firmware config module Jan 30 13:15:35.045227 ignition[702]: op(1): executing: "modprobe" "qemu_fw_cfg" Jan 30 13:15:35.052838 ignition[702]: op(1): [finished] loading QEMU firmware config module Jan 30 13:15:35.053225 ignition[702]: parsing config with SHA512: c6d2957fd8b01dd4911ab9d4e52a53f58a13e817f4b21396ba6100dfbaaf5ae5d0fbe261a741e44869ef118633e3056a5f5b171e25b27340c703dd13000faf75 Jan 30 13:15:35.060453 unknown[702]: fetched base config from "system" Jan 30 13:15:35.060463 unknown[702]: fetched user config from "qemu" Jan 30 13:15:35.061982 systemd[1]: Starting systemd-networkd.service - Network Configuration... Jan 30 13:15:35.064103 ignition[702]: fetch-offline: fetch-offline passed Jan 30 13:15:35.064177 ignition[702]: Ignition finished successfully Jan 30 13:15:35.068029 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). 
Jan 30 13:15:35.084017 systemd-networkd[784]: lo: Link UP Jan 30 13:15:35.084028 systemd-networkd[784]: lo: Gained carrier Jan 30 13:15:35.085589 systemd-networkd[784]: Enumeration completed Jan 30 13:15:35.086069 systemd-networkd[784]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jan 30 13:15:35.086075 systemd-networkd[784]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Jan 30 13:15:35.086641 systemd[1]: Started systemd-networkd.service - Network Configuration. Jan 30 13:15:35.087282 systemd-networkd[784]: eth0: Link UP Jan 30 13:15:35.087287 systemd-networkd[784]: eth0: Gained carrier Jan 30 13:15:35.087295 systemd-networkd[784]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jan 30 13:15:35.095171 systemd[1]: Reached target network.target - Network. Jan 30 13:15:35.096879 systemd[1]: ignition-fetch.service - Ignition (fetch) was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json). Jan 30 13:15:35.111967 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... Jan 30 13:15:35.118894 systemd-networkd[784]: eth0: DHCPv4 address 10.0.0.151/16, gateway 10.0.0.1 acquired from 10.0.0.1 Jan 30 13:15:35.126057 ignition[787]: Ignition 2.20.0 Jan 30 13:15:35.126068 ignition[787]: Stage: kargs Jan 30 13:15:35.126244 ignition[787]: no configs at "/usr/lib/ignition/base.d" Jan 30 13:15:35.126254 ignition[787]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Jan 30 13:15:35.126880 ignition[787]: kargs: kargs passed Jan 30 13:15:35.126923 ignition[787]: Ignition finished successfully Jan 30 13:15:35.133783 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). Jan 30 13:15:35.144992 systemd[1]: Starting ignition-disks.service - Ignition (disks)... 
Jan 30 13:15:35.158370 ignition[796]: Ignition 2.20.0 Jan 30 13:15:35.158380 ignition[796]: Stage: disks Jan 30 13:15:35.158533 ignition[796]: no configs at "/usr/lib/ignition/base.d" Jan 30 13:15:35.158543 ignition[796]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Jan 30 13:15:35.162164 ignition[796]: disks: disks passed Jan 30 13:15:35.162218 ignition[796]: Ignition finished successfully Jan 30 13:15:35.165523 systemd[1]: Finished ignition-disks.service - Ignition (disks). Jan 30 13:15:35.166319 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. Jan 30 13:15:35.167762 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Jan 30 13:15:35.169778 systemd[1]: Reached target local-fs.target - Local File Systems. Jan 30 13:15:35.170103 systemd[1]: Reached target sysinit.target - System Initialization. Jan 30 13:15:35.173789 systemd[1]: Reached target basic.target - Basic System. Jan 30 13:15:35.187953 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... Jan 30 13:15:35.201074 systemd-fsck[806]: ROOT: clean, 14/553520 files, 52654/553472 blocks Jan 30 13:15:35.206799 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. Jan 30 13:15:35.208440 systemd[1]: Mounting sysroot.mount - /sysroot... Jan 30 13:15:35.295883 kernel: EXT4-fs (vda9): mounted filesystem cdc615db-d057-439f-af25-aa57b1c399e2 r/w with ordered data mode. Quota mode: none. Jan 30 13:15:35.296773 systemd[1]: Mounted sysroot.mount - /sysroot. Jan 30 13:15:35.297910 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. Jan 30 13:15:35.314087 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Jan 30 13:15:35.316493 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... Jan 30 13:15:35.318319 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met. 
Jan 30 13:15:35.323568 kernel: BTRFS: device label OEM devid 1 transid 15 /dev/vda6 scanned by mount (814) Jan 30 13:15:35.318365 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Jan 30 13:15:35.318415 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. Jan 30 13:15:35.331834 kernel: BTRFS info (device vda6): first mount of filesystem 8f723f8b-dc93-4eaf-8b2c-0038aa5af52c Jan 30 13:15:35.331880 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Jan 30 13:15:35.331894 kernel: BTRFS info (device vda6): using free space tree Jan 30 13:15:35.331916 kernel: BTRFS info (device vda6): auto enabling async discard Jan 30 13:15:35.326483 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. Jan 30 13:15:35.333943 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Jan 30 13:15:35.349025 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... Jan 30 13:15:35.380447 initrd-setup-root[838]: cut: /sysroot/etc/passwd: No such file or directory Jan 30 13:15:35.384864 initrd-setup-root[845]: cut: /sysroot/etc/group: No such file or directory Jan 30 13:15:35.389531 initrd-setup-root[852]: cut: /sysroot/etc/shadow: No such file or directory Jan 30 13:15:35.394196 initrd-setup-root[859]: cut: /sysroot/etc/gshadow: No such file or directory Jan 30 13:15:35.483184 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. Jan 30 13:15:35.493972 systemd[1]: Starting ignition-mount.service - Ignition (mount)... Jan 30 13:15:35.496028 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... Jan 30 13:15:35.502875 kernel: BTRFS info (device vda6): last unmount of filesystem 8f723f8b-dc93-4eaf-8b2c-0038aa5af52c Jan 30 13:15:35.521685 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. 
Jan 30 13:15:35.524182 ignition[926]: INFO : Ignition 2.20.0 Jan 30 13:15:35.524182 ignition[926]: INFO : Stage: mount Jan 30 13:15:35.525793 ignition[926]: INFO : no configs at "/usr/lib/ignition/base.d" Jan 30 13:15:35.525793 ignition[926]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Jan 30 13:15:35.525793 ignition[926]: INFO : mount: mount passed Jan 30 13:15:35.525793 ignition[926]: INFO : Ignition finished successfully Jan 30 13:15:35.527167 systemd[1]: Finished ignition-mount.service - Ignition (mount). Jan 30 13:15:35.535037 systemd[1]: Starting ignition-files.service - Ignition (files)... Jan 30 13:15:35.917507 systemd[1]: sysroot-oem.mount: Deactivated successfully. Jan 30 13:15:35.927128 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Jan 30 13:15:35.935300 kernel: BTRFS: device label OEM devid 1 transid 16 /dev/vda6 scanned by mount (942) Jan 30 13:15:35.935326 kernel: BTRFS info (device vda6): first mount of filesystem 8f723f8b-dc93-4eaf-8b2c-0038aa5af52c Jan 30 13:15:35.935344 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Jan 30 13:15:35.936868 kernel: BTRFS info (device vda6): using free space tree Jan 30 13:15:35.939869 kernel: BTRFS info (device vda6): auto enabling async discard Jan 30 13:15:35.940873 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. 
Jan 30 13:15:35.964236 ignition[959]: INFO : Ignition 2.20.0
Jan 30 13:15:35.964236 ignition[959]: INFO : Stage: files
Jan 30 13:15:35.966138 ignition[959]: INFO : no configs at "/usr/lib/ignition/base.d"
Jan 30 13:15:35.966138 ignition[959]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Jan 30 13:15:35.966138 ignition[959]: DEBUG : files: compiled without relabeling support, skipping
Jan 30 13:15:35.966138 ignition[959]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Jan 30 13:15:35.966138 ignition[959]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Jan 30 13:15:35.972781 ignition[959]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Jan 30 13:15:35.972781 ignition[959]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Jan 30 13:15:35.972781 ignition[959]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Jan 30 13:15:35.972781 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/home/core/install.sh"
Jan 30 13:15:35.972781 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/home/core/install.sh"
Jan 30 13:15:35.972781 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/etc/flatcar/update.conf"
Jan 30 13:15:35.972781 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Jan 30 13:15:35.972781 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.0-x86-64.raw"
Jan 30 13:15:35.972781 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.0-x86-64.raw"
Jan 30 13:15:35.972781 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.0-x86-64.raw"
Jan 30 13:15:35.972781 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(6): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.32.0-x86-64.raw: attempt #1
Jan 30 13:15:35.969624 unknown[959]: wrote ssh authorized keys file for user: core
Jan 30 13:15:36.250048 systemd-networkd[784]: eth0: Gained IPv6LL
Jan 30 13:15:36.458455 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(6): GET result: OK
Jan 30 13:15:36.773876 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.0-x86-64.raw"
Jan 30 13:15:36.773876 ignition[959]: INFO : files: op(7): [started] processing unit "coreos-metadata.service"
Jan 30 13:15:36.778035 ignition[959]: INFO : files: op(7): op(8): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
Jan 30 13:15:36.778035 ignition[959]: INFO : files: op(7): op(8): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
Jan 30 13:15:36.778035 ignition[959]: INFO : files: op(7): [finished] processing unit "coreos-metadata.service"
Jan 30 13:15:36.778035 ignition[959]: INFO : files: op(9): [started] setting preset to disabled for "coreos-metadata.service"
Jan 30 13:15:36.804282 ignition[959]: INFO : files: op(9): op(a): [started] removing enablement symlink(s) for "coreos-metadata.service"
Jan 30 13:15:36.809379 ignition[959]: INFO : files: op(9): op(a): [finished] removing enablement symlink(s) for "coreos-metadata.service"
Jan 30 13:15:36.811153 ignition[959]: INFO : files: op(9): [finished] setting preset to disabled for "coreos-metadata.service"
Jan 30 13:15:36.812743 ignition[959]: INFO : files: createResultFile: createFiles: op(b): [started] writing file "/sysroot/etc/.ignition-result.json"
Jan 30 13:15:36.814522 ignition[959]: INFO : files: createResultFile: createFiles: op(b): [finished] writing file "/sysroot/etc/.ignition-result.json"
Jan 30 13:15:36.816194 ignition[959]: INFO : files: files passed
Jan 30 13:15:36.816949 ignition[959]: INFO : Ignition finished successfully
Jan 30 13:15:36.820551 systemd[1]: Finished ignition-files.service - Ignition (files).
Jan 30 13:15:36.829074 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
Jan 30 13:15:36.831269 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
Jan 30 13:15:36.833767 systemd[1]: ignition-quench.service: Deactivated successfully.
Jan 30 13:15:36.833948 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
Jan 30 13:15:36.842076 initrd-setup-root-after-ignition[987]: grep: /sysroot/oem/oem-release: No such file or directory
Jan 30 13:15:36.845481 initrd-setup-root-after-ignition[989]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Jan 30 13:15:36.845481 initrd-setup-root-after-ignition[989]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
Jan 30 13:15:36.849910 initrd-setup-root-after-ignition[993]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Jan 30 13:15:36.848503 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
Jan 30 13:15:36.850110 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
Jan 30 13:15:36.863007 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
Jan 30 13:15:36.888256 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Jan 30 13:15:36.888397 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
Jan 30 13:15:36.890771 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
Jan 30 13:15:36.892773 systemd[1]: Reached target initrd.target - Initrd Default Target.
Jan 30 13:15:36.894858 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
Jan 30 13:15:36.904985 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
Jan 30 13:15:36.920578 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Jan 30 13:15:36.931987 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
Jan 30 13:15:36.944099 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
Jan 30 13:15:36.945376 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
Jan 30 13:15:36.947591 systemd[1]: Stopped target timers.target - Timer Units.
Jan 30 13:15:36.949624 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Jan 30 13:15:36.949736 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Jan 30 13:15:36.951860 systemd[1]: Stopped target initrd.target - Initrd Default Target.
Jan 30 13:15:36.953578 systemd[1]: Stopped target basic.target - Basic System.
Jan 30 13:15:36.955816 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
Jan 30 13:15:36.957802 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup.
Jan 30 13:15:36.959810 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
Jan 30 13:15:36.961993 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
Jan 30 13:15:36.964169 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems.
Jan 30 13:15:36.966453 systemd[1]: Stopped target sysinit.target - System Initialization.
Jan 30 13:15:36.968540 systemd[1]: Stopped target local-fs.target - Local File Systems.
Jan 30 13:15:36.970868 systemd[1]: Stopped target swap.target - Swaps.
Jan 30 13:15:36.972780 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Jan 30 13:15:36.972931 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook.
Jan 30 13:15:36.975195 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes.
Jan 30 13:15:36.976622 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Jan 30 13:15:36.978703 systemd[1]: clevis-luks-askpass.path: Deactivated successfully.
Jan 30 13:15:36.978827 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Jan 30 13:15:36.980969 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Jan 30 13:15:36.981087 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook.
Jan 30 13:15:36.983281 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
Jan 30 13:15:36.983392 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline).
Jan 30 13:15:36.985414 systemd[1]: Stopped target paths.target - Path Units.
Jan 30 13:15:36.987189 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Jan 30 13:15:36.991942 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Jan 30 13:15:36.994221 systemd[1]: Stopped target slices.target - Slice Units.
Jan 30 13:15:36.995978 systemd[1]: Stopped target sockets.target - Socket Units.
Jan 30 13:15:36.998043 systemd[1]: iscsid.socket: Deactivated successfully.
Jan 30 13:15:36.998207 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket.
Jan 30 13:15:37.000534 systemd[1]: iscsiuio.socket: Deactivated successfully.
Jan 30 13:15:37.000676 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Jan 30 13:15:37.002454 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
Jan 30 13:15:37.002621 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion.
Jan 30 13:15:37.004551 systemd[1]: ignition-files.service: Deactivated successfully.
Jan 30 13:15:37.004702 systemd[1]: Stopped ignition-files.service - Ignition (files).
Jan 30 13:15:37.015093 systemd[1]: Stopping ignition-mount.service - Ignition (mount)...
Jan 30 13:15:37.018030 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot...
Jan 30 13:15:37.019931 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Jan 30 13:15:37.020313 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices.
Jan 30 13:15:37.022641 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Jan 30 13:15:37.022941 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook.
Jan 30 13:15:37.026764 ignition[1013]: INFO : Ignition 2.20.0
Jan 30 13:15:37.026764 ignition[1013]: INFO : Stage: umount
Jan 30 13:15:37.028447 ignition[1013]: INFO : no configs at "/usr/lib/ignition/base.d"
Jan 30 13:15:37.028447 ignition[1013]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Jan 30 13:15:37.028447 ignition[1013]: INFO : umount: umount passed
Jan 30 13:15:37.028447 ignition[1013]: INFO : Ignition finished successfully
Jan 30 13:15:37.029165 systemd[1]: ignition-mount.service: Deactivated successfully.
Jan 30 13:15:37.029326 systemd[1]: Stopped ignition-mount.service - Ignition (mount).
Jan 30 13:15:37.031840 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Jan 30 13:15:37.032004 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons.
Jan 30 13:15:37.035568 systemd[1]: Stopped target network.target - Network.
Jan 30 13:15:37.036601 systemd[1]: ignition-disks.service: Deactivated successfully.
Jan 30 13:15:37.036683 systemd[1]: Stopped ignition-disks.service - Ignition (disks).
Jan 30 13:15:37.038647 systemd[1]: ignition-kargs.service: Deactivated successfully.
Jan 30 13:15:37.038710 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs).
Jan 30 13:15:37.040816 systemd[1]: ignition-setup.service: Deactivated successfully.
Jan 30 13:15:37.040920 systemd[1]: Stopped ignition-setup.service - Ignition (setup).
Jan 30 13:15:37.043102 systemd[1]: ignition-setup-pre.service: Deactivated successfully.
Jan 30 13:15:37.043192 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup.
Jan 30 13:15:37.045389 systemd[1]: Stopping systemd-networkd.service - Network Configuration...
Jan 30 13:15:37.047217 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution...
Jan 30 13:15:37.050337 systemd[1]: sysroot-boot.mount: Deactivated successfully.
Jan 30 13:15:37.052879 systemd-networkd[784]: eth0: DHCPv6 lease lost
Jan 30 13:15:37.055151 systemd[1]: systemd-networkd.service: Deactivated successfully.
Jan 30 13:15:37.055291 systemd[1]: Stopped systemd-networkd.service - Network Configuration.
Jan 30 13:15:37.058066 systemd[1]: systemd-resolved.service: Deactivated successfully.
Jan 30 13:15:37.058201 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution.
Jan 30 13:15:37.061773 systemd[1]: systemd-networkd.socket: Deactivated successfully.
Jan 30 13:15:37.061825 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket.
Jan 30 13:15:37.074942 systemd[1]: Stopping network-cleanup.service - Network Cleanup...
Jan 30 13:15:37.076914 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
Jan 30 13:15:37.076977 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Jan 30 13:15:37.079146 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Jan 30 13:15:37.079194 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Jan 30 13:15:37.081577 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Jan 30 13:15:37.081629 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules.
Jan 30 13:15:37.083675 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
Jan 30 13:15:37.083723 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories.
Jan 30 13:15:37.085990 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files...
Jan 30 13:15:37.096901 systemd[1]: network-cleanup.service: Deactivated successfully.
Jan 30 13:15:37.097031 systemd[1]: Stopped network-cleanup.service - Network Cleanup.
Jan 30 13:15:37.111758 systemd[1]: systemd-udevd.service: Deactivated successfully.
Jan 30 13:15:37.111981 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files.
Jan 30 13:15:37.114341 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Jan 30 13:15:37.114391 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket.
Jan 30 13:15:37.116400 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Jan 30 13:15:37.116443 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket.
Jan 30 13:15:37.118519 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Jan 30 13:15:37.118570 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook.
Jan 30 13:15:37.120670 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Jan 30 13:15:37.120719 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook.
Jan 30 13:15:37.143442 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Jan 30 13:15:37.143499 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jan 30 13:15:37.161063 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database...
Jan 30 13:15:37.163376 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Jan 30 13:15:37.163444 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Jan 30 13:15:37.165842 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully.
Jan 30 13:15:37.167099 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Jan 30 13:15:37.170791 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Jan 30 13:15:37.171756 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes.
Jan 30 13:15:37.174168 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Jan 30 13:15:37.175182 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Jan 30 13:15:37.177712 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Jan 30 13:15:37.178852 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database.
Jan 30 13:15:37.237903 systemd[1]: sysroot-boot.service: Deactivated successfully.
Jan 30 13:15:37.238087 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot.
Jan 30 13:15:37.239335 systemd[1]: Reached target initrd-switch-root.target - Switch Root.
Jan 30 13:15:37.242623 systemd[1]: initrd-setup-root.service: Deactivated successfully.
Jan 30 13:15:37.242727 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup.
Jan 30 13:15:37.261053 systemd[1]: Starting initrd-switch-root.service - Switch Root...
Jan 30 13:15:37.269450 systemd[1]: Switching root.
Jan 30 13:15:37.306788 systemd-journald[194]: Journal stopped
Jan 30 13:15:38.404123 systemd-journald[194]: Received SIGTERM from PID 1 (systemd).
Jan 30 13:15:38.404187 kernel: SELinux: policy capability network_peer_controls=1
Jan 30 13:15:38.404205 kernel: SELinux: policy capability open_perms=1
Jan 30 13:15:38.404217 kernel: SELinux: policy capability extended_socket_class=1
Jan 30 13:15:38.404228 kernel: SELinux: policy capability always_check_network=0
Jan 30 13:15:38.404239 kernel: SELinux: policy capability cgroup_seclabel=1
Jan 30 13:15:38.404250 kernel: SELinux: policy capability nnp_nosuid_transition=1
Jan 30 13:15:38.404264 kernel: SELinux: policy capability genfs_seclabel_symlinks=0
Jan 30 13:15:38.404276 kernel: SELinux: policy capability ioctl_skip_cloexec=0
Jan 30 13:15:38.404287 kernel: audit: type=1403 audit(1738242937.693:2): auid=4294967295 ses=4294967295 lsm=selinux res=1
Jan 30 13:15:38.404299 systemd[1]: Successfully loaded SELinux policy in 41.339ms.
Jan 30 13:15:38.404327 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 12.677ms.
Jan 30 13:15:38.404344 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Jan 30 13:15:38.404359 systemd[1]: Detected virtualization kvm.
Jan 30 13:15:38.404372 systemd[1]: Detected architecture x86-64.
Jan 30 13:15:38.404383 systemd[1]: Detected first boot.
Jan 30 13:15:38.404400 systemd[1]: Initializing machine ID from VM UUID.
Jan 30 13:15:38.404412 zram_generator::config[1056]: No configuration found.
Jan 30 13:15:38.404427 systemd[1]: Populated /etc with preset unit settings.
Jan 30 13:15:38.404439 systemd[1]: initrd-switch-root.service: Deactivated successfully.
Jan 30 13:15:38.404453 systemd[1]: Stopped initrd-switch-root.service - Switch Root.
Jan 30 13:15:38.404465 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1.
Jan 30 13:15:38.404478 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config.
Jan 30 13:15:38.404490 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run.
Jan 30 13:15:38.404502 systemd[1]: Created slice system-getty.slice - Slice /system/getty.
Jan 30 13:15:38.404514 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe.
Jan 30 13:15:38.404527 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty.
Jan 30 13:15:38.404539 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit.
Jan 30 13:15:38.404552 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck.
Jan 30 13:15:38.404567 systemd[1]: Created slice user.slice - User and Session Slice.
Jan 30 13:15:38.404579 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Jan 30 13:15:38.404592 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Jan 30 13:15:38.404604 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch.
Jan 30 13:15:38.404616 systemd[1]: Set up automount boot.automount - Boot partition Automount Point.
Jan 30 13:15:38.404628 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point.
Jan 30 13:15:38.404641 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Jan 30 13:15:38.404656 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0...
Jan 30 13:15:38.404670 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Jan 30 13:15:38.404685 systemd[1]: Stopped target initrd-switch-root.target - Switch Root.
Jan 30 13:15:38.404698 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems.
Jan 30 13:15:38.404710 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System.
Jan 30 13:15:38.404722 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes.
Jan 30 13:15:38.404735 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Jan 30 13:15:38.404747 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Jan 30 13:15:38.404759 systemd[1]: Reached target slices.target - Slice Units.
Jan 30 13:15:38.404773 systemd[1]: Reached target swap.target - Swaps.
Jan 30 13:15:38.404786 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes.
Jan 30 13:15:38.404798 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket.
Jan 30 13:15:38.404814 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Jan 30 13:15:38.404827 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Jan 30 13:15:38.404838 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Jan 30 13:15:38.404863 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket.
Jan 30 13:15:38.404876 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System...
Jan 30 13:15:38.404888 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System...
Jan 30 13:15:38.404900 systemd[1]: Mounting media.mount - External Media Directory...
Jan 30 13:15:38.404915 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jan 30 13:15:38.404928 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System...
Jan 30 13:15:38.404939 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System...
Jan 30 13:15:38.404951 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp...
Jan 30 13:15:38.404964 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
Jan 30 13:15:38.404979 systemd[1]: Reached target machines.target - Containers.
Jan 30 13:15:38.404991 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files...
Jan 30 13:15:38.405003 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Jan 30 13:15:38.405017 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Jan 30 13:15:38.405029 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs...
Jan 30 13:15:38.405041 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Jan 30 13:15:38.405053 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Jan 30 13:15:38.405066 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Jan 30 13:15:38.405078 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse...
Jan 30 13:15:38.405097 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Jan 30 13:15:38.405110 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
Jan 30 13:15:38.405125 systemd[1]: systemd-fsck-root.service: Deactivated successfully.
Jan 30 13:15:38.405137 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device.
Jan 30 13:15:38.405149 systemd[1]: systemd-fsck-usr.service: Deactivated successfully.
Jan 30 13:15:38.405161 systemd[1]: Stopped systemd-fsck-usr.service.
Jan 30 13:15:38.405173 systemd[1]: Starting systemd-journald.service - Journal Service...
Jan 30 13:15:38.405185 kernel: fuse: init (API version 7.39)
Jan 30 13:15:38.405197 kernel: loop: module loaded
Jan 30 13:15:38.405209 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Jan 30 13:15:38.405221 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Jan 30 13:15:38.405235 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems...
Jan 30 13:15:38.405248 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Jan 30 13:15:38.405260 systemd[1]: verity-setup.service: Deactivated successfully.
Jan 30 13:15:38.405272 systemd[1]: Stopped verity-setup.service.
Jan 30 13:15:38.405285 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jan 30 13:15:38.405313 systemd-journald[1119]: Collecting audit messages is disabled.
Jan 30 13:15:38.405338 kernel: ACPI: bus type drm_connector registered
Jan 30 13:15:38.405351 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System.
Jan 30 13:15:38.405364 systemd-journald[1119]: Journal started
Jan 30 13:15:38.405386 systemd-journald[1119]: Runtime Journal (/run/log/journal/09ef0fc140134f1793f2afde0d8d6a49) is 6.0M, max 48.2M, 42.2M free.
Jan 30 13:15:38.190433 systemd[1]: Queued start job for default target multi-user.target.
Jan 30 13:15:38.205882 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6.
Jan 30 13:15:38.206337 systemd[1]: systemd-journald.service: Deactivated successfully.
Jan 30 13:15:38.412976 systemd[1]: Started systemd-journald.service - Journal Service.
Jan 30 13:15:38.413724 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System.
Jan 30 13:15:38.415143 systemd[1]: Mounted media.mount - External Media Directory.
Jan 30 13:15:38.416260 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System.
Jan 30 13:15:38.417474 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System.
Jan 30 13:15:38.418702 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp.
Jan 30 13:15:38.419955 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Jan 30 13:15:38.421640 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Jan 30 13:15:38.421816 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs.
Jan 30 13:15:38.423443 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Jan 30 13:15:38.423612 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Jan 30 13:15:38.425175 systemd[1]: modprobe@drm.service: Deactivated successfully.
Jan 30 13:15:38.425347 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Jan 30 13:15:38.426977 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Jan 30 13:15:38.427228 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Jan 30 13:15:38.428806 systemd[1]: modprobe@fuse.service: Deactivated successfully.
Jan 30 13:15:38.428989 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse.
Jan 30 13:15:38.430475 systemd[1]: modprobe@loop.service: Deactivated successfully.
Jan 30 13:15:38.430645 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Jan 30 13:15:38.432048 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Jan 30 13:15:38.433579 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Jan 30 13:15:38.435125 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems.
Jan 30 13:15:38.449669 systemd[1]: Reached target network-pre.target - Preparation for Network.
Jan 30 13:15:38.460965 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System...
Jan 30 13:15:38.463274 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System...
Jan 30 13:15:38.464536 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
Jan 30 13:15:38.464653 systemd[1]: Reached target local-fs.target - Local File Systems.
Jan 30 13:15:38.466661 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink).
Jan 30 13:15:38.469049 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown...
Jan 30 13:15:38.471315 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache...
Jan 30 13:15:38.473537 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Jan 30 13:15:38.475734 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database...
Jan 30 13:15:38.478547 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage...
Jan 30 13:15:38.479778 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Jan 30 13:15:38.483945 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed...
Jan 30 13:15:38.485153 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Jan 30 13:15:38.486654 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Jan 30 13:15:38.490232 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/...
Jan 30 13:15:38.493059 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Jan 30 13:15:38.496497 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System.
Jan 30 13:15:38.498310 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System.
Jan 30 13:15:38.499915 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown.
Jan 30 13:15:38.520941 systemd-journald[1119]: Time spent on flushing to /var/log/journal/09ef0fc140134f1793f2afde0d8d6a49 is 29.467ms for 1030 entries.
Jan 30 13:15:38.520941 systemd-journald[1119]: System Journal (/var/log/journal/09ef0fc140134f1793f2afde0d8d6a49) is 8.0M, max 195.6M, 187.6M free.
Jan 30 13:15:38.610514 systemd-journald[1119]: Received client request to flush runtime journal.
Jan 30 13:15:38.610718 kernel: loop0: detected capacity change from 0 to 138184
Jan 30 13:15:38.610762 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher
Jan 30 13:15:38.530997 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Jan 30 13:15:38.540602 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization...
Jan 30 13:15:38.542654 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed.
Jan 30 13:15:38.544899 systemd[1]: Reached target first-boot-complete.target - First Boot Complete.
Jan 30 13:15:38.551999 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk...
Jan 30 13:15:38.555230 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files.
Jan 30 13:15:38.557425 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Jan 30 13:15:38.563260 udevadm[1177]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation-early.service, lvm2-activation.service not to pull it in.
Jan 30 13:15:38.564177 systemd-tmpfiles[1167]: ACLs are not supported, ignoring.
Jan 30 13:15:38.564190 systemd-tmpfiles[1167]: ACLs are not supported, ignoring.
Jan 30 13:15:38.571044 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Jan 30 13:15:38.584113 systemd[1]: Starting systemd-sysusers.service - Create System Users... Jan 30 13:15:38.614733 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. Jan 30 13:15:38.629067 systemd[1]: Finished systemd-sysusers.service - Create System Users. Jan 30 13:15:38.638016 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Jan 30 13:15:38.648882 kernel: loop1: detected capacity change from 0 to 218376 Jan 30 13:15:38.653646 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Jan 30 13:15:38.654413 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk. Jan 30 13:15:38.657611 systemd-tmpfiles[1195]: ACLs are not supported, ignoring. Jan 30 13:15:38.657632 systemd-tmpfiles[1195]: ACLs are not supported, ignoring. Jan 30 13:15:38.663383 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jan 30 13:15:38.687243 kernel: loop2: detected capacity change from 0 to 141000 Jan 30 13:15:38.717872 kernel: loop3: detected capacity change from 0 to 138184 Jan 30 13:15:38.731894 kernel: loop4: detected capacity change from 0 to 218376 Jan 30 13:15:38.741872 kernel: loop5: detected capacity change from 0 to 141000 Jan 30 13:15:38.754240 (sd-merge)[1201]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes'. Jan 30 13:15:38.754828 (sd-merge)[1201]: Merged extensions into '/usr'. Jan 30 13:15:38.758962 systemd[1]: Reloading requested from client PID 1162 ('systemd-sysext') (unit systemd-sysext.service)... Jan 30 13:15:38.758978 systemd[1]: Reloading... Jan 30 13:15:38.820878 zram_generator::config[1227]: No configuration found. Jan 30 13:15:38.873643 ldconfig[1157]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. 
Jan 30 13:15:38.939645 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jan 30 13:15:38.989018 systemd[1]: Reloading finished in 229 ms. Jan 30 13:15:39.024872 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. Jan 30 13:15:39.026527 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. Jan 30 13:15:39.046262 systemd[1]: Starting ensure-sysext.service... Jan 30 13:15:39.049018 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Jan 30 13:15:39.055247 systemd[1]: Reloading requested from client PID 1264 ('systemctl') (unit ensure-sysext.service)... Jan 30 13:15:39.055268 systemd[1]: Reloading... Jan 30 13:15:39.096226 systemd-tmpfiles[1265]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Jan 30 13:15:39.096593 systemd-tmpfiles[1265]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. Jan 30 13:15:39.097772 systemd-tmpfiles[1265]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Jan 30 13:15:39.098608 systemd-tmpfiles[1265]: ACLs are not supported, ignoring. Jan 30 13:15:39.098743 systemd-tmpfiles[1265]: ACLs are not supported, ignoring. Jan 30 13:15:39.102768 systemd-tmpfiles[1265]: Detected autofs mount point /boot during canonicalization of boot. Jan 30 13:15:39.102839 systemd-tmpfiles[1265]: Skipping /boot Jan 30 13:15:39.106885 zram_generator::config[1292]: No configuration found. Jan 30 13:15:39.119120 systemd-tmpfiles[1265]: Detected autofs mount point /boot during canonicalization of boot. 
Jan 30 13:15:39.119250 systemd-tmpfiles[1265]: Skipping /boot Jan 30 13:15:39.225910 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jan 30 13:15:39.275027 systemd[1]: Reloading finished in 219 ms. Jan 30 13:15:39.293508 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. Jan 30 13:15:39.306652 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Jan 30 13:15:39.318106 systemd[1]: Starting audit-rules.service - Load Audit Rules... Jan 30 13:15:39.320728 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... Jan 30 13:15:39.323278 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... Jan 30 13:15:39.329153 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Jan 30 13:15:39.333756 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Jan 30 13:15:39.340940 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... Jan 30 13:15:39.344923 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 30 13:15:39.345135 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jan 30 13:15:39.349138 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jan 30 13:15:39.352920 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jan 30 13:15:39.358932 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jan 30 13:15:39.360221 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. 
Jan 30 13:15:39.367210 systemd[1]: Starting systemd-userdbd.service - User Database Manager... Jan 30 13:15:39.368360 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 30 13:15:39.369781 systemd[1]: modprobe@loop.service: Deactivated successfully. Jan 30 13:15:39.370078 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jan 30 13:15:39.371913 systemd-udevd[1336]: Using default interface naming scheme 'v255'. Jan 30 13:15:39.372060 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Jan 30 13:15:39.376201 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jan 30 13:15:39.376403 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jan 30 13:15:39.378452 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jan 30 13:15:39.378719 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jan 30 13:15:39.389094 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. Jan 30 13:15:39.392670 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 30 13:15:39.393488 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jan 30 13:15:39.402784 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jan 30 13:15:39.403753 augenrules[1366]: No rules Jan 30 13:15:39.406426 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jan 30 13:15:39.411004 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jan 30 13:15:39.412351 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. 
Jan 30 13:15:39.414887 systemd[1]: Starting systemd-update-done.service - Update is Completed... Jan 30 13:15:39.415529 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 30 13:15:39.418555 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Jan 30 13:15:39.421370 systemd[1]: audit-rules.service: Deactivated successfully. Jan 30 13:15:39.422417 systemd[1]: Finished audit-rules.service - Load Audit Rules. Jan 30 13:15:39.425896 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Jan 30 13:15:39.429998 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jan 30 13:15:39.430182 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jan 30 13:15:39.432079 systemd[1]: Started systemd-userdbd.service - User Database Manager. Jan 30 13:15:39.444321 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jan 30 13:15:39.444575 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jan 30 13:15:39.455689 systemd[1]: modprobe@loop.service: Deactivated successfully. Jan 30 13:15:39.455941 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jan 30 13:15:39.467125 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped. Jan 30 13:15:39.468052 systemd[1]: Finished systemd-update-done.service - Update is Completed. Jan 30 13:15:39.475593 systemd[1]: Finished ensure-sysext.service. Jan 30 13:15:39.481110 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 30 13:15:39.483873 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 41 scanned by (udev-worker) (1372) Jan 30 13:15:39.488138 systemd[1]: Starting audit-rules.service - Load Audit Rules... 
Jan 30 13:15:39.490425 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jan 30 13:15:39.491877 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jan 30 13:15:39.498009 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Jan 30 13:15:39.502124 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jan 30 13:15:39.505194 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jan 30 13:15:39.519040 systemd[1]: Starting systemd-networkd.service - Network Configuration... Jan 30 13:15:39.531888 augenrules[1406]: /sbin/augenrules: No change Jan 30 13:15:39.523698 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization... Jan 30 13:15:39.525192 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Jan 30 13:15:39.525228 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 30 13:15:39.527354 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jan 30 13:15:39.527597 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jan 30 13:15:39.528607 systemd-resolved[1334]: Positive Trust Anchors: Jan 30 13:15:39.528622 systemd-resolved[1334]: . 
IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Jan 30 13:15:39.528653 systemd-resolved[1334]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Jan 30 13:15:39.534190 systemd-resolved[1334]: Defaulting to hostname 'linux'. Jan 30 13:15:39.540475 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Jan 30 13:15:39.545449 augenrules[1434]: No rules Jan 30 13:15:39.547889 systemd[1]: audit-rules.service: Deactivated successfully. Jan 30 13:15:39.548301 systemd[1]: Finished audit-rules.service - Load Audit Rules. Jan 30 13:15:39.550075 systemd[1]: modprobe@drm.service: Deactivated successfully. Jan 30 13:15:39.550382 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Jan 30 13:15:39.552171 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jan 30 13:15:39.552453 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jan 30 13:15:39.569618 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. Jan 30 13:15:39.571257 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Jan 30 13:15:39.577872 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input2 Jan 30 13:15:39.584148 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... 
Jan 30 13:15:39.590767 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jan 30 13:15:39.590917 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Jan 30 13:15:39.598929 kernel: ACPI: button: Power Button [PWRF] Jan 30 13:15:39.601925 kernel: i801_smbus 0000:00:1f.3: Enabling SMBus device Jan 30 13:15:39.612115 kernel: i801_smbus 0000:00:1f.3: SMBus using PCI interrupt Jan 30 13:15:39.612667 kernel: i2c i2c-0: 1/1 memory slots populated (from DMI) Jan 30 13:15:39.612933 kernel: i2c i2c-0: Memory type 0x07 not supported yet, not instantiating SPD Jan 30 13:15:39.616981 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input3 Jan 30 13:15:39.605450 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Jan 30 13:15:39.636929 kernel: mousedev: PS/2 mouse device common for all mice Jan 30 13:15:39.642507 systemd-networkd[1416]: lo: Link UP Jan 30 13:15:39.642521 systemd-networkd[1416]: lo: Gained carrier Jan 30 13:15:39.644202 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jan 30 13:15:39.644512 systemd-networkd[1416]: Enumeration completed Jan 30 13:15:39.645175 systemd-networkd[1416]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jan 30 13:15:39.645179 systemd-networkd[1416]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Jan 30 13:15:39.645880 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization. 
Jan 30 13:15:39.646267 systemd-networkd[1416]: eth0: Link UP Jan 30 13:15:39.646271 systemd-networkd[1416]: eth0: Gained carrier Jan 30 13:15:39.646285 systemd-networkd[1416]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jan 30 13:15:39.648423 systemd[1]: Started systemd-networkd.service - Network Configuration. Jan 30 13:15:39.650572 systemd[1]: Reached target network.target - Network. Jan 30 13:15:39.652281 systemd[1]: Reached target time-set.target - System Time Set. Jan 30 13:15:39.662905 systemd-networkd[1416]: eth0: DHCPv4 address 10.0.0.151/16, gateway 10.0.0.1 acquired from 10.0.0.1 Jan 30 13:15:39.663732 systemd-timesyncd[1423]: Network configuration changed, trying to establish connection. Jan 30 13:15:41.328446 systemd-timesyncd[1423]: Contacted time server 10.0.0.1:123 (10.0.0.1). Jan 30 13:15:41.328491 systemd-timesyncd[1423]: Initial clock synchronization to Thu 2025-01-30 13:15:41.328354 UTC. Jan 30 13:15:41.330898 systemd-resolved[1334]: Clock change detected. Flushing caches. Jan 30 13:15:41.334289 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Jan 30 13:15:41.343584 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jan 30 13:15:41.343871 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jan 30 13:15:41.388368 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... 
Jan 30 13:15:41.415311 kernel: kvm_amd: TSC scaling supported Jan 30 13:15:41.415375 kernel: kvm_amd: Nested Virtualization enabled Jan 30 13:15:41.415394 kernel: kvm_amd: Nested Paging enabled Jan 30 13:15:41.415411 kernel: kvm_amd: LBR virtualization supported Jan 30 13:15:41.417184 kernel: kvm_amd: Virtual VMLOAD VMSAVE supported Jan 30 13:15:41.417224 kernel: kvm_amd: Virtual GIF supported Jan 30 13:15:41.435944 kernel: EDAC MC: Ver: 3.0.0 Jan 30 13:15:41.451975 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jan 30 13:15:41.467472 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization. Jan 30 13:15:41.480019 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes... Jan 30 13:15:41.491020 lvm[1465]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Jan 30 13:15:41.525316 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes. Jan 30 13:15:41.526906 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Jan 30 13:15:41.528067 systemd[1]: Reached target sysinit.target - System Initialization. Jan 30 13:15:41.529272 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Jan 30 13:15:41.530546 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Jan 30 13:15:41.532020 systemd[1]: Started logrotate.timer - Daily rotation of log files. Jan 30 13:15:41.533390 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Jan 30 13:15:41.534674 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Jan 30 13:15:41.535956 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). 
Jan 30 13:15:41.535991 systemd[1]: Reached target paths.target - Path Units. Jan 30 13:15:41.536925 systemd[1]: Reached target timers.target - Timer Units. Jan 30 13:15:41.538641 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Jan 30 13:15:41.541342 systemd[1]: Starting docker.socket - Docker Socket for the API... Jan 30 13:15:41.551543 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Jan 30 13:15:41.553960 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes... Jan 30 13:15:41.555567 systemd[1]: Listening on docker.socket - Docker Socket for the API. Jan 30 13:15:41.556817 systemd[1]: Reached target sockets.target - Socket Units. Jan 30 13:15:41.557845 systemd[1]: Reached target basic.target - Basic System. Jan 30 13:15:41.558856 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Jan 30 13:15:41.558902 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Jan 30 13:15:41.560040 systemd[1]: Starting containerd.service - containerd container runtime... Jan 30 13:15:41.562206 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Jan 30 13:15:41.565784 lvm[1470]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Jan 30 13:15:41.567021 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Jan 30 13:15:41.570180 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Jan 30 13:15:41.571491 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Jan 30 13:15:41.575042 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Jan 30 13:15:41.575651 jq[1473]: false Jan 30 13:15:41.583074 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... 
Jan 30 13:15:41.586202 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Jan 30 13:15:41.592140 systemd[1]: Starting systemd-logind.service - User Login Management... Jan 30 13:15:41.594022 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Jan 30 13:15:41.594684 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Jan 30 13:15:41.596485 systemd[1]: Starting update-engine.service - Update Engine... Jan 30 13:15:41.599691 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Jan 30 13:15:41.599827 dbus-daemon[1472]: [system] SELinux support is enabled Jan 30 13:15:41.602424 systemd[1]: Started dbus.service - D-Bus System Message Bus. Jan 30 13:15:41.610160 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes. Jan 30 13:15:41.613390 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Jan 30 13:15:41.613662 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Jan 30 13:15:41.614122 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Jan 30 13:15:41.614374 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. Jan 30 13:15:41.620366 jq[1484]: true Jan 30 13:15:41.621111 systemd[1]: motdgen.service: Deactivated successfully. Jan 30 13:15:41.621976 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. 
Jan 30 13:15:41.623503 update_engine[1482]: I20250130 13:15:41.623414 1482 main.cc:92] Flatcar Update Engine starting Jan 30 13:15:41.625111 update_engine[1482]: I20250130 13:15:41.625056 1482 update_check_scheduler.cc:74] Next update check in 5m42s Jan 30 13:15:41.631178 extend-filesystems[1474]: Found loop3 Jan 30 13:15:41.631178 extend-filesystems[1474]: Found loop4 Jan 30 13:15:41.631178 extend-filesystems[1474]: Found loop5 Jan 30 13:15:41.631178 extend-filesystems[1474]: Found sr0 Jan 30 13:15:41.631178 extend-filesystems[1474]: Found vda Jan 30 13:15:41.631178 extend-filesystems[1474]: Found vda1 Jan 30 13:15:41.631178 extend-filesystems[1474]: Found vda2 Jan 30 13:15:41.631178 extend-filesystems[1474]: Found vda3 Jan 30 13:15:41.631178 extend-filesystems[1474]: Found usr Jan 30 13:15:41.631178 extend-filesystems[1474]: Found vda4 Jan 30 13:15:41.631178 extend-filesystems[1474]: Found vda6 Jan 30 13:15:41.631178 extend-filesystems[1474]: Found vda7 Jan 30 13:15:41.631178 extend-filesystems[1474]: Found vda9 Jan 30 13:15:41.631178 extend-filesystems[1474]: Checking size of /dev/vda9 Jan 30 13:15:41.697685 kernel: EXT4-fs (vda9): resizing filesystem from 553472 to 1864699 blocks Jan 30 13:15:41.697713 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 41 scanned by (udev-worker) (1372) Jan 30 13:15:41.697728 kernel: EXT4-fs (vda9): resized filesystem to 1864699 Jan 30 13:15:41.637713 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Jan 30 13:15:42.073617 extend-filesystems[1474]: Resized partition /dev/vda9 Jan 30 13:15:42.074768 jq[1494]: true Jan 30 13:15:41.637749 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. 
Jan 30 13:15:42.075956 extend-filesystems[1503]: resize2fs 1.47.1 (20-May-2024) Jan 30 13:15:42.075956 extend-filesystems[1503]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required Jan 30 13:15:42.075956 extend-filesystems[1503]: old_desc_blocks = 1, new_desc_blocks = 1 Jan 30 13:15:42.075956 extend-filesystems[1503]: The filesystem on /dev/vda9 is now 1864699 (4k) blocks long. Jan 30 13:15:42.083902 sshd_keygen[1489]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Jan 30 13:15:41.642806 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Jan 30 13:15:42.084252 extend-filesystems[1474]: Resized filesystem in /dev/vda9 Jan 30 13:15:41.642838 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Jan 30 13:15:42.085729 containerd[1502]: time="2025-01-30T13:15:42.075013560Z" level=info msg="starting containerd" revision=9b2ad7760328148397346d10c7b2004271249db4 version=v1.7.23 Jan 30 13:15:41.663547 systemd[1]: Started update-engine.service - Update Engine. Jan 30 13:15:41.682830 (ntainerd)[1502]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Jan 30 13:15:41.686015 systemd-logind[1481]: Watching system buttons on /dev/input/event1 (Power Button) Jan 30 13:15:41.686039 systemd-logind[1481]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard) Jan 30 13:15:41.692727 systemd-logind[1481]: New seat seat0. Jan 30 13:15:41.702185 systemd[1]: Started locksmithd.service - Cluster reboot manager. Jan 30 13:15:41.703668 systemd[1]: Started systemd-logind.service - User Login Management. 
Jan 30 13:15:41.738577 locksmithd[1507]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Jan 30 13:15:41.835322 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Jan 30 13:15:42.077276 systemd[1]: extend-filesystems.service: Deactivated successfully. Jan 30 13:15:42.077499 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Jan 30 13:15:42.105349 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Jan 30 13:15:42.105532 containerd[1502]: time="2025-01-30T13:15:42.105489023Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 Jan 30 13:15:42.107042 containerd[1502]: time="2025-01-30T13:15:42.107001950Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.74-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Jan 30 13:15:42.107042 containerd[1502]: time="2025-01-30T13:15:42.107028540Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 Jan 30 13:15:42.107042 containerd[1502]: time="2025-01-30T13:15:42.107043628Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 Jan 30 13:15:42.107260 containerd[1502]: time="2025-01-30T13:15:42.107234296Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1 Jan 30 13:15:42.107260 containerd[1502]: time="2025-01-30T13:15:42.107255466Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1 Jan 30 13:15:42.107339 containerd[1502]: time="2025-01-30T13:15:42.107322391Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." 
error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1 Jan 30 13:15:42.107361 containerd[1502]: time="2025-01-30T13:15:42.107337840Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 Jan 30 13:15:42.107540 containerd[1502]: time="2025-01-30T13:15:42.107521585Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Jan 30 13:15:42.107569 containerd[1502]: time="2025-01-30T13:15:42.107538997Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 Jan 30 13:15:42.107569 containerd[1502]: time="2025-01-30T13:15:42.107551851Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1 Jan 30 13:15:42.107569 containerd[1502]: time="2025-01-30T13:15:42.107561910Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 Jan 30 13:15:42.107676 containerd[1502]: time="2025-01-30T13:15:42.107658752Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 Jan 30 13:15:42.107932 containerd[1502]: time="2025-01-30T13:15:42.107911285Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 Jan 30 13:15:42.108049 containerd[1502]: time="2025-01-30T13:15:42.108032192Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." 
error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Jan 30 13:15:42.108080 containerd[1502]: time="2025-01-30T13:15:42.108047220Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 Jan 30 13:15:42.108175 containerd[1502]: time="2025-01-30T13:15:42.108156285Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1 Jan 30 13:15:42.108231 containerd[1502]: time="2025-01-30T13:15:42.108215907Z" level=info msg="metadata content store policy set" policy=shared Jan 30 13:15:42.117081 systemd[1]: Starting issuegen.service - Generate /run/issue... Jan 30 13:15:42.119016 systemd[1]: Started sshd@0-10.0.0.151:22-10.0.0.1:58480.service - OpenSSH per-connection server daemon (10.0.0.1:58480). Jan 30 13:15:42.129079 systemd[1]: issuegen.service: Deactivated successfully. Jan 30 13:15:42.129298 systemd[1]: Finished issuegen.service - Generate /run/issue. Jan 30 13:15:42.132846 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Jan 30 13:15:42.148419 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Jan 30 13:15:42.158235 systemd[1]: Started getty@tty1.service - Getty on tty1. Jan 30 13:15:42.160934 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0. Jan 30 13:15:42.162649 systemd[1]: Reached target getty.target - Login Prompts. Jan 30 13:15:42.166511 containerd[1502]: time="2025-01-30T13:15:42.166463562Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 Jan 30 13:15:42.166571 containerd[1502]: time="2025-01-30T13:15:42.166530437Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." 
type=io.containerd.differ.v1 Jan 30 13:15:42.166571 containerd[1502]: time="2025-01-30T13:15:42.166546868Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1 Jan 30 13:15:42.166571 containerd[1502]: time="2025-01-30T13:15:42.166561195Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1 Jan 30 13:15:42.166658 containerd[1502]: time="2025-01-30T13:15:42.166574530Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 Jan 30 13:15:42.166759 containerd[1502]: time="2025-01-30T13:15:42.166734500Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 Jan 30 13:15:42.166990 containerd[1502]: time="2025-01-30T13:15:42.166973057Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 Jan 30 13:15:42.167164 containerd[1502]: time="2025-01-30T13:15:42.167139519Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2 Jan 30 13:15:42.167164 containerd[1502]: time="2025-01-30T13:15:42.167161030Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1 Jan 30 13:15:42.167216 containerd[1502]: time="2025-01-30T13:15:42.167174245Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1 Jan 30 13:15:42.167216 containerd[1502]: time="2025-01-30T13:15:42.167186357Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 Jan 30 13:15:42.167251 containerd[1502]: time="2025-01-30T13:15:42.167226102Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." 
type=io.containerd.service.v1 Jan 30 13:15:42.167251 containerd[1502]: time="2025-01-30T13:15:42.167237704Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 Jan 30 13:15:42.167293 containerd[1502]: time="2025-01-30T13:15:42.167249977Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 Jan 30 13:15:42.167293 containerd[1502]: time="2025-01-30T13:15:42.167266197Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 Jan 30 13:15:42.167293 containerd[1502]: time="2025-01-30T13:15:42.167277979Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 Jan 30 13:15:42.167293 containerd[1502]: time="2025-01-30T13:15:42.167289290Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 Jan 30 13:15:42.167369 containerd[1502]: time="2025-01-30T13:15:42.167300081Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 Jan 30 13:15:42.167369 containerd[1502]: time="2025-01-30T13:15:42.167318054Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 Jan 30 13:15:42.167369 containerd[1502]: time="2025-01-30T13:15:42.167330327Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 Jan 30 13:15:42.167369 containerd[1502]: time="2025-01-30T13:15:42.167341458Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 Jan 30 13:15:42.167369 containerd[1502]: time="2025-01-30T13:15:42.167352689Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." 
type=io.containerd.grpc.v1 Jan 30 13:15:42.167369 containerd[1502]: time="2025-01-30T13:15:42.167364071Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 Jan 30 13:15:42.167485 containerd[1502]: time="2025-01-30T13:15:42.167376334Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 Jan 30 13:15:42.167485 containerd[1502]: time="2025-01-30T13:15:42.167386793Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 Jan 30 13:15:42.167485 containerd[1502]: time="2025-01-30T13:15:42.167397864Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 Jan 30 13:15:42.167485 containerd[1502]: time="2025-01-30T13:15:42.167410287Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1 Jan 30 13:15:42.167485 containerd[1502]: time="2025-01-30T13:15:42.167424414Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1 Jan 30 13:15:42.167485 containerd[1502]: time="2025-01-30T13:15:42.167435605Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 Jan 30 13:15:42.167485 containerd[1502]: time="2025-01-30T13:15:42.167446104Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1 Jan 30 13:15:42.167485 containerd[1502]: time="2025-01-30T13:15:42.167456825Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 Jan 30 13:15:42.167485 containerd[1502]: time="2025-01-30T13:15:42.167469508Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1 Jan 30 13:15:42.167485 containerd[1502]: time="2025-01-30T13:15:42.167487783Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." 
type=io.containerd.grpc.v1 Jan 30 13:15:42.167651 containerd[1502]: time="2025-01-30T13:15:42.167499595Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 Jan 30 13:15:42.167651 containerd[1502]: time="2025-01-30T13:15:42.167514994Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Jan 30 13:15:42.167651 containerd[1502]: time="2025-01-30T13:15:42.167560609Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 Jan 30 13:15:42.167651 containerd[1502]: time="2025-01-30T13:15:42.167577801Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1 Jan 30 13:15:42.167651 containerd[1502]: time="2025-01-30T13:15:42.167588081Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Jan 30 13:15:42.167651 containerd[1502]: time="2025-01-30T13:15:42.167599202Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1 Jan 30 13:15:42.167651 containerd[1502]: time="2025-01-30T13:15:42.167608800Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 Jan 30 13:15:42.167651 containerd[1502]: time="2025-01-30T13:15:42.167624559Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1 Jan 30 13:15:42.167651 containerd[1502]: time="2025-01-30T13:15:42.167635780Z" level=info msg="NRI interface is disabled by configuration." Jan 30 13:15:42.167651 containerd[1502]: time="2025-01-30T13:15:42.167646570Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." 
type=io.containerd.grpc.v1 Jan 30 13:15:42.168684 containerd[1502]: time="2025-01-30T13:15:42.168210127Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false 
UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Jan 30 13:15:42.168684 containerd[1502]: time="2025-01-30T13:15:42.168659099Z" level=info msg="Connect containerd service" Jan 30 13:15:42.168842 containerd[1502]: time="2025-01-30T13:15:42.168697601Z" level=info msg="using legacy CRI server" Jan 30 13:15:42.168842 containerd[1502]: time="2025-01-30T13:15:42.168713852Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Jan 30 13:15:42.168983 containerd[1502]: time="2025-01-30T13:15:42.168955235Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Jan 30 13:15:42.169753 containerd[1502]: time="2025-01-30T13:15:42.169726060Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Jan 30 13:15:42.170157 containerd[1502]: time="2025-01-30T13:15:42.169891531Z" level=info msg="Start subscribing containerd event" Jan 30 13:15:42.170157 containerd[1502]: time="2025-01-30T13:15:42.169937076Z" level=info msg="Start recovering state" Jan 30 13:15:42.170157 containerd[1502]: time="2025-01-30T13:15:42.169990236Z" level=info msg="Start event monitor" Jan 30 13:15:42.170157 containerd[1502]: time="2025-01-30T13:15:42.170016775Z" level=info msg="Start 
snapshots syncer" Jan 30 13:15:42.170157 containerd[1502]: time="2025-01-30T13:15:42.170024991Z" level=info msg="Start cni network conf syncer for default" Jan 30 13:15:42.170157 containerd[1502]: time="2025-01-30T13:15:42.170032084Z" level=info msg="Start streaming server" Jan 30 13:15:42.170157 containerd[1502]: time="2025-01-30T13:15:42.170098639Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Jan 30 13:15:42.170296 containerd[1502]: time="2025-01-30T13:15:42.170167127Z" level=info msg=serving... address=/run/containerd/containerd.sock Jan 30 13:15:42.170296 containerd[1502]: time="2025-01-30T13:15:42.170229755Z" level=info msg="containerd successfully booted in 0.224029s" Jan 30 13:15:42.170333 bash[1520]: Updated "/home/core/.ssh/authorized_keys" Jan 30 13:15:42.170352 systemd[1]: Started containerd.service - containerd container runtime. Jan 30 13:15:42.172643 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Jan 30 13:15:42.176817 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met. Jan 30 13:15:42.192606 sshd[1547]: Accepted publickey for core from 10.0.0.1 port 58480 ssh2: RSA SHA256:fyLzhNRHt4oTAA54LJSro7hnXQ5Emhk7dfCTI/IWSjY Jan 30 13:15:42.194325 sshd-session[1547]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 13:15:42.203036 systemd-logind[1481]: New session 1 of user core. Jan 30 13:15:42.204374 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Jan 30 13:15:42.217087 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Jan 30 13:15:42.229078 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Jan 30 13:15:42.239076 systemd[1]: Starting user@500.service - User Manager for UID 500... 
Jan 30 13:15:42.242845 (systemd)[1560]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Jan 30 13:15:42.355464 systemd[1560]: Queued start job for default target default.target. Jan 30 13:15:42.366120 systemd[1560]: Created slice app.slice - User Application Slice. Jan 30 13:15:42.366145 systemd[1560]: Reached target paths.target - Paths. Jan 30 13:15:42.366159 systemd[1560]: Reached target timers.target - Timers. Jan 30 13:15:42.367671 systemd[1560]: Starting dbus.socket - D-Bus User Message Bus Socket... Jan 30 13:15:42.379578 systemd[1560]: Listening on dbus.socket - D-Bus User Message Bus Socket. Jan 30 13:15:42.379698 systemd[1560]: Reached target sockets.target - Sockets. Jan 30 13:15:42.379717 systemd[1560]: Reached target basic.target - Basic System. Jan 30 13:15:42.379753 systemd[1560]: Reached target default.target - Main User Target. Jan 30 13:15:42.379785 systemd[1560]: Startup finished in 129ms. Jan 30 13:15:42.380160 systemd[1]: Started user@500.service - User Manager for UID 500. Jan 30 13:15:42.382681 systemd[1]: Started session-1.scope - Session 1 of User core. Jan 30 13:15:42.443405 systemd[1]: Started sshd@1-10.0.0.151:22-10.0.0.1:58490.service - OpenSSH per-connection server daemon (10.0.0.1:58490). Jan 30 13:15:42.486833 sshd[1571]: Accepted publickey for core from 10.0.0.1 port 58490 ssh2: RSA SHA256:fyLzhNRHt4oTAA54LJSro7hnXQ5Emhk7dfCTI/IWSjY Jan 30 13:15:42.488338 sshd-session[1571]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 13:15:42.492441 systemd-logind[1481]: New session 2 of user core. Jan 30 13:15:42.507999 systemd[1]: Started session-2.scope - Session 2 of User core. Jan 30 13:15:42.561586 sshd[1573]: Connection closed by 10.0.0.1 port 58490 Jan 30 13:15:42.561945 sshd-session[1571]: pam_unix(sshd:session): session closed for user core Jan 30 13:15:42.572742 systemd[1]: sshd@1-10.0.0.151:22-10.0.0.1:58490.service: Deactivated successfully. 
Jan 30 13:15:42.574634 systemd[1]: session-2.scope: Deactivated successfully. Jan 30 13:15:42.576125 systemd-logind[1481]: Session 2 logged out. Waiting for processes to exit. Jan 30 13:15:42.586984 systemd-networkd[1416]: eth0: Gained IPv6LL Jan 30 13:15:42.590187 systemd[1]: Started sshd@2-10.0.0.151:22-10.0.0.1:58506.service - OpenSSH per-connection server daemon (10.0.0.1:58506). Jan 30 13:15:42.592421 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Jan 30 13:15:42.595266 systemd[1]: Reached target network-online.target - Network is Online. Jan 30 13:15:42.598273 systemd[1]: Starting coreos-metadata.service - QEMU metadata agent... Jan 30 13:15:42.601129 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 30 13:15:42.605003 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Jan 30 13:15:42.612036 systemd-logind[1481]: Removed session 2. Jan 30 13:15:42.627744 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Jan 30 13:15:42.629571 systemd[1]: coreos-metadata.service: Deactivated successfully. Jan 30 13:15:42.629820 systemd[1]: Finished coreos-metadata.service - QEMU metadata agent. Jan 30 13:15:42.632265 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Jan 30 13:15:42.632957 sshd[1578]: Accepted publickey for core from 10.0.0.1 port 58506 ssh2: RSA SHA256:fyLzhNRHt4oTAA54LJSro7hnXQ5Emhk7dfCTI/IWSjY Jan 30 13:15:42.634597 sshd-session[1578]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 13:15:42.638914 systemd-logind[1481]: New session 3 of user core. Jan 30 13:15:42.648002 systemd[1]: Started session-3.scope - Session 3 of User core. 
Jan 30 13:15:42.704554 sshd[1597]: Connection closed by 10.0.0.1 port 58506 Jan 30 13:15:42.704825 sshd-session[1578]: pam_unix(sshd:session): session closed for user core Jan 30 13:15:42.707382 systemd[1]: sshd@2-10.0.0.151:22-10.0.0.1:58506.service: Deactivated successfully. Jan 30 13:15:42.709237 systemd[1]: session-3.scope: Deactivated successfully. Jan 30 13:15:42.710554 systemd-logind[1481]: Session 3 logged out. Waiting for processes to exit. Jan 30 13:15:42.711582 systemd-logind[1481]: Removed session 3. Jan 30 13:15:43.291222 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 30 13:15:43.292838 systemd[1]: Reached target multi-user.target - Multi-User System. Jan 30 13:15:43.294161 systemd[1]: Startup finished in 697ms (kernel) + 4.990s (initrd) + 3.976s (userspace) = 9.664s. Jan 30 13:15:43.298016 (kubelet)[1606]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 30 13:15:43.304631 agetty[1554]: failed to open credentials directory Jan 30 13:15:43.305104 agetty[1553]: failed to open credentials directory Jan 30 13:15:43.686246 kubelet[1606]: E0130 13:15:43.686073 1606 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 30 13:15:43.690474 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 30 13:15:43.690676 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 30 13:15:52.715695 systemd[1]: Started sshd@3-10.0.0.151:22-10.0.0.1:36230.service - OpenSSH per-connection server daemon (10.0.0.1:36230). 
Jan 30 13:15:52.752899 sshd[1620]: Accepted publickey for core from 10.0.0.1 port 36230 ssh2: RSA SHA256:fyLzhNRHt4oTAA54LJSro7hnXQ5Emhk7dfCTI/IWSjY Jan 30 13:15:52.754321 sshd-session[1620]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 13:15:52.757991 systemd-logind[1481]: New session 4 of user core. Jan 30 13:15:52.767985 systemd[1]: Started session-4.scope - Session 4 of User core. Jan 30 13:15:52.821950 sshd[1622]: Connection closed by 10.0.0.1 port 36230 Jan 30 13:15:52.822381 sshd-session[1620]: pam_unix(sshd:session): session closed for user core Jan 30 13:15:52.834711 systemd[1]: sshd@3-10.0.0.151:22-10.0.0.1:36230.service: Deactivated successfully. Jan 30 13:15:52.836490 systemd[1]: session-4.scope: Deactivated successfully. Jan 30 13:15:52.837976 systemd-logind[1481]: Session 4 logged out. Waiting for processes to exit. Jan 30 13:15:52.839298 systemd[1]: Started sshd@4-10.0.0.151:22-10.0.0.1:36244.service - OpenSSH per-connection server daemon (10.0.0.1:36244). Jan 30 13:15:52.839954 systemd-logind[1481]: Removed session 4. Jan 30 13:15:52.877160 sshd[1627]: Accepted publickey for core from 10.0.0.1 port 36244 ssh2: RSA SHA256:fyLzhNRHt4oTAA54LJSro7hnXQ5Emhk7dfCTI/IWSjY Jan 30 13:15:52.878766 sshd-session[1627]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 13:15:52.882553 systemd-logind[1481]: New session 5 of user core. Jan 30 13:15:52.892000 systemd[1]: Started session-5.scope - Session 5 of User core. Jan 30 13:15:52.942759 sshd[1629]: Connection closed by 10.0.0.1 port 36244 Jan 30 13:15:52.943236 sshd-session[1627]: pam_unix(sshd:session): session closed for user core Jan 30 13:15:52.955135 systemd[1]: sshd@4-10.0.0.151:22-10.0.0.1:36244.service: Deactivated successfully. Jan 30 13:15:52.957142 systemd[1]: session-5.scope: Deactivated successfully. Jan 30 13:15:52.958461 systemd-logind[1481]: Session 5 logged out. Waiting for processes to exit. 
Jan 30 13:15:52.959761 systemd[1]: Started sshd@5-10.0.0.151:22-10.0.0.1:36252.service - OpenSSH per-connection server daemon (10.0.0.1:36252). Jan 30 13:15:52.960718 systemd-logind[1481]: Removed session 5. Jan 30 13:15:52.997395 sshd[1634]: Accepted publickey for core from 10.0.0.1 port 36252 ssh2: RSA SHA256:fyLzhNRHt4oTAA54LJSro7hnXQ5Emhk7dfCTI/IWSjY Jan 30 13:15:52.999021 sshd-session[1634]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 13:15:53.003238 systemd-logind[1481]: New session 6 of user core. Jan 30 13:15:53.018024 systemd[1]: Started session-6.scope - Session 6 of User core. Jan 30 13:15:53.071440 sshd[1636]: Connection closed by 10.0.0.1 port 36252 Jan 30 13:15:53.071860 sshd-session[1634]: pam_unix(sshd:session): session closed for user core Jan 30 13:15:53.084490 systemd[1]: sshd@5-10.0.0.151:22-10.0.0.1:36252.service: Deactivated successfully. Jan 30 13:15:53.085982 systemd[1]: session-6.scope: Deactivated successfully. Jan 30 13:15:53.087479 systemd-logind[1481]: Session 6 logged out. Waiting for processes to exit. Jan 30 13:15:53.088869 systemd[1]: Started sshd@6-10.0.0.151:22-10.0.0.1:36258.service - OpenSSH per-connection server daemon (10.0.0.1:36258). Jan 30 13:15:53.089616 systemd-logind[1481]: Removed session 6. Jan 30 13:15:53.126419 sshd[1641]: Accepted publickey for core from 10.0.0.1 port 36258 ssh2: RSA SHA256:fyLzhNRHt4oTAA54LJSro7hnXQ5Emhk7dfCTI/IWSjY Jan 30 13:15:53.127941 sshd-session[1641]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 13:15:53.131813 systemd-logind[1481]: New session 7 of user core. Jan 30 13:15:53.145000 systemd[1]: Started session-7.scope - Session 7 of User core. 
Jan 30 13:15:53.382938 sudo[1644]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Jan 30 13:15:53.383271 sudo[1644]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 30 13:15:53.399735 sudo[1644]: pam_unix(sudo:session): session closed for user root Jan 30 13:15:53.401103 sshd[1643]: Connection closed by 10.0.0.1 port 36258 Jan 30 13:15:53.401543 sshd-session[1641]: pam_unix(sshd:session): session closed for user core Jan 30 13:15:53.414480 systemd[1]: sshd@6-10.0.0.151:22-10.0.0.1:36258.service: Deactivated successfully. Jan 30 13:15:53.415993 systemd[1]: session-7.scope: Deactivated successfully. Jan 30 13:15:53.417217 systemd-logind[1481]: Session 7 logged out. Waiting for processes to exit. Jan 30 13:15:53.418448 systemd[1]: Started sshd@7-10.0.0.151:22-10.0.0.1:36266.service - OpenSSH per-connection server daemon (10.0.0.1:36266). Jan 30 13:15:53.419150 systemd-logind[1481]: Removed session 7. Jan 30 13:15:53.455050 sshd[1649]: Accepted publickey for core from 10.0.0.1 port 36266 ssh2: RSA SHA256:fyLzhNRHt4oTAA54LJSro7hnXQ5Emhk7dfCTI/IWSjY Jan 30 13:15:53.456381 sshd-session[1649]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 13:15:53.460097 systemd-logind[1481]: New session 8 of user core. Jan 30 13:15:53.471004 systemd[1]: Started session-8.scope - Session 8 of User core. 
Jan 30 13:15:53.522925 sudo[1653]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Jan 30 13:15:53.523244 sudo[1653]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 30 13:15:53.526495 sudo[1653]: pam_unix(sudo:session): session closed for user root Jan 30 13:15:53.531966 sudo[1652]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules Jan 30 13:15:53.532278 sudo[1652]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 30 13:15:53.551190 systemd[1]: Starting audit-rules.service - Load Audit Rules... Jan 30 13:15:53.577426 augenrules[1675]: No rules Jan 30 13:15:53.578236 systemd[1]: audit-rules.service: Deactivated successfully. Jan 30 13:15:53.578444 systemd[1]: Finished audit-rules.service - Load Audit Rules. Jan 30 13:15:53.579518 sudo[1652]: pam_unix(sudo:session): session closed for user root Jan 30 13:15:53.581000 sshd[1651]: Connection closed by 10.0.0.1 port 36266 Jan 30 13:15:53.581354 sshd-session[1649]: pam_unix(sshd:session): session closed for user core Jan 30 13:15:53.598501 systemd[1]: sshd@7-10.0.0.151:22-10.0.0.1:36266.service: Deactivated successfully. Jan 30 13:15:53.600032 systemd[1]: session-8.scope: Deactivated successfully. Jan 30 13:15:53.601306 systemd-logind[1481]: Session 8 logged out. Waiting for processes to exit. Jan 30 13:15:53.606134 systemd[1]: Started sshd@8-10.0.0.151:22-10.0.0.1:36274.service - OpenSSH per-connection server daemon (10.0.0.1:36274). Jan 30 13:15:53.606996 systemd-logind[1481]: Removed session 8. Jan 30 13:15:53.638198 sshd[1683]: Accepted publickey for core from 10.0.0.1 port 36274 ssh2: RSA SHA256:fyLzhNRHt4oTAA54LJSro7hnXQ5Emhk7dfCTI/IWSjY Jan 30 13:15:53.639383 sshd-session[1683]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 13:15:53.642849 systemd-logind[1481]: New session 9 of user core. 
Jan 30 13:15:53.651985 systemd[1]: Started session-9.scope - Session 9 of User core. Jan 30 13:15:53.703111 sudo[1686]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Jan 30 13:15:53.703429 sudo[1686]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 30 13:15:53.704159 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Jan 30 13:15:53.717110 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 30 13:15:53.723103 systemd[1]: Starting coreos-metadata.service - QEMU metadata agent... Jan 30 13:15:53.745406 systemd[1]: coreos-metadata.service: Deactivated successfully. Jan 30 13:15:53.745626 systemd[1]: Finished coreos-metadata.service - QEMU metadata agent. Jan 30 13:15:53.877793 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 30 13:15:53.882351 (kubelet)[1715]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 30 13:15:53.928997 kubelet[1715]: E0130 13:15:53.928474 1715 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 30 13:15:53.934920 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 30 13:15:53.935128 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 30 13:15:54.182340 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jan 30 13:15:54.192171 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 30 13:15:54.219260 systemd[1]: Reloading requested from client PID 1745 ('systemctl') (unit session-9.scope)... Jan 30 13:15:54.219276 systemd[1]: Reloading... 
Jan 30 13:15:54.307037 zram_generator::config[1789]: No configuration found. Jan 30 13:15:55.538023 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jan 30 13:15:55.618617 systemd[1]: Reloading finished in 1398 ms. Jan 30 13:15:55.672528 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 30 13:15:55.675935 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Jan 30 13:15:55.677783 systemd[1]: kubelet.service: Deactivated successfully. Jan 30 13:15:55.678064 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jan 30 13:15:55.687348 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 30 13:15:55.844734 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 30 13:15:55.856216 (kubelet)[1833]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jan 30 13:15:55.892001 kubelet[1833]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 30 13:15:55.892001 kubelet[1833]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Jan 30 13:15:55.892001 kubelet[1833]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Jan 30 13:15:55.892375 kubelet[1833]: I0130 13:15:55.892064 1833 server.go:215] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jan 30 13:15:56.187496 kubelet[1833]: I0130 13:15:56.187322 1833 server.go:520] "Kubelet version" kubeletVersion="v1.32.0" Jan 30 13:15:56.187496 kubelet[1833]: I0130 13:15:56.187367 1833 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jan 30 13:15:56.187730 kubelet[1833]: I0130 13:15:56.187694 1833 server.go:954] "Client rotation is on, will bootstrap in background" Jan 30 13:15:56.234238 kubelet[1833]: I0130 13:15:56.234193 1833 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jan 30 13:15:56.254412 kubelet[1833]: E0130 13:15:56.254339 1833 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Jan 30 13:15:56.254412 kubelet[1833]: I0130 13:15:56.254396 1833 server.go:1421] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." Jan 30 13:15:56.262167 kubelet[1833]: I0130 13:15:56.262102 1833 server.go:772] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Jan 30 13:15:56.262517 kubelet[1833]: I0130 13:15:56.262459 1833 container_manager_linux.go:268] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jan 30 13:15:56.262785 kubelet[1833]: I0130 13:15:56.262505 1833 container_manager_linux.go:273] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"10.0.0.151","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Jan 30 13:15:56.262785 kubelet[1833]: I0130 13:15:56.262771 1833 topology_manager.go:138] "Creating topology manager with none policy" 
Jan 30 13:15:56.262785 kubelet[1833]: I0130 13:15:56.262786 1833 container_manager_linux.go:304] "Creating device plugin manager" Jan 30 13:15:56.264638 kubelet[1833]: I0130 13:15:56.264523 1833 state_mem.go:36] "Initialized new in-memory state store" Jan 30 13:15:56.269458 kubelet[1833]: I0130 13:15:56.269389 1833 kubelet.go:446] "Attempting to sync node with API server" Jan 30 13:15:56.269458 kubelet[1833]: I0130 13:15:56.269431 1833 kubelet.go:341] "Adding static pod path" path="/etc/kubernetes/manifests" Jan 30 13:15:56.269458 kubelet[1833]: I0130 13:15:56.269462 1833 kubelet.go:352] "Adding apiserver pod source" Jan 30 13:15:56.269458 kubelet[1833]: I0130 13:15:56.269478 1833 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jan 30 13:15:56.269720 kubelet[1833]: E0130 13:15:56.269627 1833 file.go:98] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 30 13:15:56.269720 kubelet[1833]: E0130 13:15:56.269708 1833 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 30 13:15:56.274540 kubelet[1833]: I0130 13:15:56.274481 1833 kuberuntime_manager.go:269] "Container runtime initialized" containerRuntime="containerd" version="v1.7.23" apiVersion="v1" Jan 30 13:15:56.275461 kubelet[1833]: I0130 13:15:56.275347 1833 kubelet.go:890] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Jan 30 13:15:56.275566 kubelet[1833]: W0130 13:15:56.275530 1833 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. 
Jan 30 13:15:56.279650 kubelet[1833]: I0130 13:15:56.279591 1833 watchdog_linux.go:99] "Systemd watchdog is not enabled"
Jan 30 13:15:56.279650 kubelet[1833]: I0130 13:15:56.279653 1833 server.go:1287] "Started kubelet"
Jan 30 13:15:56.283399 kubelet[1833]: I0130 13:15:56.280056 1833 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
Jan 30 13:15:56.283399 kubelet[1833]: I0130 13:15:56.280598 1833 server.go:243] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Jan 30 13:15:56.283399 kubelet[1833]: I0130 13:15:56.280656 1833 server.go:169] "Starting to listen" address="0.0.0.0" port=10250
Jan 30 13:15:56.283399 kubelet[1833]: I0130 13:15:56.281529 1833 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Jan 30 13:15:56.283399 kubelet[1833]: I0130 13:15:56.281721 1833 volume_manager.go:297] "Starting Kubelet Volume Manager"
Jan 30 13:15:56.283399 kubelet[1833]: I0130 13:15:56.281818 1833 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key"
Jan 30 13:15:56.283399 kubelet[1833]: I0130 13:15:56.283295 1833 server.go:490] "Adding debug handlers to kubelet server"
Jan 30 13:15:56.286098 kubelet[1833]: I0130 13:15:56.285592 1833 desired_state_of_world_populator.go:149] "Desired state populator starts to run"
Jan 30 13:15:56.286098 kubelet[1833]: I0130 13:15:56.285741 1833 reconciler.go:26] "Reconciler: start to sync state"
Jan 30 13:15:56.291958 kubelet[1833]: W0130 13:15:56.286511 1833 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:anonymous" cannot list resource "services" in API group "" at the cluster scope
Jan 30 13:15:56.291958 kubelet[1833]: E0130 13:15:56.286577 1833 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User \"system:anonymous\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError"
Jan 30 13:15:56.291958 kubelet[1833]: E0130 13:15:56.287844 1833 kubelet.go:1561] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem"
Jan 30 13:15:56.291958 kubelet[1833]: E0130 13:15:56.288158 1833 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"10.0.0.151\" not found"
Jan 30 13:15:56.291958 kubelet[1833]: I0130 13:15:56.288732 1833 factory.go:221] Registration of the systemd container factory successfully
Jan 30 13:15:56.291958 kubelet[1833]: I0130 13:15:56.288839 1833 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory
Jan 30 13:15:56.291958 kubelet[1833]: W0130 13:15:56.289888 1833 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes "10.0.0.151" is forbidden: User "system:anonymous" cannot list resource "nodes" in API group "" at the cluster scope
Jan 30 13:15:56.291958 kubelet[1833]: E0130 13:15:56.289933 1833 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes \"10.0.0.151\" is forbidden: User \"system:anonymous\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError"
Jan 30 13:15:56.293047 kubelet[1833]: I0130 13:15:56.292820 1833 factory.go:221] Registration of the containerd container factory successfully
Jan 30 13:15:56.312668 kubelet[1833]: I0130 13:15:56.312086 1833 cpu_manager.go:221] "Starting CPU manager" policy="none"
Jan 30 13:15:56.312668 kubelet[1833]: I0130 13:15:56.312121 1833 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s"
Jan 30 13:15:56.312668 kubelet[1833]: I0130 13:15:56.312154 1833 state_mem.go:36] "Initialized new in-memory state store"
Jan 30 13:15:56.317065 kubelet[1833]: E0130 13:15:56.313123 1833 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{10.0.0.151.181f7ac2565f1eff default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:10.0.0.151,UID:10.0.0.151,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:10.0.0.151,},FirstTimestamp:2025-01-30 13:15:56.279619327 +0000 UTC m=+0.419471961,LastTimestamp:2025-01-30 13:15:56.279619327 +0000 UTC m=+0.419471961,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:10.0.0.151,}"
Jan 30 13:15:56.319632 kubelet[1833]: W0130 13:15:56.319583 1833 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:anonymous" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
Jan 30 13:15:56.319929 kubelet[1833]: E0130 13:15:56.319638 1833 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:anonymous\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
Jan 30 13:15:56.319929 kubelet[1833]: E0130 13:15:56.319775 1833 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{10.0.0.151.181f7ac256dc65af default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:10.0.0.151,UID:10.0.0.151,APIVersion:,ResourceVersion:,FieldPath:,},Reason:InvalidDiskCapacity,Message:invalid capacity 0 on image filesystem,Source:EventSource{Component:kubelet,Host:10.0.0.151,},FirstTimestamp:2025-01-30 13:15:56.287829423 +0000 UTC m=+0.427682058,LastTimestamp:2025-01-30 13:15:56.287829423 +0000 UTC m=+0.427682058,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:10.0.0.151,}"
Jan 30 13:15:56.320505 kubelet[1833]: E0130 13:15:56.320464 1833 controller.go:145] "Failed to ensure lease exists, will retry" err="leases.coordination.k8s.io \"10.0.0.151\" is forbidden: User \"system:anonymous\" cannot get resource \"leases\" in API group \"coordination.k8s.io\" in the namespace \"kube-node-lease\"" interval="200ms"
Jan 30 13:15:56.336826 kubelet[1833]: E0130 13:15:56.336179 1833 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{10.0.0.151.181f7ac25836c0a2 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:10.0.0.151,UID:10.0.0.151,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientMemory,Message:Node 10.0.0.151 status is now: NodeHasSufficientMemory,Source:EventSource{Component:kubelet,Host:10.0.0.151,},FirstTimestamp:2025-01-30 13:15:56.310528162 +0000 UTC m=+0.450380796,LastTimestamp:2025-01-30 13:15:56.310528162 +0000 UTC m=+0.450380796,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:10.0.0.151,}"
Jan 30 13:15:56.351898 kubelet[1833]: E0130 13:15:56.351741 1833 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{10.0.0.151.181f7ac25836da44 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:10.0.0.151,UID:10.0.0.151,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasNoDiskPressure,Message:Node 10.0.0.151 status is now: NodeHasNoDiskPressure,Source:EventSource{Component:kubelet,Host:10.0.0.151,},FirstTimestamp:2025-01-30 13:15:56.310534724 +0000 UTC m=+0.450387358,LastTimestamp:2025-01-30 13:15:56.310534724 +0000 UTC m=+0.450387358,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:10.0.0.151,}"
Jan 30 13:15:56.388702 kubelet[1833]: E0130 13:15:56.388631 1833 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"10.0.0.151\" not found"
Jan 30 13:15:56.488938 kubelet[1833]: E0130 13:15:56.488747 1833 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"10.0.0.151\" not found"
Jan 30 13:15:56.589914 kubelet[1833]: E0130 13:15:56.589819 1833 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"10.0.0.151\" not found"
Jan 30 13:15:56.686106 kubelet[1833]: E0130 13:15:56.686031 1833 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"10.0.0.151\" not found" node="10.0.0.151"
Jan 30 13:15:56.690427 kubelet[1833]: E0130 13:15:56.690357 1833 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"10.0.0.151\" not found"
Jan 30 13:15:56.790624 kubelet[1833]: E0130 13:15:56.790454 1833 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"10.0.0.151\" not found"
Jan 30 13:15:56.823772 kubelet[1833]: I0130 13:15:56.823720 1833 policy_none.go:49] "None policy: Start"
Jan 30 13:15:56.823772 kubelet[1833]: I0130 13:15:56.823763 1833 memory_manager.go:186] "Starting memorymanager" policy="None"
Jan 30 13:15:56.823772 kubelet[1833]: I0130 13:15:56.823777 1833 state_mem.go:35] "Initializing new in-memory state store"
Jan 30 13:15:56.832858 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice.
Jan 30 13:15:56.846605 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice.
Jan 30 13:15:56.850180 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice.
Jan 30 13:15:56.857088 kubelet[1833]: I0130 13:15:56.857021 1833 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4"
Jan 30 13:15:56.858013 kubelet[1833]: I0130 13:15:56.857993 1833 manager.go:519] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found"
Jan 30 13:15:56.858641 kubelet[1833]: I0130 13:15:56.858262 1833 eviction_manager.go:189] "Eviction manager: starting control loop"
Jan 30 13:15:56.858641 kubelet[1833]: I0130 13:15:56.858277 1833 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s"
Jan 30 13:15:56.858641 kubelet[1833]: I0130 13:15:56.858544 1833 plugin_manager.go:118] "Starting Kubelet Plugin Manager"
Jan 30 13:15:56.858741 kubelet[1833]: I0130 13:15:56.858707 1833 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6"
Jan 30 13:15:56.858765 kubelet[1833]: I0130 13:15:56.858756 1833 status_manager.go:227] "Starting to sync pod status with apiserver"
Jan 30 13:15:56.858799 kubelet[1833]: I0130 13:15:56.858782 1833 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started."
Jan 30 13:15:56.858799 kubelet[1833]: I0130 13:15:56.858793 1833 kubelet.go:2388] "Starting kubelet main sync loop"
Jan 30 13:15:56.859019 kubelet[1833]: E0130 13:15:56.858987 1833 kubelet.go:2412] "Skipping pod synchronization" err="PLEG is not healthy: pleg has yet to be successful"
Jan 30 13:15:56.860156 kubelet[1833]: E0130 13:15:56.860124 1833 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime"
Jan 30 13:15:56.860228 kubelet[1833]: E0130 13:15:56.860200 1833 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"10.0.0.151\" not found"
Jan 30 13:15:56.960051 kubelet[1833]: I0130 13:15:56.960020 1833 kubelet_node_status.go:76] "Attempting to register node" node="10.0.0.151"
Jan 30 13:15:56.965444 kubelet[1833]: I0130 13:15:56.965396 1833 kubelet_node_status.go:79] "Successfully registered node" node="10.0.0.151"
Jan 30 13:15:56.965444 kubelet[1833]: E0130 13:15:56.965437 1833 kubelet_node_status.go:549] "Error updating node status, will retry" err="error getting node \"10.0.0.151\": node \"10.0.0.151\" not found"
Jan 30 13:15:56.969561 kubelet[1833]: E0130 13:15:56.969534 1833 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"10.0.0.151\" not found"
Jan 30 13:15:57.070779 kubelet[1833]: E0130 13:15:57.070614 1833 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"10.0.0.151\" not found"
Jan 30 13:15:57.171552 kubelet[1833]: E0130 13:15:57.171493 1833 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"10.0.0.151\" not found"
Jan 30 13:15:57.189994 kubelet[1833]: I0130 13:15:57.189947 1833 transport.go:147] "Certificate rotation detected, shutting down client connections to start using new credentials"
Jan 30 13:15:57.190154 kubelet[1833]: W0130 13:15:57.190130 1833 reflector.go:492] k8s.io/client-go/informers/factory.go:160: watch of *v1.RuntimeClass ended with: very short watch: k8s.io/client-go/informers/factory.go:160: Unexpected watch close - watch lasted less than a second and no items received
Jan 30 13:15:57.270301 kubelet[1833]: E0130 13:15:57.270247 1833 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 30 13:15:57.272480 kubelet[1833]: E0130 13:15:57.272444 1833 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"10.0.0.151\" not found"
Jan 30 13:15:57.337082 sudo[1686]: pam_unix(sudo:session): session closed for user root
Jan 30 13:15:57.338572 sshd[1685]: Connection closed by 10.0.0.1 port 36274
Jan 30 13:15:57.338987 sshd-session[1683]: pam_unix(sshd:session): session closed for user core
Jan 30 13:15:57.343052 systemd[1]: sshd@8-10.0.0.151:22-10.0.0.1:36274.service: Deactivated successfully.
Jan 30 13:15:57.344971 systemd[1]: session-9.scope: Deactivated successfully.
Jan 30 13:15:57.345705 systemd-logind[1481]: Session 9 logged out. Waiting for processes to exit.
Jan 30 13:15:57.346606 systemd-logind[1481]: Removed session 9.
Jan 30 13:15:57.373521 kubelet[1833]: E0130 13:15:57.373436 1833 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"10.0.0.151\" not found"
Jan 30 13:15:57.474736 kubelet[1833]: I0130 13:15:57.474696 1833 kuberuntime_manager.go:1702] "Updating runtime config through cri with podcidr" CIDR="192.168.1.0/24"
Jan 30 13:15:57.474980 containerd[1502]: time="2025-01-30T13:15:57.474946632Z" level=info msg="No cni config template is specified, wait for other system components to drop the config."
Jan 30 13:15:57.475378 kubelet[1833]: I0130 13:15:57.475107 1833 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.1.0/24"
Jan 30 13:15:58.271272 kubelet[1833]: E0130 13:15:58.271214 1833 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 30 13:15:58.271272 kubelet[1833]: I0130 13:15:58.271238 1833 apiserver.go:52] "Watching apiserver"
Jan 30 13:15:58.280262 systemd[1]: Created slice kubepods-burstable-pod7cf339f9_7212_4de3_a030_6a1b4749162b.slice - libcontainer container kubepods-burstable-pod7cf339f9_7212_4de3_a030_6a1b4749162b.slice.
Jan 30 13:15:58.287319 kubelet[1833]: I0130 13:15:58.287271 1833 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world"
Jan 30 13:15:58.292158 systemd[1]: Created slice kubepods-besteffort-podd4110a66_74b2_405c_b538_1dc4fdbbc7a2.slice - libcontainer container kubepods-besteffort-podd4110a66_74b2_405c_b538_1dc4fdbbc7a2.slice.
Jan 30 13:15:58.296606 kubelet[1833]: I0130 13:15:58.296565 1833 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/7cf339f9-7212-4de3-a030-6a1b4749162b-clustermesh-secrets\") pod \"cilium-zvwmb\" (UID: \"7cf339f9-7212-4de3-a030-6a1b4749162b\") " pod="kube-system/cilium-zvwmb"
Jan 30 13:15:58.296606 kubelet[1833]: I0130 13:15:58.296611 1833 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/7cf339f9-7212-4de3-a030-6a1b4749162b-cilium-config-path\") pod \"cilium-zvwmb\" (UID: \"7cf339f9-7212-4de3-a030-6a1b4749162b\") " pod="kube-system/cilium-zvwmb"
Jan 30 13:15:58.296761 kubelet[1833]: I0130 13:15:58.296649 1833 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/7cf339f9-7212-4de3-a030-6a1b4749162b-host-proc-sys-net\") pod \"cilium-zvwmb\" (UID: \"7cf339f9-7212-4de3-a030-6a1b4749162b\") " pod="kube-system/cilium-zvwmb"
Jan 30 13:15:58.296761 kubelet[1833]: I0130 13:15:58.296673 1833 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/d4110a66-74b2-405c-b538-1dc4fdbbc7a2-kube-proxy\") pod \"kube-proxy-45fcx\" (UID: \"d4110a66-74b2-405c-b538-1dc4fdbbc7a2\") " pod="kube-system/kube-proxy-45fcx"
Jan 30 13:15:58.296761 kubelet[1833]: I0130 13:15:58.296696 1833 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/d4110a66-74b2-405c-b538-1dc4fdbbc7a2-lib-modules\") pod \"kube-proxy-45fcx\" (UID: \"d4110a66-74b2-405c-b538-1dc4fdbbc7a2\") " pod="kube-system/kube-proxy-45fcx"
Jan 30 13:15:58.296761 kubelet[1833]: I0130 13:15:58.296715 1833 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/7cf339f9-7212-4de3-a030-6a1b4749162b-cilium-cgroup\") pod \"cilium-zvwmb\" (UID: \"7cf339f9-7212-4de3-a030-6a1b4749162b\") " pod="kube-system/cilium-zvwmb"
Jan 30 13:15:58.296761 kubelet[1833]: I0130 13:15:58.296736 1833 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/7cf339f9-7212-4de3-a030-6a1b4749162b-cni-path\") pod \"cilium-zvwmb\" (UID: \"7cf339f9-7212-4de3-a030-6a1b4749162b\") " pod="kube-system/cilium-zvwmb"
Jan 30 13:15:58.296899 kubelet[1833]: I0130 13:15:58.296773 1833 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/7cf339f9-7212-4de3-a030-6a1b4749162b-lib-modules\") pod \"cilium-zvwmb\" (UID: \"7cf339f9-7212-4de3-a030-6a1b4749162b\") " pod="kube-system/cilium-zvwmb"
Jan 30 13:15:58.296899 kubelet[1833]: I0130 13:15:58.296834 1833 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-78vb9\" (UniqueName: \"kubernetes.io/projected/d4110a66-74b2-405c-b538-1dc4fdbbc7a2-kube-api-access-78vb9\") pod \"kube-proxy-45fcx\" (UID: \"d4110a66-74b2-405c-b538-1dc4fdbbc7a2\") " pod="kube-system/kube-proxy-45fcx"
Jan 30 13:15:58.296955 kubelet[1833]: I0130 13:15:58.296869 1833 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/7cf339f9-7212-4de3-a030-6a1b4749162b-hostproc\") pod \"cilium-zvwmb\" (UID: \"7cf339f9-7212-4de3-a030-6a1b4749162b\") " pod="kube-system/cilium-zvwmb"
Jan 30 13:15:58.296955 kubelet[1833]: I0130 13:15:58.296944 1833 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/7cf339f9-7212-4de3-a030-6a1b4749162b-etc-cni-netd\") pod \"cilium-zvwmb\" (UID: \"7cf339f9-7212-4de3-a030-6a1b4749162b\") " pod="kube-system/cilium-zvwmb"
Jan 30 13:15:58.297006 kubelet[1833]: I0130 13:15:58.296960 1833 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/7cf339f9-7212-4de3-a030-6a1b4749162b-xtables-lock\") pod \"cilium-zvwmb\" (UID: \"7cf339f9-7212-4de3-a030-6a1b4749162b\") " pod="kube-system/cilium-zvwmb"
Jan 30 13:15:58.297006 kubelet[1833]: I0130 13:15:58.296977 1833 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/7cf339f9-7212-4de3-a030-6a1b4749162b-hubble-tls\") pod \"cilium-zvwmb\" (UID: \"7cf339f9-7212-4de3-a030-6a1b4749162b\") " pod="kube-system/cilium-zvwmb"
Jan 30 13:15:58.297049 kubelet[1833]: I0130 13:15:58.297008 1833 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/7cf339f9-7212-4de3-a030-6a1b4749162b-bpf-maps\") pod \"cilium-zvwmb\" (UID: \"7cf339f9-7212-4de3-a030-6a1b4749162b\") " pod="kube-system/cilium-zvwmb"
Jan 30 13:15:58.297049 kubelet[1833]: I0130 13:15:58.297022 1833 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/7cf339f9-7212-4de3-a030-6a1b4749162b-host-proc-sys-kernel\") pod \"cilium-zvwmb\" (UID: \"7cf339f9-7212-4de3-a030-6a1b4749162b\") " pod="kube-system/cilium-zvwmb"
Jan 30 13:15:58.297049 kubelet[1833]: I0130 13:15:58.297038 1833 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7c4tg\" (UniqueName: \"kubernetes.io/projected/7cf339f9-7212-4de3-a030-6a1b4749162b-kube-api-access-7c4tg\") pod \"cilium-zvwmb\" (UID: \"7cf339f9-7212-4de3-a030-6a1b4749162b\") " pod="kube-system/cilium-zvwmb"
Jan 30 13:15:58.297123 kubelet[1833]: I0130 13:15:58.297053 1833 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/d4110a66-74b2-405c-b538-1dc4fdbbc7a2-xtables-lock\") pod \"kube-proxy-45fcx\" (UID: \"d4110a66-74b2-405c-b538-1dc4fdbbc7a2\") " pod="kube-system/kube-proxy-45fcx"
Jan 30 13:15:58.297123 kubelet[1833]: I0130 13:15:58.297109 1833 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/7cf339f9-7212-4de3-a030-6a1b4749162b-cilium-run\") pod \"cilium-zvwmb\" (UID: \"7cf339f9-7212-4de3-a030-6a1b4749162b\") " pod="kube-system/cilium-zvwmb"
Jan 30 13:15:58.590177 kubelet[1833]: E0130 13:15:58.590043 1833 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 30 13:15:58.590788 containerd[1502]: time="2025-01-30T13:15:58.590711770Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-zvwmb,Uid:7cf339f9-7212-4de3-a030-6a1b4749162b,Namespace:kube-system,Attempt:0,}"
Jan 30 13:15:58.599187 kubelet[1833]: E0130 13:15:58.599148 1833 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 30 13:15:58.599672 containerd[1502]: time="2025-01-30T13:15:58.599633371Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-45fcx,Uid:d4110a66-74b2-405c-b538-1dc4fdbbc7a2,Namespace:kube-system,Attempt:0,}"
Jan 30 13:15:59.081066 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1618486176.mount: Deactivated successfully.
Jan 30 13:15:59.091384 containerd[1502]: time="2025-01-30T13:15:59.091316533Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Jan 30 13:15:59.095350 containerd[1502]: time="2025-01-30T13:15:59.095314851Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=312056"
Jan 30 13:15:59.096354 containerd[1502]: time="2025-01-30T13:15:59.096316169Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Jan 30 13:15:59.097350 containerd[1502]: time="2025-01-30T13:15:59.097289955Z" level=info msg="ImageCreate event name:\"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Jan 30 13:15:59.098032 containerd[1502]: time="2025-01-30T13:15:59.097996330Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0"
Jan 30 13:15:59.100517 containerd[1502]: time="2025-01-30T13:15:59.100487051Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Jan 30 13:15:59.101403 containerd[1502]: time="2025-01-30T13:15:59.101378422Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 501.640194ms"
Jan 30 13:15:59.103498 containerd[1502]: time="2025-01-30T13:15:59.103466278Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 512.629103ms"
Jan 30 13:15:59.204380 containerd[1502]: time="2025-01-30T13:15:59.204280210Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Jan 30 13:15:59.204380 containerd[1502]: time="2025-01-30T13:15:59.204336525Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Jan 30 13:15:59.204380 containerd[1502]: time="2025-01-30T13:15:59.204351564Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 30 13:15:59.204559 containerd[1502]: time="2025-01-30T13:15:59.204427576Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 30 13:15:59.206705 containerd[1502]: time="2025-01-30T13:15:59.206434730Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Jan 30 13:15:59.206705 containerd[1502]: time="2025-01-30T13:15:59.206515842Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Jan 30 13:15:59.206705 containerd[1502]: time="2025-01-30T13:15:59.206532193Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 30 13:15:59.206901 containerd[1502]: time="2025-01-30T13:15:59.206691872Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 30 13:15:59.271662 kubelet[1833]: E0130 13:15:59.271614 1833 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 30 13:15:59.289032 systemd[1]: Started cri-containerd-e522388b119ff0080f28c2202348959421f5c56e4b9387fadaf2d81eba8fa06e.scope - libcontainer container e522388b119ff0080f28c2202348959421f5c56e4b9387fadaf2d81eba8fa06e.
Jan 30 13:15:59.290805 systemd[1]: Started cri-containerd-f339756ff047e93634fcc57fad0bed4ebdeda902b1a27fc41573b2e14ac8cfc1.scope - libcontainer container f339756ff047e93634fcc57fad0bed4ebdeda902b1a27fc41573b2e14ac8cfc1.
Jan 30 13:15:59.315943 containerd[1502]: time="2025-01-30T13:15:59.315861965Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-45fcx,Uid:d4110a66-74b2-405c-b538-1dc4fdbbc7a2,Namespace:kube-system,Attempt:0,} returns sandbox id \"e522388b119ff0080f28c2202348959421f5c56e4b9387fadaf2d81eba8fa06e\""
Jan 30 13:15:59.316062 containerd[1502]: time="2025-01-30T13:15:59.316001747Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-zvwmb,Uid:7cf339f9-7212-4de3-a030-6a1b4749162b,Namespace:kube-system,Attempt:0,} returns sandbox id \"f339756ff047e93634fcc57fad0bed4ebdeda902b1a27fc41573b2e14ac8cfc1\""
Jan 30 13:15:59.316806 kubelet[1833]: E0130 13:15:59.316775 1833 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 30 13:15:59.316806 kubelet[1833]: E0130 13:15:59.316796 1833 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 30 13:15:59.317753 containerd[1502]: time="2025-01-30T13:15:59.317729958Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.1\""
Jan 30 13:16:00.272784 kubelet[1833]: E0130 13:16:00.272738 1833 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 30 13:16:01.270390 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1378299463.mount: Deactivated successfully.
Jan 30 13:16:01.273534 kubelet[1833]: E0130 13:16:01.273507 1833 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 30 13:16:01.524090 containerd[1502]: time="2025-01-30T13:16:01.523963304Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.32.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 30 13:16:01.524691 containerd[1502]: time="2025-01-30T13:16:01.524657916Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.32.1: active requests=0, bytes read=30909466"
Jan 30 13:16:01.525626 containerd[1502]: time="2025-01-30T13:16:01.525598550Z" level=info msg="ImageCreate event name:\"sha256:e29f9c7391fd92d96bc72026fc755b0f9589536e36ecd7102161f1ded087897a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 30 13:16:01.527515 containerd[1502]: time="2025-01-30T13:16:01.527482704Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:0244651801747edf2368222f93a7d17cba6e668a890db72532d6b67a7e06dca5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 30 13:16:01.528218 containerd[1502]: time="2025-01-30T13:16:01.528187766Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.32.1\" with image id \"sha256:e29f9c7391fd92d96bc72026fc755b0f9589536e36ecd7102161f1ded087897a\", repo tag \"registry.k8s.io/kube-proxy:v1.32.1\", repo digest \"registry.k8s.io/kube-proxy@sha256:0244651801747edf2368222f93a7d17cba6e668a890db72532d6b67a7e06dca5\", size \"30908485\" in 2.210361818s"
Jan 30 13:16:01.528252 containerd[1502]: time="2025-01-30T13:16:01.528216540Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.1\" returns image reference \"sha256:e29f9c7391fd92d96bc72026fc755b0f9589536e36ecd7102161f1ded087897a\""
Jan 30 13:16:01.529396 containerd[1502]: time="2025-01-30T13:16:01.529365705Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\""
Jan 30 13:16:01.530116 containerd[1502]: time="2025-01-30T13:16:01.530091085Z" level=info msg="CreateContainer within sandbox \"e522388b119ff0080f28c2202348959421f5c56e4b9387fadaf2d81eba8fa06e\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}"
Jan 30 13:16:01.544759 containerd[1502]: time="2025-01-30T13:16:01.544716463Z" level=info msg="CreateContainer within sandbox \"e522388b119ff0080f28c2202348959421f5c56e4b9387fadaf2d81eba8fa06e\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"ce342b530e9f772f9978cc6c4c72e3ebaebd7117328e49595832ac4dac9315d6\""
Jan 30 13:16:01.545183 containerd[1502]: time="2025-01-30T13:16:01.545162239Z" level=info msg="StartContainer for \"ce342b530e9f772f9978cc6c4c72e3ebaebd7117328e49595832ac4dac9315d6\""
Jan 30 13:16:01.572997 systemd[1]: Started cri-containerd-ce342b530e9f772f9978cc6c4c72e3ebaebd7117328e49595832ac4dac9315d6.scope - libcontainer container ce342b530e9f772f9978cc6c4c72e3ebaebd7117328e49595832ac4dac9315d6.
Jan 30 13:16:01.601273 containerd[1502]: time="2025-01-30T13:16:01.601223965Z" level=info msg="StartContainer for \"ce342b530e9f772f9978cc6c4c72e3ebaebd7117328e49595832ac4dac9315d6\" returns successfully" Jan 30 13:16:01.868840 kubelet[1833]: E0130 13:16:01.868740 1833 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 30 13:16:01.876957 kubelet[1833]: I0130 13:16:01.876910 1833 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-45fcx" podStartSLOduration=3.665340583 podStartE2EDuration="5.876888444s" podCreationTimestamp="2025-01-30 13:15:56 +0000 UTC" firstStartedPulling="2025-01-30 13:15:59.317316323 +0000 UTC m=+3.457168957" lastFinishedPulling="2025-01-30 13:16:01.528864184 +0000 UTC m=+5.668716818" observedRunningTime="2025-01-30 13:16:01.876585336 +0000 UTC m=+6.016437970" watchObservedRunningTime="2025-01-30 13:16:01.876888444 +0000 UTC m=+6.016741078" Jan 30 13:16:02.274172 kubelet[1833]: E0130 13:16:02.274053 1833 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 30 13:16:02.869355 kubelet[1833]: E0130 13:16:02.869319 1833 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 30 13:16:03.274927 kubelet[1833]: E0130 13:16:03.274786 1833 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 30 13:16:04.275749 kubelet[1833]: E0130 13:16:04.275707 1833 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 30 13:16:05.276396 kubelet[1833]: E0130 13:16:05.276356 1833 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" 
path="/etc/kubernetes/manifests" Jan 30 13:16:05.440635 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1398193339.mount: Deactivated successfully. Jan 30 13:16:06.277391 kubelet[1833]: E0130 13:16:06.277350 1833 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 30 13:16:07.277781 kubelet[1833]: E0130 13:16:07.277740 1833 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 30 13:16:08.157795 containerd[1502]: time="2025-01-30T13:16:08.157744020Z" level=info msg="ImageCreate event name:\"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 13:16:08.158619 containerd[1502]: time="2025-01-30T13:16:08.158557436Z" level=info msg="stop pulling image quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5: active requests=0, bytes read=166730503" Jan 30 13:16:08.159858 containerd[1502]: time="2025-01-30T13:16:08.159833749Z" level=info msg="ImageCreate event name:\"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 13:16:08.161266 containerd[1502]: time="2025-01-30T13:16:08.161235548Z" level=info msg="Pulled image \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" with image id \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\", repo tag \"\", repo digest \"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\", size \"166719855\" in 6.631837252s" Jan 30 13:16:08.161266 containerd[1502]: time="2025-01-30T13:16:08.161263771Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" returns image 
reference \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\"" Jan 30 13:16:08.163089 containerd[1502]: time="2025-01-30T13:16:08.163052385Z" level=info msg="CreateContainer within sandbox \"f339756ff047e93634fcc57fad0bed4ebdeda902b1a27fc41573b2e14ac8cfc1\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Jan 30 13:16:08.176653 containerd[1502]: time="2025-01-30T13:16:08.176589392Z" level=info msg="CreateContainer within sandbox \"f339756ff047e93634fcc57fad0bed4ebdeda902b1a27fc41573b2e14ac8cfc1\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"bcf21372e2ab3b5c071f642ea37687ea440d34362d3b09e32b4f658fd4a69799\"" Jan 30 13:16:08.177384 containerd[1502]: time="2025-01-30T13:16:08.177323148Z" level=info msg="StartContainer for \"bcf21372e2ab3b5c071f642ea37687ea440d34362d3b09e32b4f658fd4a69799\"" Jan 30 13:16:08.199468 systemd[1]: run-containerd-runc-k8s.io-bcf21372e2ab3b5c071f642ea37687ea440d34362d3b09e32b4f658fd4a69799-runc.bTsYKw.mount: Deactivated successfully. Jan 30 13:16:08.211030 systemd[1]: Started cri-containerd-bcf21372e2ab3b5c071f642ea37687ea440d34362d3b09e32b4f658fd4a69799.scope - libcontainer container bcf21372e2ab3b5c071f642ea37687ea440d34362d3b09e32b4f658fd4a69799. Jan 30 13:16:08.237481 containerd[1502]: time="2025-01-30T13:16:08.237439568Z" level=info msg="StartContainer for \"bcf21372e2ab3b5c071f642ea37687ea440d34362d3b09e32b4f658fd4a69799\" returns successfully" Jan 30 13:16:08.248248 systemd[1]: cri-containerd-bcf21372e2ab3b5c071f642ea37687ea440d34362d3b09e32b4f658fd4a69799.scope: Deactivated successfully. 
Jan 30 13:16:08.278092 kubelet[1833]: E0130 13:16:08.278050 1833 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 30 13:16:08.735969 containerd[1502]: time="2025-01-30T13:16:08.735913565Z" level=info msg="shim disconnected" id=bcf21372e2ab3b5c071f642ea37687ea440d34362d3b09e32b4f658fd4a69799 namespace=k8s.io Jan 30 13:16:08.735969 containerd[1502]: time="2025-01-30T13:16:08.735966915Z" level=warning msg="cleaning up after shim disconnected" id=bcf21372e2ab3b5c071f642ea37687ea440d34362d3b09e32b4f658fd4a69799 namespace=k8s.io Jan 30 13:16:08.736136 containerd[1502]: time="2025-01-30T13:16:08.735978847Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 30 13:16:08.878597 kubelet[1833]: E0130 13:16:08.878569 1833 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 30 13:16:08.879976 containerd[1502]: time="2025-01-30T13:16:08.879928284Z" level=info msg="CreateContainer within sandbox \"f339756ff047e93634fcc57fad0bed4ebdeda902b1a27fc41573b2e14ac8cfc1\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Jan 30 13:16:08.894924 containerd[1502]: time="2025-01-30T13:16:08.894889501Z" level=info msg="CreateContainer within sandbox \"f339756ff047e93634fcc57fad0bed4ebdeda902b1a27fc41573b2e14ac8cfc1\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"2eb04ae7e6b266cd262c0b0bba5260edae6932a7ef3eb07b0c4772fc8e370547\"" Jan 30 13:16:08.895298 containerd[1502]: time="2025-01-30T13:16:08.895276677Z" level=info msg="StartContainer for \"2eb04ae7e6b266cd262c0b0bba5260edae6932a7ef3eb07b0c4772fc8e370547\"" Jan 30 13:16:08.923007 systemd[1]: Started cri-containerd-2eb04ae7e6b266cd262c0b0bba5260edae6932a7ef3eb07b0c4772fc8e370547.scope - libcontainer container 2eb04ae7e6b266cd262c0b0bba5260edae6932a7ef3eb07b0c4772fc8e370547. 
Jan 30 13:16:08.946536 containerd[1502]: time="2025-01-30T13:16:08.946497655Z" level=info msg="StartContainer for \"2eb04ae7e6b266cd262c0b0bba5260edae6932a7ef3eb07b0c4772fc8e370547\" returns successfully" Jan 30 13:16:08.958173 systemd[1]: systemd-sysctl.service: Deactivated successfully. Jan 30 13:16:08.958665 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Jan 30 13:16:08.958751 systemd[1]: Stopping systemd-sysctl.service - Apply Kernel Variables... Jan 30 13:16:08.964685 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Jan 30 13:16:08.965296 systemd[1]: cri-containerd-2eb04ae7e6b266cd262c0b0bba5260edae6932a7ef3eb07b0c4772fc8e370547.scope: Deactivated successfully. Jan 30 13:16:08.978627 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Jan 30 13:16:08.996546 containerd[1502]: time="2025-01-30T13:16:08.996413204Z" level=info msg="shim disconnected" id=2eb04ae7e6b266cd262c0b0bba5260edae6932a7ef3eb07b0c4772fc8e370547 namespace=k8s.io Jan 30 13:16:08.996546 containerd[1502]: time="2025-01-30T13:16:08.996471985Z" level=warning msg="cleaning up after shim disconnected" id=2eb04ae7e6b266cd262c0b0bba5260edae6932a7ef3eb07b0c4772fc8e370547 namespace=k8s.io Jan 30 13:16:08.996546 containerd[1502]: time="2025-01-30T13:16:08.996481753Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 30 13:16:09.171197 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-bcf21372e2ab3b5c071f642ea37687ea440d34362d3b09e32b4f658fd4a69799-rootfs.mount: Deactivated successfully. 
Jan 30 13:16:09.278315 kubelet[1833]: E0130 13:16:09.278235 1833 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 30 13:16:09.882133 kubelet[1833]: E0130 13:16:09.882089 1833 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 30 13:16:09.884074 containerd[1502]: time="2025-01-30T13:16:09.884004856Z" level=info msg="CreateContainer within sandbox \"f339756ff047e93634fcc57fad0bed4ebdeda902b1a27fc41573b2e14ac8cfc1\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Jan 30 13:16:09.900805 containerd[1502]: time="2025-01-30T13:16:09.900757483Z" level=info msg="CreateContainer within sandbox \"f339756ff047e93634fcc57fad0bed4ebdeda902b1a27fc41573b2e14ac8cfc1\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"adac785fc8f1d0a93c71d90f9aa986814fe5a1a42c98927b053904cdfd9a5303\"" Jan 30 13:16:09.901200 containerd[1502]: time="2025-01-30T13:16:09.901173483Z" level=info msg="StartContainer for \"adac785fc8f1d0a93c71d90f9aa986814fe5a1a42c98927b053904cdfd9a5303\"" Jan 30 13:16:09.932011 systemd[1]: Started cri-containerd-adac785fc8f1d0a93c71d90f9aa986814fe5a1a42c98927b053904cdfd9a5303.scope - libcontainer container adac785fc8f1d0a93c71d90f9aa986814fe5a1a42c98927b053904cdfd9a5303. Jan 30 13:16:09.960265 containerd[1502]: time="2025-01-30T13:16:09.960224024Z" level=info msg="StartContainer for \"adac785fc8f1d0a93c71d90f9aa986814fe5a1a42c98927b053904cdfd9a5303\" returns successfully" Jan 30 13:16:09.961202 systemd[1]: cri-containerd-adac785fc8f1d0a93c71d90f9aa986814fe5a1a42c98927b053904cdfd9a5303.scope: Deactivated successfully. 
Jan 30 13:16:09.982871 containerd[1502]: time="2025-01-30T13:16:09.982818567Z" level=info msg="shim disconnected" id=adac785fc8f1d0a93c71d90f9aa986814fe5a1a42c98927b053904cdfd9a5303 namespace=k8s.io Jan 30 13:16:09.982871 containerd[1502]: time="2025-01-30T13:16:09.982863721Z" level=warning msg="cleaning up after shim disconnected" id=adac785fc8f1d0a93c71d90f9aa986814fe5a1a42c98927b053904cdfd9a5303 namespace=k8s.io Jan 30 13:16:09.982871 containerd[1502]: time="2025-01-30T13:16:09.982890241Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 30 13:16:10.170707 systemd[1]: run-containerd-runc-k8s.io-adac785fc8f1d0a93c71d90f9aa986814fe5a1a42c98927b053904cdfd9a5303-runc.HYLZOj.mount: Deactivated successfully. Jan 30 13:16:10.170811 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-adac785fc8f1d0a93c71d90f9aa986814fe5a1a42c98927b053904cdfd9a5303-rootfs.mount: Deactivated successfully. Jan 30 13:16:10.278687 kubelet[1833]: E0130 13:16:10.278648 1833 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 30 13:16:10.884926 kubelet[1833]: E0130 13:16:10.884867 1833 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 30 13:16:10.888222 containerd[1502]: time="2025-01-30T13:16:10.888061881Z" level=info msg="CreateContainer within sandbox \"f339756ff047e93634fcc57fad0bed4ebdeda902b1a27fc41573b2e14ac8cfc1\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Jan 30 13:16:10.907801 containerd[1502]: time="2025-01-30T13:16:10.907760433Z" level=info msg="CreateContainer within sandbox \"f339756ff047e93634fcc57fad0bed4ebdeda902b1a27fc41573b2e14ac8cfc1\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"bb6d5fedc7ef6886449691549663a0e57edbd23d0d2c5a638b34ccc189a7145d\"" Jan 30 13:16:10.908231 containerd[1502]: 
time="2025-01-30T13:16:10.908207271Z" level=info msg="StartContainer for \"bb6d5fedc7ef6886449691549663a0e57edbd23d0d2c5a638b34ccc189a7145d\"" Jan 30 13:16:10.933998 systemd[1]: Started cri-containerd-bb6d5fedc7ef6886449691549663a0e57edbd23d0d2c5a638b34ccc189a7145d.scope - libcontainer container bb6d5fedc7ef6886449691549663a0e57edbd23d0d2c5a638b34ccc189a7145d. Jan 30 13:16:10.956040 systemd[1]: cri-containerd-bb6d5fedc7ef6886449691549663a0e57edbd23d0d2c5a638b34ccc189a7145d.scope: Deactivated successfully. Jan 30 13:16:10.957674 containerd[1502]: time="2025-01-30T13:16:10.957638031Z" level=info msg="StartContainer for \"bb6d5fedc7ef6886449691549663a0e57edbd23d0d2c5a638b34ccc189a7145d\" returns successfully" Jan 30 13:16:10.979930 containerd[1502]: time="2025-01-30T13:16:10.979858382Z" level=info msg="shim disconnected" id=bb6d5fedc7ef6886449691549663a0e57edbd23d0d2c5a638b34ccc189a7145d namespace=k8s.io Jan 30 13:16:10.979930 containerd[1502]: time="2025-01-30T13:16:10.979925859Z" level=warning msg="cleaning up after shim disconnected" id=bb6d5fedc7ef6886449691549663a0e57edbd23d0d2c5a638b34ccc189a7145d namespace=k8s.io Jan 30 13:16:10.979930 containerd[1502]: time="2025-01-30T13:16:10.979936308Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 30 13:16:11.279620 kubelet[1833]: E0130 13:16:11.279492 1833 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 30 13:16:11.887841 kubelet[1833]: E0130 13:16:11.887817 1833 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 30 13:16:11.889327 containerd[1502]: time="2025-01-30T13:16:11.889297305Z" level=info msg="CreateContainer within sandbox \"f339756ff047e93634fcc57fad0bed4ebdeda902b1a27fc41573b2e14ac8cfc1\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Jan 30 13:16:11.905264 containerd[1502]: 
time="2025-01-30T13:16:11.905219805Z" level=info msg="CreateContainer within sandbox \"f339756ff047e93634fcc57fad0bed4ebdeda902b1a27fc41573b2e14ac8cfc1\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"90d3505a0140348a0a993b992b814298af59b85056f7c2027867e5c593a31a91\"" Jan 30 13:16:11.905640 containerd[1502]: time="2025-01-30T13:16:11.905578808Z" level=info msg="StartContainer for \"90d3505a0140348a0a993b992b814298af59b85056f7c2027867e5c593a31a91\"" Jan 30 13:16:11.933049 systemd[1]: Started cri-containerd-90d3505a0140348a0a993b992b814298af59b85056f7c2027867e5c593a31a91.scope - libcontainer container 90d3505a0140348a0a993b992b814298af59b85056f7c2027867e5c593a31a91. Jan 30 13:16:11.959919 containerd[1502]: time="2025-01-30T13:16:11.959886344Z" level=info msg="StartContainer for \"90d3505a0140348a0a993b992b814298af59b85056f7c2027867e5c593a31a91\" returns successfully" Jan 30 13:16:12.115676 kubelet[1833]: I0130 13:16:12.115628 1833 kubelet_node_status.go:502] "Fast updating node status as it just became ready" Jan 30 13:16:12.280047 kubelet[1833]: E0130 13:16:12.279919 1833 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 30 13:16:12.426909 kernel: Initializing XFRM netlink socket Jan 30 13:16:12.892046 kubelet[1833]: E0130 13:16:12.892010 1833 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 30 13:16:12.904345 kubelet[1833]: I0130 13:16:12.904285 1833 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-zvwmb" podStartSLOduration=8.059905009 podStartE2EDuration="16.904269249s" podCreationTimestamp="2025-01-30 13:15:56 +0000 UTC" firstStartedPulling="2025-01-30 13:15:59.317644578 +0000 UTC m=+3.457497212" lastFinishedPulling="2025-01-30 13:16:08.162008818 +0000 UTC m=+12.301861452" 
observedRunningTime="2025-01-30 13:16:12.904084002 +0000 UTC m=+17.043936656" watchObservedRunningTime="2025-01-30 13:16:12.904269249 +0000 UTC m=+17.044121883" Jan 30 13:16:13.280706 kubelet[1833]: E0130 13:16:13.280560 1833 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 30 13:16:13.394757 kubelet[1833]: I0130 13:16:13.394698 1833 status_manager.go:890] "Failed to get status for pod" podUID="a5f02211-80ff-4b35-96b9-8800c90ef77e" pod="default/nginx-deployment-7fcdb87857-4hhkv" err="pods \"nginx-deployment-7fcdb87857-4hhkv\" is forbidden: User \"system:node:10.0.0.151\" cannot get resource \"pods\" in API group \"\" in the namespace \"default\": no relationship found between node '10.0.0.151' and this object" Jan 30 13:16:13.394757 kubelet[1833]: W0130 13:16:13.394743 1833 reflector.go:569] object-"default"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:10.0.0.151" cannot list resource "configmaps" in API group "" in the namespace "default": no relationship found between node '10.0.0.151' and this object Jan 30 13:16:13.394937 kubelet[1833]: E0130 13:16:13.394775 1833 reflector.go:166] "Unhandled Error" err="object-\"default\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"kube-root-ca.crt\" is forbidden: User \"system:node:10.0.0.151\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"default\": no relationship found between node '10.0.0.151' and this object" logger="UnhandledError" Jan 30 13:16:13.398026 systemd[1]: Created slice kubepods-besteffort-poda5f02211_80ff_4b35_96b9_8800c90ef77e.slice - libcontainer container kubepods-besteffort-poda5f02211_80ff_4b35_96b9_8800c90ef77e.slice. 
Jan 30 13:16:13.490163 kubelet[1833]: I0130 13:16:13.490083 1833 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s6mv2\" (UniqueName: \"kubernetes.io/projected/a5f02211-80ff-4b35-96b9-8800c90ef77e-kube-api-access-s6mv2\") pod \"nginx-deployment-7fcdb87857-4hhkv\" (UID: \"a5f02211-80ff-4b35-96b9-8800c90ef77e\") " pod="default/nginx-deployment-7fcdb87857-4hhkv" Jan 30 13:16:13.893105 kubelet[1833]: E0130 13:16:13.893066 1833 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 30 13:16:14.119701 systemd-networkd[1416]: cilium_host: Link UP Jan 30 13:16:14.119932 systemd-networkd[1416]: cilium_net: Link UP Jan 30 13:16:14.120681 systemd-networkd[1416]: cilium_net: Gained carrier Jan 30 13:16:14.121128 systemd-networkd[1416]: cilium_host: Gained carrier Jan 30 13:16:14.121345 systemd-networkd[1416]: cilium_net: Gained IPv6LL Jan 30 13:16:14.121786 systemd-networkd[1416]: cilium_host: Gained IPv6LL Jan 30 13:16:14.222710 systemd-networkd[1416]: cilium_vxlan: Link UP Jan 30 13:16:14.222720 systemd-networkd[1416]: cilium_vxlan: Gained carrier Jan 30 13:16:14.281677 kubelet[1833]: E0130 13:16:14.281606 1833 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 30 13:16:14.439923 kernel: NET: Registered PF_ALG protocol family Jan 30 13:16:14.600970 containerd[1502]: time="2025-01-30T13:16:14.600841960Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-7fcdb87857-4hhkv,Uid:a5f02211-80ff-4b35-96b9-8800c90ef77e,Namespace:default,Attempt:0,}" Jan 30 13:16:14.894400 kubelet[1833]: E0130 13:16:14.894303 1833 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 30 13:16:15.054688 
systemd-networkd[1416]: lxc_health: Link UP Jan 30 13:16:15.067047 systemd-networkd[1416]: lxc_health: Gained carrier Jan 30 13:16:15.282097 kubelet[1833]: E0130 13:16:15.281958 1833 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 30 13:16:15.429169 systemd-networkd[1416]: lxcc2f51d755b15: Link UP Jan 30 13:16:15.439902 kernel: eth0: renamed from tmpaeda3 Jan 30 13:16:15.449266 systemd-networkd[1416]: lxcc2f51d755b15: Gained carrier Jan 30 13:16:15.930102 systemd-networkd[1416]: cilium_vxlan: Gained IPv6LL Jan 30 13:16:16.269806 kubelet[1833]: E0130 13:16:16.269690 1833 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 30 13:16:16.282113 kubelet[1833]: E0130 13:16:16.282083 1833 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 30 13:16:16.591926 kubelet[1833]: E0130 13:16:16.591891 1833 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 30 13:16:16.762149 systemd-networkd[1416]: lxc_health: Gained IPv6LL Jan 30 13:16:17.282800 kubelet[1833]: E0130 13:16:17.282742 1833 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 30 13:16:17.466019 systemd-networkd[1416]: lxcc2f51d755b15: Gained IPv6LL Jan 30 13:16:18.283276 kubelet[1833]: E0130 13:16:18.283196 1833 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 30 13:16:18.672361 containerd[1502]: time="2025-01-30T13:16:18.671819561Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 30 13:16:18.672361 containerd[1502]: time="2025-01-30T13:16:18.672325819Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 30 13:16:18.672361 containerd[1502]: time="2025-01-30T13:16:18.672337301Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 13:16:18.672945 containerd[1502]: time="2025-01-30T13:16:18.672400942Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 13:16:18.685937 systemd[1]: run-containerd-runc-k8s.io-aeda3119d8f7537fbd18885033a928cdc1822d7edb72e74c1f7c4aecef484599-runc.eXAeL5.mount: Deactivated successfully. Jan 30 13:16:18.702042 systemd[1]: Started cri-containerd-aeda3119d8f7537fbd18885033a928cdc1822d7edb72e74c1f7c4aecef484599.scope - libcontainer container aeda3119d8f7537fbd18885033a928cdc1822d7edb72e74c1f7c4aecef484599. 
Jan 30 13:16:18.714075 systemd-resolved[1334]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jan 30 13:16:18.736125 containerd[1502]: time="2025-01-30T13:16:18.735991851Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-7fcdb87857-4hhkv,Uid:a5f02211-80ff-4b35-96b9-8800c90ef77e,Namespace:default,Attempt:0,} returns sandbox id \"aeda3119d8f7537fbd18885033a928cdc1822d7edb72e74c1f7c4aecef484599\"" Jan 30 13:16:18.737076 containerd[1502]: time="2025-01-30T13:16:18.737058440Z" level=info msg="PullImage \"ghcr.io/flatcar/nginx:latest\"" Jan 30 13:16:19.108036 kubelet[1833]: I0130 13:16:19.107982 1833 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 30 13:16:19.108412 kubelet[1833]: E0130 13:16:19.108391 1833 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 30 13:16:19.283638 kubelet[1833]: E0130 13:16:19.283586 1833 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 30 13:16:19.901959 kubelet[1833]: E0130 13:16:19.901926 1833 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 30 13:16:20.284263 kubelet[1833]: E0130 13:16:20.284223 1833 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 30 13:16:21.224167 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3216194858.mount: Deactivated successfully. 
Jan 30 13:16:21.285242 kubelet[1833]: E0130 13:16:21.285180 1833 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 30 13:16:22.285850 kubelet[1833]: E0130 13:16:22.285802 1833 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 30 13:16:22.311341 containerd[1502]: time="2025-01-30T13:16:22.311272495Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/nginx:latest\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 13:16:22.312002 containerd[1502]: time="2025-01-30T13:16:22.311941358Z" level=info msg="stop pulling image ghcr.io/flatcar/nginx:latest: active requests=0, bytes read=71015561" Jan 30 13:16:22.313046 containerd[1502]: time="2025-01-30T13:16:22.313014622Z" level=info msg="ImageCreate event name:\"sha256:0dcfd986e814f68db775fba6b61fbaec3761562dc2ae3043d38dbff123e1bb1e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 13:16:22.315745 containerd[1502]: time="2025-01-30T13:16:22.315709573Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/nginx@sha256:2ffeb5a7ca68f2017f0bc48251750a6e40fcd3c341b94a22fc7812dcabbb84db\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 13:16:22.316497 containerd[1502]: time="2025-01-30T13:16:22.316464501Z" level=info msg="Pulled image \"ghcr.io/flatcar/nginx:latest\" with image id \"sha256:0dcfd986e814f68db775fba6b61fbaec3761562dc2ae3043d38dbff123e1bb1e\", repo tag \"ghcr.io/flatcar/nginx:latest\", repo digest \"ghcr.io/flatcar/nginx@sha256:2ffeb5a7ca68f2017f0bc48251750a6e40fcd3c341b94a22fc7812dcabbb84db\", size \"71015439\" in 3.579361114s" Jan 30 13:16:22.316497 containerd[1502]: time="2025-01-30T13:16:22.316493696Z" level=info msg="PullImage \"ghcr.io/flatcar/nginx:latest\" returns image reference \"sha256:0dcfd986e814f68db775fba6b61fbaec3761562dc2ae3043d38dbff123e1bb1e\"" Jan 30 13:16:22.318322 containerd[1502]: 
time="2025-01-30T13:16:22.318279426Z" level=info msg="CreateContainer within sandbox \"aeda3119d8f7537fbd18885033a928cdc1822d7edb72e74c1f7c4aecef484599\" for container &ContainerMetadata{Name:nginx,Attempt:0,}" Jan 30 13:16:22.331075 containerd[1502]: time="2025-01-30T13:16:22.331032798Z" level=info msg="CreateContainer within sandbox \"aeda3119d8f7537fbd18885033a928cdc1822d7edb72e74c1f7c4aecef484599\" for &ContainerMetadata{Name:nginx,Attempt:0,} returns container id \"66573c856f91539e76bfdb074eebddd2b77070c2188ec36cafd008029c0cfc63\"" Jan 30 13:16:22.331573 containerd[1502]: time="2025-01-30T13:16:22.331544993Z" level=info msg="StartContainer for \"66573c856f91539e76bfdb074eebddd2b77070c2188ec36cafd008029c0cfc63\"" Jan 30 13:16:22.361999 systemd[1]: Started cri-containerd-66573c856f91539e76bfdb074eebddd2b77070c2188ec36cafd008029c0cfc63.scope - libcontainer container 66573c856f91539e76bfdb074eebddd2b77070c2188ec36cafd008029c0cfc63. Jan 30 13:16:22.386251 containerd[1502]: time="2025-01-30T13:16:22.386217135Z" level=info msg="StartContainer for \"66573c856f91539e76bfdb074eebddd2b77070c2188ec36cafd008029c0cfc63\" returns successfully" Jan 30 13:16:23.286600 kubelet[1833]: E0130 13:16:23.286533 1833 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 30 13:16:24.286901 kubelet[1833]: E0130 13:16:24.286838 1833 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 30 13:16:24.940080 kubelet[1833]: I0130 13:16:24.940022 1833 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="default/nginx-deployment-7fcdb87857-4hhkv" podStartSLOduration=8.359581576 podStartE2EDuration="11.940000506s" podCreationTimestamp="2025-01-30 13:16:13 +0000 UTC" firstStartedPulling="2025-01-30 13:16:18.736784918 +0000 UTC m=+22.876637552" lastFinishedPulling="2025-01-30 13:16:22.317203848 +0000 UTC m=+26.457056482" 
observedRunningTime="2025-01-30 13:16:22.918730786 +0000 UTC m=+27.058583420" watchObservedRunningTime="2025-01-30 13:16:24.940000506 +0000 UTC m=+29.079853140" Jan 30 13:16:24.946453 systemd[1]: Created slice kubepods-besteffort-podec302b5c_5ffc_4851_bf97_f0fc6548dc34.slice - libcontainer container kubepods-besteffort-podec302b5c_5ffc_4851_bf97_f0fc6548dc34.slice. Jan 30 13:16:25.059355 kubelet[1833]: I0130 13:16:25.059300 1833 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"data\" (UniqueName: \"kubernetes.io/empty-dir/ec302b5c-5ffc-4851-bf97-f0fc6548dc34-data\") pod \"nfs-server-provisioner-0\" (UID: \"ec302b5c-5ffc-4851-bf97-f0fc6548dc34\") " pod="default/nfs-server-provisioner-0" Jan 30 13:16:25.059355 kubelet[1833]: I0130 13:16:25.059351 1833 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wm5qj\" (UniqueName: \"kubernetes.io/projected/ec302b5c-5ffc-4851-bf97-f0fc6548dc34-kube-api-access-wm5qj\") pod \"nfs-server-provisioner-0\" (UID: \"ec302b5c-5ffc-4851-bf97-f0fc6548dc34\") " pod="default/nfs-server-provisioner-0" Jan 30 13:16:25.249641 containerd[1502]: time="2025-01-30T13:16:25.249535807Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nfs-server-provisioner-0,Uid:ec302b5c-5ffc-4851-bf97-f0fc6548dc34,Namespace:default,Attempt:0,}" Jan 30 13:16:25.278209 systemd-networkd[1416]: lxcf79b484e531e: Link UP Jan 30 13:16:25.287727 kubelet[1833]: E0130 13:16:25.287688 1833 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 30 13:16:25.288089 kernel: eth0: renamed from tmpcf157 Jan 30 13:16:25.301497 systemd-networkd[1416]: lxcf79b484e531e: Gained carrier Jan 30 13:16:25.525053 containerd[1502]: time="2025-01-30T13:16:25.524158932Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 30 13:16:25.525053 containerd[1502]: time="2025-01-30T13:16:25.524932340Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 30 13:16:25.525053 containerd[1502]: time="2025-01-30T13:16:25.524947029Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 13:16:25.525239 containerd[1502]: time="2025-01-30T13:16:25.525030287Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 13:16:25.549007 systemd[1]: Started cri-containerd-cf157b5e49f0fb958bbf5f6bac3e323f9552ff5e32d1205c2dd7d1573f053c59.scope - libcontainer container cf157b5e49f0fb958bbf5f6bac3e323f9552ff5e32d1205c2dd7d1573f053c59. Jan 30 13:16:25.560789 systemd-resolved[1334]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jan 30 13:16:25.584965 containerd[1502]: time="2025-01-30T13:16:25.584918940Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nfs-server-provisioner-0,Uid:ec302b5c-5ffc-4851-bf97-f0fc6548dc34,Namespace:default,Attempt:0,} returns sandbox id \"cf157b5e49f0fb958bbf5f6bac3e323f9552ff5e32d1205c2dd7d1573f053c59\"" Jan 30 13:16:25.586305 containerd[1502]: time="2025-01-30T13:16:25.586275456Z" level=info msg="PullImage \"registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8\"" Jan 30 13:16:26.287813 kubelet[1833]: E0130 13:16:26.287769 1833 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 30 13:16:26.632029 update_engine[1482]: I20250130 13:16:26.631829 1482 update_attempter.cc:509] Updating boot flags... 
Jan 30 13:16:26.795902 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 41 scanned by (udev-worker) (3037) Jan 30 13:16:26.845989 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 41 scanned by (udev-worker) (3037) Jan 30 13:16:26.994808 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 41 scanned by (udev-worker) (3037) Jan 30 13:16:27.288162 kubelet[1833]: E0130 13:16:27.288104 1833 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 30 13:16:27.322252 systemd-networkd[1416]: lxcf79b484e531e: Gained IPv6LL Jan 30 13:16:27.909580 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1757378885.mount: Deactivated successfully. Jan 30 13:16:28.288847 kubelet[1833]: E0130 13:16:28.288428 1833 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 30 13:16:29.289340 kubelet[1833]: E0130 13:16:29.289279 1833 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 30 13:16:30.290244 kubelet[1833]: E0130 13:16:30.290192 1833 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 30 13:16:30.344072 containerd[1502]: time="2025-01-30T13:16:30.344010554Z" level=info msg="ImageCreate event name:\"registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 13:16:30.344790 containerd[1502]: time="2025-01-30T13:16:30.344732871Z" level=info msg="stop pulling image registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8: active requests=0, bytes read=91039406" Jan 30 13:16:30.345919 containerd[1502]: time="2025-01-30T13:16:30.345885803Z" level=info msg="ImageCreate event name:\"sha256:fd0b16f70b66b72bcb2f91d556fa33eba02729c44ffc5f2c16130e7f9fbed3c4\" labels:{key:\"io.cri-containerd.image\" 
value:\"managed\"}" Jan 30 13:16:30.348487 containerd[1502]: time="2025-01-30T13:16:30.348454474Z" level=info msg="ImageCreate event name:\"registry.k8s.io/sig-storage/nfs-provisioner@sha256:c825f3d5e28bde099bd7a3daace28772d412c9157ad47fa752a9ad0baafc118d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 13:16:30.349391 containerd[1502]: time="2025-01-30T13:16:30.349344849Z" level=info msg="Pulled image \"registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8\" with image id \"sha256:fd0b16f70b66b72bcb2f91d556fa33eba02729c44ffc5f2c16130e7f9fbed3c4\", repo tag \"registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8\", repo digest \"registry.k8s.io/sig-storage/nfs-provisioner@sha256:c825f3d5e28bde099bd7a3daace28772d412c9157ad47fa752a9ad0baafc118d\", size \"91036984\" in 4.763030961s" Jan 30 13:16:30.349443 containerd[1502]: time="2025-01-30T13:16:30.349388702Z" level=info msg="PullImage \"registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8\" returns image reference \"sha256:fd0b16f70b66b72bcb2f91d556fa33eba02729c44ffc5f2c16130e7f9fbed3c4\"" Jan 30 13:16:30.351410 containerd[1502]: time="2025-01-30T13:16:30.351383346Z" level=info msg="CreateContainer within sandbox \"cf157b5e49f0fb958bbf5f6bac3e323f9552ff5e32d1205c2dd7d1573f053c59\" for container &ContainerMetadata{Name:nfs-server-provisioner,Attempt:0,}" Jan 30 13:16:30.363781 containerd[1502]: time="2025-01-30T13:16:30.363748827Z" level=info msg="CreateContainer within sandbox \"cf157b5e49f0fb958bbf5f6bac3e323f9552ff5e32d1205c2dd7d1573f053c59\" for &ContainerMetadata{Name:nfs-server-provisioner,Attempt:0,} returns container id \"0a6eab5190416d26f35f36a409cfb708dd4c41d81d6fe56d3d917946b90bef0b\"" Jan 30 13:16:30.364154 containerd[1502]: time="2025-01-30T13:16:30.364126572Z" level=info msg="StartContainer for \"0a6eab5190416d26f35f36a409cfb708dd4c41d81d6fe56d3d917946b90bef0b\"" Jan 30 13:16:30.429020 systemd[1]: Started cri-containerd-0a6eab5190416d26f35f36a409cfb708dd4c41d81d6fe56d3d917946b90bef0b.scope - 
libcontainer container 0a6eab5190416d26f35f36a409cfb708dd4c41d81d6fe56d3d917946b90bef0b. Jan 30 13:16:30.471128 containerd[1502]: time="2025-01-30T13:16:30.471077086Z" level=info msg="StartContainer for \"0a6eab5190416d26f35f36a409cfb708dd4c41d81d6fe56d3d917946b90bef0b\" returns successfully" Jan 30 13:16:30.934307 kubelet[1833]: I0130 13:16:30.934253 1833 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="default/nfs-server-provisioner-0" podStartSLOduration=2.16989032 podStartE2EDuration="6.934225732s" podCreationTimestamp="2025-01-30 13:16:24 +0000 UTC" firstStartedPulling="2025-01-30 13:16:25.585863894 +0000 UTC m=+29.725716518" lastFinishedPulling="2025-01-30 13:16:30.350199296 +0000 UTC m=+34.490051930" observedRunningTime="2025-01-30 13:16:30.934119943 +0000 UTC m=+35.073972577" watchObservedRunningTime="2025-01-30 13:16:30.934225732 +0000 UTC m=+35.074078366" Jan 30 13:16:31.290752 kubelet[1833]: E0130 13:16:31.290695 1833 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 30 13:16:32.291146 kubelet[1833]: E0130 13:16:32.291105 1833 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 30 13:16:33.292196 kubelet[1833]: E0130 13:16:33.292144 1833 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 30 13:16:34.292691 kubelet[1833]: E0130 13:16:34.292637 1833 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 30 13:16:35.293337 kubelet[1833]: E0130 13:16:35.293272 1833 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 30 13:16:36.269684 kubelet[1833]: E0130 13:16:36.269597 1833 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 30 
13:16:36.294326 kubelet[1833]: E0130 13:16:36.294291 1833 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 30 13:16:37.295480 kubelet[1833]: E0130 13:16:37.295416 1833 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 30 13:16:38.295813 kubelet[1833]: E0130 13:16:38.295756 1833 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 30 13:16:39.296686 kubelet[1833]: E0130 13:16:39.296645 1833 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 30 13:16:39.987737 systemd[1]: Created slice kubepods-besteffort-podb5f4618d_fae4_4569_833f_842caeacca28.slice - libcontainer container kubepods-besteffort-podb5f4618d_fae4_4569_833f_842caeacca28.slice. Jan 30 13:16:40.044259 kubelet[1833]: I0130 13:16:40.044223 1833 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9x7g2\" (UniqueName: \"kubernetes.io/projected/b5f4618d-fae4-4569-833f-842caeacca28-kube-api-access-9x7g2\") pod \"test-pod-1\" (UID: \"b5f4618d-fae4-4569-833f-842caeacca28\") " pod="default/test-pod-1" Jan 30 13:16:40.044326 kubelet[1833]: I0130 13:16:40.044267 1833 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-342a8f9e-5a0b-4c1f-856c-7eb6a4c90b5b\" (UniqueName: \"kubernetes.io/nfs/b5f4618d-fae4-4569-833f-842caeacca28-pvc-342a8f9e-5a0b-4c1f-856c-7eb6a4c90b5b\") pod \"test-pod-1\" (UID: \"b5f4618d-fae4-4569-833f-842caeacca28\") " pod="default/test-pod-1" Jan 30 13:16:40.172964 kernel: FS-Cache: Loaded Jan 30 13:16:40.238057 kernel: RPC: Registered named UNIX socket transport module. Jan 30 13:16:40.238153 kernel: RPC: Registered udp transport module. Jan 30 13:16:40.238170 kernel: RPC: Registered tcp transport module. 
Jan 30 13:16:40.238186 kernel: RPC: Registered tcp-with-tls transport module. Jan 30 13:16:40.239552 kernel: RPC: Registered tcp NFSv4.1 backchannel transport module. Jan 30 13:16:40.297589 kubelet[1833]: E0130 13:16:40.297534 1833 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 30 13:16:40.448349 kernel: NFS: Registering the id_resolver key type Jan 30 13:16:40.448409 kernel: Key type id_resolver registered Jan 30 13:16:40.448428 kernel: Key type id_legacy registered Jan 30 13:16:40.477868 nfsidmap[3233]: nss_getpwnam: name 'root@nfs-server-provisioner.default.svc.cluster.local' does not map into domain 'localdomain' Jan 30 13:16:40.482556 nfsidmap[3236]: nss_name_to_gid: name 'root@nfs-server-provisioner.default.svc.cluster.local' does not map into domain 'localdomain' Jan 30 13:16:40.590511 containerd[1502]: time="2025-01-30T13:16:40.590472215Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:test-pod-1,Uid:b5f4618d-fae4-4569-833f-842caeacca28,Namespace:default,Attempt:0,}" Jan 30 13:16:40.618650 systemd-networkd[1416]: lxc7d9301f61f6d: Link UP Jan 30 13:16:40.629914 kernel: eth0: renamed from tmp1eb8d Jan 30 13:16:40.639465 systemd-networkd[1416]: lxc7d9301f61f6d: Gained carrier Jan 30 13:16:40.818572 containerd[1502]: time="2025-01-30T13:16:40.817853870Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 30 13:16:40.818572 containerd[1502]: time="2025-01-30T13:16:40.818535996Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 30 13:16:40.818572 containerd[1502]: time="2025-01-30T13:16:40.818554141Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 13:16:40.818748 containerd[1502]: time="2025-01-30T13:16:40.818650832Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 13:16:40.834019 systemd[1]: Started cri-containerd-1eb8d4c5c13f5e9a90f1df9f8c3e42ec969bddcecfcdf126494608a1fc7cdb21.scope - libcontainer container 1eb8d4c5c13f5e9a90f1df9f8c3e42ec969bddcecfcdf126494608a1fc7cdb21. Jan 30 13:16:40.844755 systemd-resolved[1334]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jan 30 13:16:40.867227 containerd[1502]: time="2025-01-30T13:16:40.867183943Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:test-pod-1,Uid:b5f4618d-fae4-4569-833f-842caeacca28,Namespace:default,Attempt:0,} returns sandbox id \"1eb8d4c5c13f5e9a90f1df9f8c3e42ec969bddcecfcdf126494608a1fc7cdb21\"" Jan 30 13:16:40.868305 containerd[1502]: time="2025-01-30T13:16:40.868267215Z" level=info msg="PullImage \"ghcr.io/flatcar/nginx:latest\"" Jan 30 13:16:41.250128 containerd[1502]: time="2025-01-30T13:16:41.250013001Z" level=info msg="ImageUpdate event name:\"ghcr.io/flatcar/nginx:latest\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 13:16:41.250736 containerd[1502]: time="2025-01-30T13:16:41.250660340Z" level=info msg="stop pulling image ghcr.io/flatcar/nginx:latest: active requests=0, bytes read=61" Jan 30 13:16:41.253626 containerd[1502]: time="2025-01-30T13:16:41.253584639Z" level=info msg="Pulled image \"ghcr.io/flatcar/nginx:latest\" with image id \"sha256:0dcfd986e814f68db775fba6b61fbaec3761562dc2ae3043d38dbff123e1bb1e\", repo tag \"ghcr.io/flatcar/nginx:latest\", repo digest \"ghcr.io/flatcar/nginx@sha256:2ffeb5a7ca68f2017f0bc48251750a6e40fcd3c341b94a22fc7812dcabbb84db\", size \"71015439\" in 385.285244ms" Jan 30 13:16:41.253626 containerd[1502]: time="2025-01-30T13:16:41.253617250Z" level=info msg="PullImage 
\"ghcr.io/flatcar/nginx:latest\" returns image reference \"sha256:0dcfd986e814f68db775fba6b61fbaec3761562dc2ae3043d38dbff123e1bb1e\"" Jan 30 13:16:41.255627 containerd[1502]: time="2025-01-30T13:16:41.255596579Z" level=info msg="CreateContainer within sandbox \"1eb8d4c5c13f5e9a90f1df9f8c3e42ec969bddcecfcdf126494608a1fc7cdb21\" for container &ContainerMetadata{Name:test,Attempt:0,}" Jan 30 13:16:41.278050 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2772783157.mount: Deactivated successfully. Jan 30 13:16:41.282555 containerd[1502]: time="2025-01-30T13:16:41.282509387Z" level=info msg="CreateContainer within sandbox \"1eb8d4c5c13f5e9a90f1df9f8c3e42ec969bddcecfcdf126494608a1fc7cdb21\" for &ContainerMetadata{Name:test,Attempt:0,} returns container id \"314241bb2f499b6e447e6bbf9714a89516635a7ec5f027ada9be634efd6992e9\"" Jan 30 13:16:41.285128 containerd[1502]: time="2025-01-30T13:16:41.285070771Z" level=info msg="StartContainer for \"314241bb2f499b6e447e6bbf9714a89516635a7ec5f027ada9be634efd6992e9\"" Jan 30 13:16:41.298361 kubelet[1833]: E0130 13:16:41.298328 1833 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 30 13:16:41.324127 systemd[1]: Started cri-containerd-314241bb2f499b6e447e6bbf9714a89516635a7ec5f027ada9be634efd6992e9.scope - libcontainer container 314241bb2f499b6e447e6bbf9714a89516635a7ec5f027ada9be634efd6992e9. 
Jan 30 13:16:41.351893 containerd[1502]: time="2025-01-30T13:16:41.351814149Z" level=info msg="StartContainer for \"314241bb2f499b6e447e6bbf9714a89516635a7ec5f027ada9be634efd6992e9\" returns successfully" Jan 30 13:16:41.955647 kubelet[1833]: I0130 13:16:41.955594 1833 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="default/test-pod-1" podStartSLOduration=16.569317932 podStartE2EDuration="16.955577412s" podCreationTimestamp="2025-01-30 13:16:25 +0000 UTC" firstStartedPulling="2025-01-30 13:16:40.867991905 +0000 UTC m=+45.007844539" lastFinishedPulling="2025-01-30 13:16:41.254251385 +0000 UTC m=+45.394104019" observedRunningTime="2025-01-30 13:16:41.955483375 +0000 UTC m=+46.095336009" watchObservedRunningTime="2025-01-30 13:16:41.955577412 +0000 UTC m=+46.095430046" Jan 30 13:16:42.106015 systemd-networkd[1416]: lxc7d9301f61f6d: Gained IPv6LL Jan 30 13:16:42.299025 kubelet[1833]: E0130 13:16:42.298975 1833 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 30 13:16:43.300074 kubelet[1833]: E0130 13:16:43.300020 1833 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 30 13:16:44.300941 kubelet[1833]: E0130 13:16:44.300895 1833 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 30 13:16:45.301761 kubelet[1833]: E0130 13:16:45.301716 1833 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 30 13:16:46.302044 kubelet[1833]: E0130 13:16:46.301959 1833 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 30 13:16:47.302857 kubelet[1833]: E0130 13:16:47.302781 1833 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 30 13:16:47.496735 
containerd[1502]: time="2025-01-30T13:16:47.496678317Z" level=error msg="failed to reload cni configuration after receiving fs change event(REMOVE \"/etc/cni/net.d/05-cilium.conf\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Jan 30 13:16:47.504151 containerd[1502]: time="2025-01-30T13:16:47.504112731Z" level=info msg="StopContainer for \"90d3505a0140348a0a993b992b814298af59b85056f7c2027867e5c593a31a91\" with timeout 2 (s)" Jan 30 13:16:47.504379 containerd[1502]: time="2025-01-30T13:16:47.504356660Z" level=info msg="Stop container \"90d3505a0140348a0a993b992b814298af59b85056f7c2027867e5c593a31a91\" with signal terminated" Jan 30 13:16:47.510940 systemd-networkd[1416]: lxc_health: Link DOWN Jan 30 13:16:47.510958 systemd-networkd[1416]: lxc_health: Lost carrier Jan 30 13:16:47.547400 systemd[1]: cri-containerd-90d3505a0140348a0a993b992b814298af59b85056f7c2027867e5c593a31a91.scope: Deactivated successfully. Jan 30 13:16:47.547735 systemd[1]: cri-containerd-90d3505a0140348a0a993b992b814298af59b85056f7c2027867e5c593a31a91.scope: Consumed 6.635s CPU time. Jan 30 13:16:47.565564 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-90d3505a0140348a0a993b992b814298af59b85056f7c2027867e5c593a31a91-rootfs.mount: Deactivated successfully. 
Jan 30 13:16:47.577827 containerd[1502]: time="2025-01-30T13:16:47.577758512Z" level=info msg="shim disconnected" id=90d3505a0140348a0a993b992b814298af59b85056f7c2027867e5c593a31a91 namespace=k8s.io Jan 30 13:16:47.577827 containerd[1502]: time="2025-01-30T13:16:47.577818917Z" level=warning msg="cleaning up after shim disconnected" id=90d3505a0140348a0a993b992b814298af59b85056f7c2027867e5c593a31a91 namespace=k8s.io Jan 30 13:16:47.577827 containerd[1502]: time="2025-01-30T13:16:47.577830428Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 30 13:16:47.593680 containerd[1502]: time="2025-01-30T13:16:47.593633504Z" level=info msg="StopContainer for \"90d3505a0140348a0a993b992b814298af59b85056f7c2027867e5c593a31a91\" returns successfully" Jan 30 13:16:47.594340 containerd[1502]: time="2025-01-30T13:16:47.594290050Z" level=info msg="StopPodSandbox for \"f339756ff047e93634fcc57fad0bed4ebdeda902b1a27fc41573b2e14ac8cfc1\"" Jan 30 13:16:47.594461 containerd[1502]: time="2025-01-30T13:16:47.594338941Z" level=info msg="Container to stop \"bcf21372e2ab3b5c071f642ea37687ea440d34362d3b09e32b4f658fd4a69799\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jan 30 13:16:47.594461 containerd[1502]: time="2025-01-30T13:16:47.594372765Z" level=info msg="Container to stop \"2eb04ae7e6b266cd262c0b0bba5260edae6932a7ef3eb07b0c4772fc8e370547\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jan 30 13:16:47.594461 containerd[1502]: time="2025-01-30T13:16:47.594380700Z" level=info msg="Container to stop \"adac785fc8f1d0a93c71d90f9aa986814fe5a1a42c98927b053904cdfd9a5303\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jan 30 13:16:47.594461 containerd[1502]: time="2025-01-30T13:16:47.594388985Z" level=info msg="Container to stop \"bb6d5fedc7ef6886449691549663a0e57edbd23d0d2c5a638b34ccc189a7145d\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jan 30 13:16:47.594461 
containerd[1502]: time="2025-01-30T13:16:47.594397912Z" level=info msg="Container to stop \"90d3505a0140348a0a993b992b814298af59b85056f7c2027867e5c593a31a91\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jan 30 13:16:47.596633 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-f339756ff047e93634fcc57fad0bed4ebdeda902b1a27fc41573b2e14ac8cfc1-shm.mount: Deactivated successfully. Jan 30 13:16:47.600284 systemd[1]: cri-containerd-f339756ff047e93634fcc57fad0bed4ebdeda902b1a27fc41573b2e14ac8cfc1.scope: Deactivated successfully. Jan 30 13:16:47.617774 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-f339756ff047e93634fcc57fad0bed4ebdeda902b1a27fc41573b2e14ac8cfc1-rootfs.mount: Deactivated successfully. Jan 30 13:16:47.620644 containerd[1502]: time="2025-01-30T13:16:47.620583990Z" level=info msg="shim disconnected" id=f339756ff047e93634fcc57fad0bed4ebdeda902b1a27fc41573b2e14ac8cfc1 namespace=k8s.io Jan 30 13:16:47.620748 containerd[1502]: time="2025-01-30T13:16:47.620643031Z" level=warning msg="cleaning up after shim disconnected" id=f339756ff047e93634fcc57fad0bed4ebdeda902b1a27fc41573b2e14ac8cfc1 namespace=k8s.io Jan 30 13:16:47.620748 containerd[1502]: time="2025-01-30T13:16:47.620653701Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 30 13:16:47.633597 containerd[1502]: time="2025-01-30T13:16:47.633560339Z" level=info msg="TearDown network for sandbox \"f339756ff047e93634fcc57fad0bed4ebdeda902b1a27fc41573b2e14ac8cfc1\" successfully" Jan 30 13:16:47.633597 containerd[1502]: time="2025-01-30T13:16:47.633589333Z" level=info msg="StopPodSandbox for \"f339756ff047e93634fcc57fad0bed4ebdeda902b1a27fc41573b2e14ac8cfc1\" returns successfully" Jan 30 13:16:47.686155 kubelet[1833]: I0130 13:16:47.686119 1833 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/7cf339f9-7212-4de3-a030-6a1b4749162b-lib-modules\") pod 
\"7cf339f9-7212-4de3-a030-6a1b4749162b\" (UID: \"7cf339f9-7212-4de3-a030-6a1b4749162b\") " Jan 30 13:16:47.686155 kubelet[1833]: I0130 13:16:47.686152 1833 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/7cf339f9-7212-4de3-a030-6a1b4749162b-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "7cf339f9-7212-4de3-a030-6a1b4749162b" (UID: "7cf339f9-7212-4de3-a030-6a1b4749162b"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jan 30 13:16:47.686155 kubelet[1833]: I0130 13:16:47.686157 1833 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/7cf339f9-7212-4de3-a030-6a1b4749162b-bpf-maps\") pod \"7cf339f9-7212-4de3-a030-6a1b4749162b\" (UID: \"7cf339f9-7212-4de3-a030-6a1b4749162b\") " Jan 30 13:16:47.686155 kubelet[1833]: I0130 13:16:47.686173 1833 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/7cf339f9-7212-4de3-a030-6a1b4749162b-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "7cf339f9-7212-4de3-a030-6a1b4749162b" (UID: "7cf339f9-7212-4de3-a030-6a1b4749162b"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jan 30 13:16:47.686432 kubelet[1833]: I0130 13:16:47.686193 1833 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7c4tg\" (UniqueName: \"kubernetes.io/projected/7cf339f9-7212-4de3-a030-6a1b4749162b-kube-api-access-7c4tg\") pod \"7cf339f9-7212-4de3-a030-6a1b4749162b\" (UID: \"7cf339f9-7212-4de3-a030-6a1b4749162b\") " Jan 30 13:16:47.686432 kubelet[1833]: I0130 13:16:47.686225 1833 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/7cf339f9-7212-4de3-a030-6a1b4749162b-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "7cf339f9-7212-4de3-a030-6a1b4749162b" (UID: "7cf339f9-7212-4de3-a030-6a1b4749162b"). 
InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jan 30 13:16:47.686432 kubelet[1833]: I0130 13:16:47.686208 1833 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/7cf339f9-7212-4de3-a030-6a1b4749162b-etc-cni-netd\") pod \"7cf339f9-7212-4de3-a030-6a1b4749162b\" (UID: \"7cf339f9-7212-4de3-a030-6a1b4749162b\") " Jan 30 13:16:47.686432 kubelet[1833]: I0130 13:16:47.686251 1833 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/7cf339f9-7212-4de3-a030-6a1b4749162b-host-proc-sys-kernel\") pod \"7cf339f9-7212-4de3-a030-6a1b4749162b\" (UID: \"7cf339f9-7212-4de3-a030-6a1b4749162b\") " Jan 30 13:16:47.686432 kubelet[1833]: I0130 13:16:47.686265 1833 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/7cf339f9-7212-4de3-a030-6a1b4749162b-cilium-run\") pod \"7cf339f9-7212-4de3-a030-6a1b4749162b\" (UID: \"7cf339f9-7212-4de3-a030-6a1b4749162b\") " Jan 30 13:16:47.686432 kubelet[1833]: I0130 13:16:47.686281 1833 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/7cf339f9-7212-4de3-a030-6a1b4749162b-cilium-config-path\") pod \"7cf339f9-7212-4de3-a030-6a1b4749162b\" (UID: \"7cf339f9-7212-4de3-a030-6a1b4749162b\") " Jan 30 13:16:47.686574 kubelet[1833]: I0130 13:16:47.686305 1833 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/7cf339f9-7212-4de3-a030-6a1b4749162b-xtables-lock\") pod \"7cf339f9-7212-4de3-a030-6a1b4749162b\" (UID: \"7cf339f9-7212-4de3-a030-6a1b4749162b\") " Jan 30 13:16:47.686574 kubelet[1833]: I0130 13:16:47.686322 1833 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume 
\"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/7cf339f9-7212-4de3-a030-6a1b4749162b-clustermesh-secrets\") pod \"7cf339f9-7212-4de3-a030-6a1b4749162b\" (UID: \"7cf339f9-7212-4de3-a030-6a1b4749162b\") " Jan 30 13:16:47.686574 kubelet[1833]: I0130 13:16:47.686330 1833 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/7cf339f9-7212-4de3-a030-6a1b4749162b-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "7cf339f9-7212-4de3-a030-6a1b4749162b" (UID: "7cf339f9-7212-4de3-a030-6a1b4749162b"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jan 30 13:16:47.686574 kubelet[1833]: I0130 13:16:47.686334 1833 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/7cf339f9-7212-4de3-a030-6a1b4749162b-host-proc-sys-net\") pod \"7cf339f9-7212-4de3-a030-6a1b4749162b\" (UID: \"7cf339f9-7212-4de3-a030-6a1b4749162b\") " Jan 30 13:16:47.686574 kubelet[1833]: I0130 13:16:47.686346 1833 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/7cf339f9-7212-4de3-a030-6a1b4749162b-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "7cf339f9-7212-4de3-a030-6a1b4749162b" (UID: "7cf339f9-7212-4de3-a030-6a1b4749162b"). InnerVolumeSpecName "host-proc-sys-net". 
PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jan 30 13:16:47.686688 kubelet[1833]: I0130 13:16:47.686370 1833 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/7cf339f9-7212-4de3-a030-6a1b4749162b-cni-path\") pod \"7cf339f9-7212-4de3-a030-6a1b4749162b\" (UID: \"7cf339f9-7212-4de3-a030-6a1b4749162b\") " Jan 30 13:16:47.686688 kubelet[1833]: I0130 13:16:47.686397 1833 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/7cf339f9-7212-4de3-a030-6a1b4749162b-cilium-cgroup\") pod \"7cf339f9-7212-4de3-a030-6a1b4749162b\" (UID: \"7cf339f9-7212-4de3-a030-6a1b4749162b\") " Jan 30 13:16:47.686688 kubelet[1833]: I0130 13:16:47.686411 1833 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/7cf339f9-7212-4de3-a030-6a1b4749162b-hostproc\") pod \"7cf339f9-7212-4de3-a030-6a1b4749162b\" (UID: \"7cf339f9-7212-4de3-a030-6a1b4749162b\") " Jan 30 13:16:47.686688 kubelet[1833]: I0130 13:16:47.686427 1833 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/7cf339f9-7212-4de3-a030-6a1b4749162b-hubble-tls\") pod \"7cf339f9-7212-4de3-a030-6a1b4749162b\" (UID: \"7cf339f9-7212-4de3-a030-6a1b4749162b\") " Jan 30 13:16:47.686688 kubelet[1833]: I0130 13:16:47.686451 1833 reconciler_common.go:299] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/7cf339f9-7212-4de3-a030-6a1b4749162b-etc-cni-netd\") on node \"10.0.0.151\" DevicePath \"\"" Jan 30 13:16:47.686688 kubelet[1833]: I0130 13:16:47.686460 1833 reconciler_common.go:299] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/7cf339f9-7212-4de3-a030-6a1b4749162b-cilium-run\") on node \"10.0.0.151\" DevicePath \"\"" Jan 30 13:16:47.686688 kubelet[1833]: 
I0130 13:16:47.686469 1833 reconciler_common.go:299] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/7cf339f9-7212-4de3-a030-6a1b4749162b-host-proc-sys-net\") on node \"10.0.0.151\" DevicePath \"\"" Jan 30 13:16:47.686856 kubelet[1833]: I0130 13:16:47.686478 1833 reconciler_common.go:299] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/7cf339f9-7212-4de3-a030-6a1b4749162b-lib-modules\") on node \"10.0.0.151\" DevicePath \"\"" Jan 30 13:16:47.686856 kubelet[1833]: I0130 13:16:47.686487 1833 reconciler_common.go:299] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/7cf339f9-7212-4de3-a030-6a1b4749162b-bpf-maps\") on node \"10.0.0.151\" DevicePath \"\"" Jan 30 13:16:47.686856 kubelet[1833]: I0130 13:16:47.686702 1833 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/7cf339f9-7212-4de3-a030-6a1b4749162b-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "7cf339f9-7212-4de3-a030-6a1b4749162b" (UID: "7cf339f9-7212-4de3-a030-6a1b4749162b"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jan 30 13:16:47.686856 kubelet[1833]: I0130 13:16:47.686723 1833 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/7cf339f9-7212-4de3-a030-6a1b4749162b-cni-path" (OuterVolumeSpecName: "cni-path") pod "7cf339f9-7212-4de3-a030-6a1b4749162b" (UID: "7cf339f9-7212-4de3-a030-6a1b4749162b"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jan 30 13:16:47.686856 kubelet[1833]: I0130 13:16:47.686736 1833 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/7cf339f9-7212-4de3-a030-6a1b4749162b-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "7cf339f9-7212-4de3-a030-6a1b4749162b" (UID: "7cf339f9-7212-4de3-a030-6a1b4749162b"). 
InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jan 30 13:16:47.687002 kubelet[1833]: I0130 13:16:47.686751 1833 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/7cf339f9-7212-4de3-a030-6a1b4749162b-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "7cf339f9-7212-4de3-a030-6a1b4749162b" (UID: "7cf339f9-7212-4de3-a030-6a1b4749162b"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jan 30 13:16:47.689330 kubelet[1833]: I0130 13:16:47.689035 1833 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/7cf339f9-7212-4de3-a030-6a1b4749162b-hostproc" (OuterVolumeSpecName: "hostproc") pod "7cf339f9-7212-4de3-a030-6a1b4749162b" (UID: "7cf339f9-7212-4de3-a030-6a1b4749162b"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jan 30 13:16:47.689380 kubelet[1833]: I0130 13:16:47.689339 1833 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7cf339f9-7212-4de3-a030-6a1b4749162b-kube-api-access-7c4tg" (OuterVolumeSpecName: "kube-api-access-7c4tg") pod "7cf339f9-7212-4de3-a030-6a1b4749162b" (UID: "7cf339f9-7212-4de3-a030-6a1b4749162b"). InnerVolumeSpecName "kube-api-access-7c4tg". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 30 13:16:47.690311 kubelet[1833]: I0130 13:16:47.690244 1833 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7cf339f9-7212-4de3-a030-6a1b4749162b-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "7cf339f9-7212-4de3-a030-6a1b4749162b" (UID: "7cf339f9-7212-4de3-a030-6a1b4749162b"). InnerVolumeSpecName "cilium-config-path". 
PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 30 13:16:47.690405 systemd[1]: var-lib-kubelet-pods-7cf339f9\x2d7212\x2d4de3\x2da030\x2d6a1b4749162b-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2d7c4tg.mount: Deactivated successfully. Jan 30 13:16:47.690521 systemd[1]: var-lib-kubelet-pods-7cf339f9\x2d7212\x2d4de3\x2da030\x2d6a1b4749162b-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. Jan 30 13:16:47.691240 kubelet[1833]: I0130 13:16:47.691065 1833 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7cf339f9-7212-4de3-a030-6a1b4749162b-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "7cf339f9-7212-4de3-a030-6a1b4749162b" (UID: "7cf339f9-7212-4de3-a030-6a1b4749162b"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 30 13:16:47.691240 kubelet[1833]: I0130 13:16:47.691194 1833 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7cf339f9-7212-4de3-a030-6a1b4749162b-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "7cf339f9-7212-4de3-a030-6a1b4749162b" (UID: "7cf339f9-7212-4de3-a030-6a1b4749162b"). InnerVolumeSpecName "clustermesh-secrets". 
PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 30 13:16:47.787019 kubelet[1833]: I0130 13:16:47.786979 1833 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-7c4tg\" (UniqueName: \"kubernetes.io/projected/7cf339f9-7212-4de3-a030-6a1b4749162b-kube-api-access-7c4tg\") on node \"10.0.0.151\" DevicePath \"\"" Jan 30 13:16:47.787019 kubelet[1833]: I0130 13:16:47.787013 1833 reconciler_common.go:299] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/7cf339f9-7212-4de3-a030-6a1b4749162b-host-proc-sys-kernel\") on node \"10.0.0.151\" DevicePath \"\"" Jan 30 13:16:47.787136 kubelet[1833]: I0130 13:16:47.787030 1833 reconciler_common.go:299] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/7cf339f9-7212-4de3-a030-6a1b4749162b-cilium-config-path\") on node \"10.0.0.151\" DevicePath \"\"" Jan 30 13:16:47.787136 kubelet[1833]: I0130 13:16:47.787047 1833 reconciler_common.go:299] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/7cf339f9-7212-4de3-a030-6a1b4749162b-xtables-lock\") on node \"10.0.0.151\" DevicePath \"\"" Jan 30 13:16:47.787136 kubelet[1833]: I0130 13:16:47.787059 1833 reconciler_common.go:299] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/7cf339f9-7212-4de3-a030-6a1b4749162b-hubble-tls\") on node \"10.0.0.151\" DevicePath \"\"" Jan 30 13:16:47.787136 kubelet[1833]: I0130 13:16:47.787067 1833 reconciler_common.go:299] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/7cf339f9-7212-4de3-a030-6a1b4749162b-clustermesh-secrets\") on node \"10.0.0.151\" DevicePath \"\"" Jan 30 13:16:47.787136 kubelet[1833]: I0130 13:16:47.787075 1833 reconciler_common.go:299] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/7cf339f9-7212-4de3-a030-6a1b4749162b-cni-path\") on node \"10.0.0.151\" DevicePath \"\"" Jan 30 13:16:47.787136 
kubelet[1833]: I0130 13:16:47.787083 1833 reconciler_common.go:299] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/7cf339f9-7212-4de3-a030-6a1b4749162b-cilium-cgroup\") on node \"10.0.0.151\" DevicePath \"\"" Jan 30 13:16:47.787136 kubelet[1833]: I0130 13:16:47.787091 1833 reconciler_common.go:299] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/7cf339f9-7212-4de3-a030-6a1b4749162b-hostproc\") on node \"10.0.0.151\" DevicePath \"\"" Jan 30 13:16:47.958335 kubelet[1833]: I0130 13:16:47.958067 1833 scope.go:117] "RemoveContainer" containerID="90d3505a0140348a0a993b992b814298af59b85056f7c2027867e5c593a31a91" Jan 30 13:16:47.964681 systemd[1]: Removed slice kubepods-burstable-pod7cf339f9_7212_4de3_a030_6a1b4749162b.slice - libcontainer container kubepods-burstable-pod7cf339f9_7212_4de3_a030_6a1b4749162b.slice. Jan 30 13:16:47.964796 systemd[1]: kubepods-burstable-pod7cf339f9_7212_4de3_a030_6a1b4749162b.slice: Consumed 6.729s CPU time. 
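The `systemd[1]` mount-unit names just above (e.g. `var-lib-kubelet-pods-7cf339f9\x2d7212\x2d...-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2d7c4tg.mount`) are the pod volume paths run through systemd's unit-name escaping: `/` becomes `-`, and bytes outside `[a-zA-Z0-9:_.]` (including literal `-` and `~`) are emitted as `\xNN`. A minimal sketch of that escaping, approximating `systemd-escape --path` (the real implementation is byte-oriented and has extra rules for leading dots):

```python
def systemd_escape_path(path: str) -> str:
    """Approximate systemd-escape --path: strip surrounding '/',
    map '/' -> '-', and escape other non-[a-zA-Z0-9:_.] characters
    as \\xNN (lowercase hex). A sketch, not the full algorithm."""
    p = path.strip("/")
    out = []
    for i, ch in enumerate(p):
        if ch == "/":
            out.append("-")
        elif ch.isascii() and (ch.isalnum() or ch in "_:" or (ch == "." and i > 0)):
            out.append(ch)
        else:
            out.append("\\x%02x" % ord(ch))
    return "".join(out)

# Reconstruct the kube-api-access mount unit seen in the journal:
volume_path = ("/var/lib/kubelet/pods/7cf339f9-7212-4de3-a030-6a1b4749162b"
               "/volumes/kubernetes.io~projected/kube-api-access-7c4tg")
unit = systemd_escape_path(volume_path) + ".mount"
```

Running this on the volume path reproduces the unit name systemd logs when it deactivates the mount, which is handy when correlating kubelet volume events with journal entries.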
Jan 30 13:16:47.973251 containerd[1502]: time="2025-01-30T13:16:47.966682268Z" level=info msg="RemoveContainer for \"90d3505a0140348a0a993b992b814298af59b85056f7c2027867e5c593a31a91\"" Jan 30 13:16:47.980725 containerd[1502]: time="2025-01-30T13:16:47.979591160Z" level=info msg="RemoveContainer for \"90d3505a0140348a0a993b992b814298af59b85056f7c2027867e5c593a31a91\" returns successfully" Jan 30 13:16:47.980940 kubelet[1833]: I0130 13:16:47.980065 1833 scope.go:117] "RemoveContainer" containerID="bb6d5fedc7ef6886449691549663a0e57edbd23d0d2c5a638b34ccc189a7145d" Jan 30 13:16:47.981718 containerd[1502]: time="2025-01-30T13:16:47.981231135Z" level=info msg="RemoveContainer for \"bb6d5fedc7ef6886449691549663a0e57edbd23d0d2c5a638b34ccc189a7145d\"" Jan 30 13:16:48.002589 containerd[1502]: time="2025-01-30T13:16:48.002502296Z" level=info msg="RemoveContainer for \"bb6d5fedc7ef6886449691549663a0e57edbd23d0d2c5a638b34ccc189a7145d\" returns successfully" Jan 30 13:16:48.002946 kubelet[1833]: I0130 13:16:48.002906 1833 scope.go:117] "RemoveContainer" containerID="adac785fc8f1d0a93c71d90f9aa986814fe5a1a42c98927b053904cdfd9a5303" Jan 30 13:16:48.010136 containerd[1502]: time="2025-01-30T13:16:48.010018682Z" level=info msg="RemoveContainer for \"adac785fc8f1d0a93c71d90f9aa986814fe5a1a42c98927b053904cdfd9a5303\"" Jan 30 13:16:48.020845 containerd[1502]: time="2025-01-30T13:16:48.020773759Z" level=info msg="RemoveContainer for \"adac785fc8f1d0a93c71d90f9aa986814fe5a1a42c98927b053904cdfd9a5303\" returns successfully" Jan 30 13:16:48.025040 kubelet[1833]: I0130 13:16:48.024975 1833 scope.go:117] "RemoveContainer" containerID="2eb04ae7e6b266cd262c0b0bba5260edae6932a7ef3eb07b0c4772fc8e370547" Jan 30 13:16:48.026580 containerd[1502]: time="2025-01-30T13:16:48.026529444Z" level=info msg="RemoveContainer for \"2eb04ae7e6b266cd262c0b0bba5260edae6932a7ef3eb07b0c4772fc8e370547\"" Jan 30 13:16:48.034924 containerd[1502]: time="2025-01-30T13:16:48.034813904Z" level=info msg="RemoveContainer 
for \"2eb04ae7e6b266cd262c0b0bba5260edae6932a7ef3eb07b0c4772fc8e370547\" returns successfully" Jan 30 13:16:48.035254 kubelet[1833]: I0130 13:16:48.035203 1833 scope.go:117] "RemoveContainer" containerID="bcf21372e2ab3b5c071f642ea37687ea440d34362d3b09e32b4f658fd4a69799" Jan 30 13:16:48.037472 containerd[1502]: time="2025-01-30T13:16:48.037077582Z" level=info msg="RemoveContainer for \"bcf21372e2ab3b5c071f642ea37687ea440d34362d3b09e32b4f658fd4a69799\"" Jan 30 13:16:48.042345 containerd[1502]: time="2025-01-30T13:16:48.042282711Z" level=info msg="RemoveContainer for \"bcf21372e2ab3b5c071f642ea37687ea440d34362d3b09e32b4f658fd4a69799\" returns successfully" Jan 30 13:16:48.042657 kubelet[1833]: I0130 13:16:48.042621 1833 scope.go:117] "RemoveContainer" containerID="90d3505a0140348a0a993b992b814298af59b85056f7c2027867e5c593a31a91" Jan 30 13:16:48.043068 containerd[1502]: time="2025-01-30T13:16:48.043006082Z" level=error msg="ContainerStatus for \"90d3505a0140348a0a993b992b814298af59b85056f7c2027867e5c593a31a91\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"90d3505a0140348a0a993b992b814298af59b85056f7c2027867e5c593a31a91\": not found" Jan 30 13:16:48.046989 kubelet[1833]: E0130 13:16:48.043237 1833 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"90d3505a0140348a0a993b992b814298af59b85056f7c2027867e5c593a31a91\": not found" containerID="90d3505a0140348a0a993b992b814298af59b85056f7c2027867e5c593a31a91" Jan 30 13:16:48.046989 kubelet[1833]: I0130 13:16:48.043354 1833 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"90d3505a0140348a0a993b992b814298af59b85056f7c2027867e5c593a31a91"} err="failed to get container status \"90d3505a0140348a0a993b992b814298af59b85056f7c2027867e5c593a31a91\": rpc error: code = NotFound desc = an error occurred when try to find container 
\"90d3505a0140348a0a993b992b814298af59b85056f7c2027867e5c593a31a91\": not found" Jan 30 13:16:48.046989 kubelet[1833]: I0130 13:16:48.043408 1833 scope.go:117] "RemoveContainer" containerID="bb6d5fedc7ef6886449691549663a0e57edbd23d0d2c5a638b34ccc189a7145d" Jan 30 13:16:48.046989 kubelet[1833]: E0130 13:16:48.043743 1833 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"bb6d5fedc7ef6886449691549663a0e57edbd23d0d2c5a638b34ccc189a7145d\": not found" containerID="bb6d5fedc7ef6886449691549663a0e57edbd23d0d2c5a638b34ccc189a7145d" Jan 30 13:16:48.046989 kubelet[1833]: I0130 13:16:48.043765 1833 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"bb6d5fedc7ef6886449691549663a0e57edbd23d0d2c5a638b34ccc189a7145d"} err="failed to get container status \"bb6d5fedc7ef6886449691549663a0e57edbd23d0d2c5a638b34ccc189a7145d\": rpc error: code = NotFound desc = an error occurred when try to find container \"bb6d5fedc7ef6886449691549663a0e57edbd23d0d2c5a638b34ccc189a7145d\": not found" Jan 30 13:16:48.046989 kubelet[1833]: I0130 13:16:48.043782 1833 scope.go:117] "RemoveContainer" containerID="adac785fc8f1d0a93c71d90f9aa986814fe5a1a42c98927b053904cdfd9a5303" Jan 30 13:16:48.047246 containerd[1502]: time="2025-01-30T13:16:48.043617030Z" level=error msg="ContainerStatus for \"bb6d5fedc7ef6886449691549663a0e57edbd23d0d2c5a638b34ccc189a7145d\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"bb6d5fedc7ef6886449691549663a0e57edbd23d0d2c5a638b34ccc189a7145d\": not found" Jan 30 13:16:48.047246 containerd[1502]: time="2025-01-30T13:16:48.043945599Z" level=error msg="ContainerStatus for \"adac785fc8f1d0a93c71d90f9aa986814fe5a1a42c98927b053904cdfd9a5303\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container 
\"adac785fc8f1d0a93c71d90f9aa986814fe5a1a42c98927b053904cdfd9a5303\": not found" Jan 30 13:16:48.047246 containerd[1502]: time="2025-01-30T13:16:48.044311817Z" level=error msg="ContainerStatus for \"2eb04ae7e6b266cd262c0b0bba5260edae6932a7ef3eb07b0c4772fc8e370547\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"2eb04ae7e6b266cd262c0b0bba5260edae6932a7ef3eb07b0c4772fc8e370547\": not found" Jan 30 13:16:48.047246 containerd[1502]: time="2025-01-30T13:16:48.044556377Z" level=error msg="ContainerStatus for \"bcf21372e2ab3b5c071f642ea37687ea440d34362d3b09e32b4f658fd4a69799\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"bcf21372e2ab3b5c071f642ea37687ea440d34362d3b09e32b4f658fd4a69799\": not found" Jan 30 13:16:48.047380 kubelet[1833]: E0130 13:16:48.044069 1833 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"adac785fc8f1d0a93c71d90f9aa986814fe5a1a42c98927b053904cdfd9a5303\": not found" containerID="adac785fc8f1d0a93c71d90f9aa986814fe5a1a42c98927b053904cdfd9a5303" Jan 30 13:16:48.047380 kubelet[1833]: I0130 13:16:48.044145 1833 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"adac785fc8f1d0a93c71d90f9aa986814fe5a1a42c98927b053904cdfd9a5303"} err="failed to get container status \"adac785fc8f1d0a93c71d90f9aa986814fe5a1a42c98927b053904cdfd9a5303\": rpc error: code = NotFound desc = an error occurred when try to find container \"adac785fc8f1d0a93c71d90f9aa986814fe5a1a42c98927b053904cdfd9a5303\": not found" Jan 30 13:16:48.047380 kubelet[1833]: I0130 13:16:48.044161 1833 scope.go:117] "RemoveContainer" containerID="2eb04ae7e6b266cd262c0b0bba5260edae6932a7ef3eb07b0c4772fc8e370547" Jan 30 13:16:48.047380 kubelet[1833]: E0130 13:16:48.044403 1833 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound 
desc = an error occurred when try to find container \"2eb04ae7e6b266cd262c0b0bba5260edae6932a7ef3eb07b0c4772fc8e370547\": not found" containerID="2eb04ae7e6b266cd262c0b0bba5260edae6932a7ef3eb07b0c4772fc8e370547" Jan 30 13:16:48.047380 kubelet[1833]: I0130 13:16:48.044423 1833 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"2eb04ae7e6b266cd262c0b0bba5260edae6932a7ef3eb07b0c4772fc8e370547"} err="failed to get container status \"2eb04ae7e6b266cd262c0b0bba5260edae6932a7ef3eb07b0c4772fc8e370547\": rpc error: code = NotFound desc = an error occurred when try to find container \"2eb04ae7e6b266cd262c0b0bba5260edae6932a7ef3eb07b0c4772fc8e370547\": not found" Jan 30 13:16:48.047380 kubelet[1833]: I0130 13:16:48.044443 1833 scope.go:117] "RemoveContainer" containerID="bcf21372e2ab3b5c071f642ea37687ea440d34362d3b09e32b4f658fd4a69799" Jan 30 13:16:48.047553 kubelet[1833]: E0130 13:16:48.044658 1833 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"bcf21372e2ab3b5c071f642ea37687ea440d34362d3b09e32b4f658fd4a69799\": not found" containerID="bcf21372e2ab3b5c071f642ea37687ea440d34362d3b09e32b4f658fd4a69799" Jan 30 13:16:48.047553 kubelet[1833]: I0130 13:16:48.044708 1833 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"bcf21372e2ab3b5c071f642ea37687ea440d34362d3b09e32b4f658fd4a69799"} err="failed to get container status \"bcf21372e2ab3b5c071f642ea37687ea440d34362d3b09e32b4f658fd4a69799\": rpc error: code = NotFound desc = an error occurred when try to find container \"bcf21372e2ab3b5c071f642ea37687ea440d34362d3b09e32b4f658fd4a69799\": not found" Jan 30 13:16:48.303982 kubelet[1833]: E0130 13:16:48.303914 1833 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 30 13:16:48.480903 systemd[1]: 
var-lib-kubelet-pods-7cf339f9\x2d7212\x2d4de3\x2da030\x2d6a1b4749162b-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. Jan 30 13:16:48.863509 kubelet[1833]: I0130 13:16:48.863454 1833 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7cf339f9-7212-4de3-a030-6a1b4749162b" path="/var/lib/kubelet/pods/7cf339f9-7212-4de3-a030-6a1b4749162b/volumes" Jan 30 13:16:49.305081 kubelet[1833]: E0130 13:16:49.305004 1833 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 30 13:16:50.095052 kubelet[1833]: I0130 13:16:50.094997 1833 memory_manager.go:355] "RemoveStaleState removing state" podUID="7cf339f9-7212-4de3-a030-6a1b4749162b" containerName="cilium-agent" Jan 30 13:16:50.098093 kubelet[1833]: I0130 13:16:50.098053 1833 status_manager.go:890] "Failed to get status for pod" podUID="adacaf38-8946-4ae4-ab44-7ff00b98204f" pod="kube-system/cilium-operator-6c4d7847fc-vdvpb" err="pods \"cilium-operator-6c4d7847fc-vdvpb\" is forbidden: User \"system:node:10.0.0.151\" cannot get resource \"pods\" in API group \"\" in the namespace \"kube-system\": no relationship found between node '10.0.0.151' and this object" Jan 30 13:16:50.100383 systemd[1]: Created slice kubepods-besteffort-podadacaf38_8946_4ae4_ab44_7ff00b98204f.slice - libcontainer container kubepods-besteffort-podadacaf38_8946_4ae4_ab44_7ff00b98204f.slice. 
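The `kubelet[1833]` entries throughout this log use klog's standard header, `Lmmdd hh:mm:ss.uuuuuu threadid file:line] msg`, where `L` is the severity (`I`/`W`/`E`/`F`). A small parser for that prefix (the regex is an illustrative assumption, not code from kubelet):

```python
import re

# klog header: severity letter, MMDD, time, thread id, file:line, "] ", message
KLOG_RE = re.compile(
    r"^(?P<sev>[IWEF])(?P<month>\d{2})(?P<day>\d{2}) "
    r"(?P<time>\d{2}:\d{2}:\d{2}\.\d{6}) +"
    r"(?P<tid>\d+) (?P<file>[^:]+):(?P<line>\d+)\] (?P<msg>.*)$"
)

def parse_klog(line: str) -> dict:
    """Split a klog-formatted line into its header fields, or {} on no match."""
    m = KLOG_RE.match(line)
    return m.groupdict() if m else {}

rec = parse_klog('E0130 13:16:48.303914 1833 file_linux.go:61] "Unable to read config path"')
```

For the repeated `file_linux.go:61` errors above, this yields severity `E`, source `file_linux.go:61`, and the quoted message, which makes it easy to filter the journal for error-level kubelet events.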
Jan 30 13:16:50.101398 kubelet[1833]: W0130 13:16:50.100672 1833 reflector.go:569] object-"kube-system"/"cilium-config": failed to list *v1.ConfigMap: configmaps "cilium-config" is forbidden: User "system:node:10.0.0.151" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node '10.0.0.151' and this object Jan 30 13:16:50.101398 kubelet[1833]: E0130 13:16:50.100711 1833 reflector.go:166] "Unhandled Error" err="object-\"kube-system\"/\"cilium-config\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"cilium-config\" is forbidden: User \"system:node:10.0.0.151\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\": no relationship found between node '10.0.0.151' and this object" logger="UnhandledError" Jan 30 13:16:50.107978 systemd[1]: Created slice kubepods-burstable-podf4917941_8c32_4dee_ae63_66a81baab5fe.slice - libcontainer container kubepods-burstable-podf4917941_8c32_4dee_ae63_66a81baab5fe.slice. 
Jan 30 13:16:50.203400 kubelet[1833]: I0130 13:16:50.203368 1833 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/f4917941-8c32-4dee-ae63-66a81baab5fe-hostproc\") pod \"cilium-zd6zc\" (UID: \"f4917941-8c32-4dee-ae63-66a81baab5fe\") " pod="kube-system/cilium-zd6zc" Jan 30 13:16:50.203400 kubelet[1833]: I0130 13:16:50.203394 1833 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/f4917941-8c32-4dee-ae63-66a81baab5fe-cilium-cgroup\") pod \"cilium-zd6zc\" (UID: \"f4917941-8c32-4dee-ae63-66a81baab5fe\") " pod="kube-system/cilium-zd6zc" Jan 30 13:16:50.203400 kubelet[1833]: I0130 13:16:50.203412 1833 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/f4917941-8c32-4dee-ae63-66a81baab5fe-cni-path\") pod \"cilium-zd6zc\" (UID: \"f4917941-8c32-4dee-ae63-66a81baab5fe\") " pod="kube-system/cilium-zd6zc" Jan 30 13:16:50.203400 kubelet[1833]: I0130 13:16:50.203426 1833 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/f4917941-8c32-4dee-ae63-66a81baab5fe-etc-cni-netd\") pod \"cilium-zd6zc\" (UID: \"f4917941-8c32-4dee-ae63-66a81baab5fe\") " pod="kube-system/cilium-zd6zc" Jan 30 13:16:50.203690 kubelet[1833]: I0130 13:16:50.203440 1833 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/f4917941-8c32-4dee-ae63-66a81baab5fe-lib-modules\") pod \"cilium-zd6zc\" (UID: \"f4917941-8c32-4dee-ae63-66a81baab5fe\") " pod="kube-system/cilium-zd6zc" Jan 30 13:16:50.203690 kubelet[1833]: I0130 13:16:50.203454 1833 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/f4917941-8c32-4dee-ae63-66a81baab5fe-clustermesh-secrets\") pod \"cilium-zd6zc\" (UID: \"f4917941-8c32-4dee-ae63-66a81baab5fe\") " pod="kube-system/cilium-zd6zc" Jan 30 13:16:50.203690 kubelet[1833]: I0130 13:16:50.203468 1833 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/f4917941-8c32-4dee-ae63-66a81baab5fe-cilium-ipsec-secrets\") pod \"cilium-zd6zc\" (UID: \"f4917941-8c32-4dee-ae63-66a81baab5fe\") " pod="kube-system/cilium-zd6zc" Jan 30 13:16:50.203690 kubelet[1833]: I0130 13:16:50.203516 1833 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/f4917941-8c32-4dee-ae63-66a81baab5fe-host-proc-sys-kernel\") pod \"cilium-zd6zc\" (UID: \"f4917941-8c32-4dee-ae63-66a81baab5fe\") " pod="kube-system/cilium-zd6zc" Jan 30 13:16:50.203690 kubelet[1833]: I0130 13:16:50.203567 1833 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/f4917941-8c32-4dee-ae63-66a81baab5fe-hubble-tls\") pod \"cilium-zd6zc\" (UID: \"f4917941-8c32-4dee-ae63-66a81baab5fe\") " pod="kube-system/cilium-zd6zc" Jan 30 13:16:50.203690 kubelet[1833]: I0130 13:16:50.203590 1833 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/f4917941-8c32-4dee-ae63-66a81baab5fe-cilium-run\") pod \"cilium-zd6zc\" (UID: \"f4917941-8c32-4dee-ae63-66a81baab5fe\") " pod="kube-system/cilium-zd6zc" Jan 30 13:16:50.203910 kubelet[1833]: I0130 13:16:50.203609 1833 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8gq9z\" (UniqueName: 
\"kubernetes.io/projected/f4917941-8c32-4dee-ae63-66a81baab5fe-kube-api-access-8gq9z\") pod \"cilium-zd6zc\" (UID: \"f4917941-8c32-4dee-ae63-66a81baab5fe\") " pod="kube-system/cilium-zd6zc" Jan 30 13:16:50.203910 kubelet[1833]: I0130 13:16:50.203631 1833 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j2qb5\" (UniqueName: \"kubernetes.io/projected/adacaf38-8946-4ae4-ab44-7ff00b98204f-kube-api-access-j2qb5\") pod \"cilium-operator-6c4d7847fc-vdvpb\" (UID: \"adacaf38-8946-4ae4-ab44-7ff00b98204f\") " pod="kube-system/cilium-operator-6c4d7847fc-vdvpb" Jan 30 13:16:50.203910 kubelet[1833]: I0130 13:16:50.203656 1833 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/f4917941-8c32-4dee-ae63-66a81baab5fe-bpf-maps\") pod \"cilium-zd6zc\" (UID: \"f4917941-8c32-4dee-ae63-66a81baab5fe\") " pod="kube-system/cilium-zd6zc" Jan 30 13:16:50.203910 kubelet[1833]: I0130 13:16:50.203679 1833 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/f4917941-8c32-4dee-ae63-66a81baab5fe-host-proc-sys-net\") pod \"cilium-zd6zc\" (UID: \"f4917941-8c32-4dee-ae63-66a81baab5fe\") " pod="kube-system/cilium-zd6zc" Jan 30 13:16:50.203910 kubelet[1833]: I0130 13:16:50.203702 1833 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/adacaf38-8946-4ae4-ab44-7ff00b98204f-cilium-config-path\") pod \"cilium-operator-6c4d7847fc-vdvpb\" (UID: \"adacaf38-8946-4ae4-ab44-7ff00b98204f\") " pod="kube-system/cilium-operator-6c4d7847fc-vdvpb" Jan 30 13:16:50.204036 kubelet[1833]: I0130 13:16:50.203732 1833 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: 
\"kubernetes.io/host-path/f4917941-8c32-4dee-ae63-66a81baab5fe-xtables-lock\") pod \"cilium-zd6zc\" (UID: \"f4917941-8c32-4dee-ae63-66a81baab5fe\") " pod="kube-system/cilium-zd6zc" Jan 30 13:16:50.204036 kubelet[1833]: I0130 13:16:50.203775 1833 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/f4917941-8c32-4dee-ae63-66a81baab5fe-cilium-config-path\") pod \"cilium-zd6zc\" (UID: \"f4917941-8c32-4dee-ae63-66a81baab5fe\") " pod="kube-system/cilium-zd6zc" Jan 30 13:16:50.305160 kubelet[1833]: E0130 13:16:50.305119 1833 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 30 13:16:51.304995 kubelet[1833]: E0130 13:16:51.304941 1833 configmap.go:193] Couldn't get configMap kube-system/cilium-config: failed to sync configmap cache: timed out waiting for the condition Jan 30 13:16:51.305162 kubelet[1833]: E0130 13:16:51.305035 1833 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/adacaf38-8946-4ae4-ab44-7ff00b98204f-cilium-config-path podName:adacaf38-8946-4ae4-ab44-7ff00b98204f nodeName:}" failed. No retries permitted until 2025-01-30 13:16:51.805013004 +0000 UTC m=+55.944865638 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "cilium-config-path" (UniqueName: "kubernetes.io/configmap/adacaf38-8946-4ae4-ab44-7ff00b98204f-cilium-config-path") pod "cilium-operator-6c4d7847fc-vdvpb" (UID: "adacaf38-8946-4ae4-ab44-7ff00b98204f") : failed to sync configmap cache: timed out waiting for the condition Jan 30 13:16:51.305162 kubelet[1833]: E0130 13:16:51.304948 1833 configmap.go:193] Couldn't get configMap kube-system/cilium-config: failed to sync configmap cache: timed out waiting for the condition Jan 30 13:16:51.305162 kubelet[1833]: E0130 13:16:51.305100 1833 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/f4917941-8c32-4dee-ae63-66a81baab5fe-cilium-config-path podName:f4917941-8c32-4dee-ae63-66a81baab5fe nodeName:}" failed. No retries permitted until 2025-01-30 13:16:51.805089779 +0000 UTC m=+55.944942413 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "cilium-config-path" (UniqueName: "kubernetes.io/configmap/f4917941-8c32-4dee-ae63-66a81baab5fe-cilium-config-path") pod "cilium-zd6zc" (UID: "f4917941-8c32-4dee-ae63-66a81baab5fe") : failed to sync configmap cache: timed out waiting for the condition Jan 30 13:16:51.305596 kubelet[1833]: E0130 13:16:51.305239 1833 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 30 13:16:51.872509 kubelet[1833]: E0130 13:16:51.872463 1833 kubelet.go:3008] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Jan 30 13:16:51.903131 kubelet[1833]: E0130 13:16:51.903058 1833 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 30 13:16:51.903625 containerd[1502]: time="2025-01-30T13:16:51.903584801Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:cilium-operator-6c4d7847fc-vdvpb,Uid:adacaf38-8946-4ae4-ab44-7ff00b98204f,Namespace:kube-system,Attempt:0,}" Jan 30 13:16:51.921379 kubelet[1833]: E0130 13:16:51.921339 1833 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 30 13:16:51.921855 containerd[1502]: time="2025-01-30T13:16:51.921826381Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-zd6zc,Uid:f4917941-8c32-4dee-ae63-66a81baab5fe,Namespace:kube-system,Attempt:0,}" Jan 30 13:16:51.924418 containerd[1502]: time="2025-01-30T13:16:51.924210002Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 30 13:16:51.924418 containerd[1502]: time="2025-01-30T13:16:51.924274092Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 30 13:16:51.924418 containerd[1502]: time="2025-01-30T13:16:51.924287026Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 13:16:51.924523 containerd[1502]: time="2025-01-30T13:16:51.924391123Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 13:16:51.947107 systemd[1]: Started cri-containerd-b1973bf0d3030783f2658e152d52aa0efbced18a7e81265ecb152520c02180de.scope - libcontainer container b1973bf0d3030783f2658e152d52aa0efbced18a7e81265ecb152520c02180de. Jan 30 13:16:51.953904 containerd[1502]: time="2025-01-30T13:16:51.953649211Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 30 13:16:51.953904 containerd[1502]: time="2025-01-30T13:16:51.953713982Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 30 13:16:51.953904 containerd[1502]: time="2025-01-30T13:16:51.953732487Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 13:16:51.953904 containerd[1502]: time="2025-01-30T13:16:51.953820512Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 13:16:51.977169 systemd[1]: Started cri-containerd-38fb2a46712e8c5b8d64eedbccc186f09525b081688ce09ea4733b5f14e1af11.scope - libcontainer container 38fb2a46712e8c5b8d64eedbccc186f09525b081688ce09ea4733b5f14e1af11. Jan 30 13:16:51.993172 containerd[1502]: time="2025-01-30T13:16:51.993095270Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-6c4d7847fc-vdvpb,Uid:adacaf38-8946-4ae4-ab44-7ff00b98204f,Namespace:kube-system,Attempt:0,} returns sandbox id \"b1973bf0d3030783f2658e152d52aa0efbced18a7e81265ecb152520c02180de\"" Jan 30 13:16:51.994474 kubelet[1833]: E0130 13:16:51.994450 1833 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 30 13:16:51.996667 containerd[1502]: time="2025-01-30T13:16:51.996633610Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\"" Jan 30 13:16:52.002533 containerd[1502]: time="2025-01-30T13:16:52.002498275Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-zd6zc,Uid:f4917941-8c32-4dee-ae63-66a81baab5fe,Namespace:kube-system,Attempt:0,} returns sandbox id 
\"38fb2a46712e8c5b8d64eedbccc186f09525b081688ce09ea4733b5f14e1af11\"" Jan 30 13:16:52.003294 kubelet[1833]: E0130 13:16:52.003135 1833 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 30 13:16:52.004460 containerd[1502]: time="2025-01-30T13:16:52.004434875Z" level=info msg="CreateContainer within sandbox \"38fb2a46712e8c5b8d64eedbccc186f09525b081688ce09ea4733b5f14e1af11\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Jan 30 13:16:52.017429 containerd[1502]: time="2025-01-30T13:16:52.017375526Z" level=info msg="CreateContainer within sandbox \"38fb2a46712e8c5b8d64eedbccc186f09525b081688ce09ea4733b5f14e1af11\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"30289daf60a509a29c987dd5aecea00a15bf9b893f951c4330321752138b39ca\"" Jan 30 13:16:52.017890 containerd[1502]: time="2025-01-30T13:16:52.017839368Z" level=info msg="StartContainer for \"30289daf60a509a29c987dd5aecea00a15bf9b893f951c4330321752138b39ca\"" Jan 30 13:16:52.043059 systemd[1]: Started cri-containerd-30289daf60a509a29c987dd5aecea00a15bf9b893f951c4330321752138b39ca.scope - libcontainer container 30289daf60a509a29c987dd5aecea00a15bf9b893f951c4330321752138b39ca. Jan 30 13:16:52.067568 containerd[1502]: time="2025-01-30T13:16:52.067523418Z" level=info msg="StartContainer for \"30289daf60a509a29c987dd5aecea00a15bf9b893f951c4330321752138b39ca\" returns successfully" Jan 30 13:16:52.075678 systemd[1]: cri-containerd-30289daf60a509a29c987dd5aecea00a15bf9b893f951c4330321752138b39ca.scope: Deactivated successfully. 
Jan 30 13:16:52.107994 containerd[1502]: time="2025-01-30T13:16:52.107912027Z" level=info msg="shim disconnected" id=30289daf60a509a29c987dd5aecea00a15bf9b893f951c4330321752138b39ca namespace=k8s.io Jan 30 13:16:52.107994 containerd[1502]: time="2025-01-30T13:16:52.107977681Z" level=warning msg="cleaning up after shim disconnected" id=30289daf60a509a29c987dd5aecea00a15bf9b893f951c4330321752138b39ca namespace=k8s.io Jan 30 13:16:52.107994 containerd[1502]: time="2025-01-30T13:16:52.107989033Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 30 13:16:52.306009 kubelet[1833]: E0130 13:16:52.305955 1833 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 30 13:16:52.968931 kubelet[1833]: E0130 13:16:52.968901 1833 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 30 13:16:52.970232 containerd[1502]: time="2025-01-30T13:16:52.970200802Z" level=info msg="CreateContainer within sandbox \"38fb2a46712e8c5b8d64eedbccc186f09525b081688ce09ea4733b5f14e1af11\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Jan 30 13:16:52.983038 containerd[1502]: time="2025-01-30T13:16:52.982993155Z" level=info msg="CreateContainer within sandbox \"38fb2a46712e8c5b8d64eedbccc186f09525b081688ce09ea4733b5f14e1af11\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"e1a8780720dd1a8166dd5277be783af474ae0a850e4b5c39f9d553fe1b2f04a8\"" Jan 30 13:16:52.983490 containerd[1502]: time="2025-01-30T13:16:52.983451216Z" level=info msg="StartContainer for \"e1a8780720dd1a8166dd5277be783af474ae0a850e4b5c39f9d553fe1b2f04a8\"" Jan 30 13:16:53.017383 systemd[1]: Started cri-containerd-e1a8780720dd1a8166dd5277be783af474ae0a850e4b5c39f9d553fe1b2f04a8.scope - libcontainer container e1a8780720dd1a8166dd5277be783af474ae0a850e4b5c39f9d553fe1b2f04a8. 
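The `containerd[1502]` entries above ("shim disconnected", "cleaning up after shim disconnected", "cleaning up dead shim") are logfmt: space-separated `key=value` pairs where values containing spaces are double-quoted. A rough tokenizer for such lines (the regex is an assumption; real logfmt also permits bare flags and fuller escape handling):

```python
import re

# Match key="quoted value with \" escapes" or key=bare-value
LOGFMT_RE = re.compile(r'(\w+)=("(?:[^"\\]|\\.)*"|\S+)')

def parse_logfmt(line: str) -> dict:
    """Extract key=value pairs from a logfmt-style line; unquotes values."""
    out = {}
    for key, val in LOGFMT_RE.findall(line):
        if val.startswith('"') and val.endswith('"'):
            val = val[1:-1].replace('\\"', '"')
        out[key] = val
    return out

rec = parse_logfmt('time="2025-01-30T13:16:52.107912027Z" level=info '
                   'msg="shim disconnected" '
                   'id=30289daf60a509a29c987dd5aecea00a15bf9b893f951c4330321752138b39ca '
                   'namespace=k8s.io')
```

Applied to the shim-disconnected entry this recovers `level`, `msg`, the container `id`, and the `k8s.io` namespace as separate fields, which is usually enough to group a shim's lifecycle events.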
Jan 30 13:16:53.044175 containerd[1502]: time="2025-01-30T13:16:53.044138014Z" level=info msg="StartContainer for \"e1a8780720dd1a8166dd5277be783af474ae0a850e4b5c39f9d553fe1b2f04a8\" returns successfully" Jan 30 13:16:53.051216 systemd[1]: cri-containerd-e1a8780720dd1a8166dd5277be783af474ae0a850e4b5c39f9d553fe1b2f04a8.scope: Deactivated successfully. Jan 30 13:16:53.073299 containerd[1502]: time="2025-01-30T13:16:53.073230866Z" level=info msg="shim disconnected" id=e1a8780720dd1a8166dd5277be783af474ae0a850e4b5c39f9d553fe1b2f04a8 namespace=k8s.io Jan 30 13:16:53.073299 containerd[1502]: time="2025-01-30T13:16:53.073284116Z" level=warning msg="cleaning up after shim disconnected" id=e1a8780720dd1a8166dd5277be783af474ae0a850e4b5c39f9d553fe1b2f04a8 namespace=k8s.io Jan 30 13:16:53.073299 containerd[1502]: time="2025-01-30T13:16:53.073292702Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 30 13:16:53.306119 kubelet[1833]: E0130 13:16:53.306086 1833 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 30 13:16:53.518106 containerd[1502]: time="2025-01-30T13:16:53.518055550Z" level=info msg="ImageCreate event name:\"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 13:16:53.518802 containerd[1502]: time="2025-01-30T13:16:53.518760706Z" level=info msg="stop pulling image quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e: active requests=0, bytes read=18904197" Jan 30 13:16:53.519727 containerd[1502]: time="2025-01-30T13:16:53.519697076Z" level=info msg="ImageCreate event name:\"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 13:16:53.521010 containerd[1502]: time="2025-01-30T13:16:53.520968385Z" level=info msg="Pulled image 
\"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" with image id \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\", repo tag \"\", repo digest \"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\", size \"18897442\" in 1.524295941s" Jan 30 13:16:53.521010 containerd[1502]: time="2025-01-30T13:16:53.521005494Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" returns image reference \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\"" Jan 30 13:16:53.522684 containerd[1502]: time="2025-01-30T13:16:53.522663920Z" level=info msg="CreateContainer within sandbox \"b1973bf0d3030783f2658e152d52aa0efbced18a7e81265ecb152520c02180de\" for container &ContainerMetadata{Name:cilium-operator,Attempt:0,}" Jan 30 13:16:53.534659 containerd[1502]: time="2025-01-30T13:16:53.534617994Z" level=info msg="CreateContainer within sandbox \"b1973bf0d3030783f2658e152d52aa0efbced18a7e81265ecb152520c02180de\" for &ContainerMetadata{Name:cilium-operator,Attempt:0,} returns container id \"fd5ba5d0104234a8ec6ff06584b3fce380c9f10406b3d0fa0feaefe1af592892\"" Jan 30 13:16:53.534979 containerd[1502]: time="2025-01-30T13:16:53.534922387Z" level=info msg="StartContainer for \"fd5ba5d0104234a8ec6ff06584b3fce380c9f10406b3d0fa0feaefe1af592892\"" Jan 30 13:16:53.564991 systemd[1]: Started cri-containerd-fd5ba5d0104234a8ec6ff06584b3fce380c9f10406b3d0fa0feaefe1af592892.scope - libcontainer container fd5ba5d0104234a8ec6ff06584b3fce380c9f10406b3d0fa0feaefe1af592892. 
Jan 30 13:16:53.590112 containerd[1502]: time="2025-01-30T13:16:53.590075171Z" level=info msg="StartContainer for \"fd5ba5d0104234a8ec6ff06584b3fce380c9f10406b3d0fa0feaefe1af592892\" returns successfully" Jan 30 13:16:53.913756 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-e1a8780720dd1a8166dd5277be783af474ae0a850e4b5c39f9d553fe1b2f04a8-rootfs.mount: Deactivated successfully. Jan 30 13:16:53.972369 kubelet[1833]: E0130 13:16:53.972342 1833 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 30 13:16:53.973275 kubelet[1833]: E0130 13:16:53.973239 1833 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 30 13:16:53.973631 containerd[1502]: time="2025-01-30T13:16:53.973592615Z" level=info msg="CreateContainer within sandbox \"38fb2a46712e8c5b8d64eedbccc186f09525b081688ce09ea4733b5f14e1af11\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Jan 30 13:16:53.991489 containerd[1502]: time="2025-01-30T13:16:53.991450915Z" level=info msg="CreateContainer within sandbox \"38fb2a46712e8c5b8d64eedbccc186f09525b081688ce09ea4733b5f14e1af11\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"f0f3786c95ba2b5883b343b391d46cd9fef9e863a6a520b3f84026061477c0cb\"" Jan 30 13:16:53.991585 kubelet[1833]: I0130 13:16:53.991457 1833 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-operator-6c4d7847fc-vdvpb" podStartSLOduration=2.465057161 podStartE2EDuration="3.991442659s" podCreationTimestamp="2025-01-30 13:16:50 +0000 UTC" firstStartedPulling="2025-01-30 13:16:51.995264727 +0000 UTC m=+56.135117361" lastFinishedPulling="2025-01-30 13:16:53.521650225 +0000 UTC m=+57.661502859" observedRunningTime="2025-01-30 13:16:53.991181338 +0000 UTC 
m=+58.131033972" watchObservedRunningTime="2025-01-30 13:16:53.991442659 +0000 UTC m=+58.131295294" Jan 30 13:16:53.992006 containerd[1502]: time="2025-01-30T13:16:53.991968057Z" level=info msg="StartContainer for \"f0f3786c95ba2b5883b343b391d46cd9fef9e863a6a520b3f84026061477c0cb\"" Jan 30 13:16:54.023003 systemd[1]: Started cri-containerd-f0f3786c95ba2b5883b343b391d46cd9fef9e863a6a520b3f84026061477c0cb.scope - libcontainer container f0f3786c95ba2b5883b343b391d46cd9fef9e863a6a520b3f84026061477c0cb. Jan 30 13:16:54.055950 systemd[1]: cri-containerd-f0f3786c95ba2b5883b343b391d46cd9fef9e863a6a520b3f84026061477c0cb.scope: Deactivated successfully. Jan 30 13:16:54.216949 containerd[1502]: time="2025-01-30T13:16:54.216824631Z" level=info msg="StartContainer for \"f0f3786c95ba2b5883b343b391d46cd9fef9e863a6a520b3f84026061477c0cb\" returns successfully" Jan 30 13:16:54.306522 kubelet[1833]: E0130 13:16:54.306472 1833 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 30 13:16:54.324537 containerd[1502]: time="2025-01-30T13:16:54.324481726Z" level=info msg="shim disconnected" id=f0f3786c95ba2b5883b343b391d46cd9fef9e863a6a520b3f84026061477c0cb namespace=k8s.io Jan 30 13:16:54.324537 containerd[1502]: time="2025-01-30T13:16:54.324534074Z" level=warning msg="cleaning up after shim disconnected" id=f0f3786c95ba2b5883b343b391d46cd9fef9e863a6a520b3f84026061477c0cb namespace=k8s.io Jan 30 13:16:54.324651 containerd[1502]: time="2025-01-30T13:16:54.324541959Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 30 13:16:54.912892 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-f0f3786c95ba2b5883b343b391d46cd9fef9e863a6a520b3f84026061477c0cb-rootfs.mount: Deactivated successfully. 
Jan 30 13:16:54.976717 kubelet[1833]: E0130 13:16:54.976690 1833 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 30 13:16:54.976903 kubelet[1833]: E0130 13:16:54.976731 1833 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 30 13:16:54.978191 containerd[1502]: time="2025-01-30T13:16:54.978157149Z" level=info msg="CreateContainer within sandbox \"38fb2a46712e8c5b8d64eedbccc186f09525b081688ce09ea4733b5f14e1af11\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Jan 30 13:16:54.992334 containerd[1502]: time="2025-01-30T13:16:54.992288772Z" level=info msg="CreateContainer within sandbox \"38fb2a46712e8c5b8d64eedbccc186f09525b081688ce09ea4733b5f14e1af11\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"2891367a96cf5608fe6344d44d4dda7dc3093421d50774048686a1a63b833669\"" Jan 30 13:16:54.992713 containerd[1502]: time="2025-01-30T13:16:54.992686479Z" level=info msg="StartContainer for \"2891367a96cf5608fe6344d44d4dda7dc3093421d50774048686a1a63b833669\"" Jan 30 13:16:55.019999 systemd[1]: Started cri-containerd-2891367a96cf5608fe6344d44d4dda7dc3093421d50774048686a1a63b833669.scope - libcontainer container 2891367a96cf5608fe6344d44d4dda7dc3093421d50774048686a1a63b833669. Jan 30 13:16:55.043674 systemd[1]: cri-containerd-2891367a96cf5608fe6344d44d4dda7dc3093421d50774048686a1a63b833669.scope: Deactivated successfully. 
Jan 30 13:16:55.046870 containerd[1502]: time="2025-01-30T13:16:55.046827018Z" level=info msg="StartContainer for \"2891367a96cf5608fe6344d44d4dda7dc3093421d50774048686a1a63b833669\" returns successfully" Jan 30 13:16:55.069965 containerd[1502]: time="2025-01-30T13:16:55.069886632Z" level=info msg="shim disconnected" id=2891367a96cf5608fe6344d44d4dda7dc3093421d50774048686a1a63b833669 namespace=k8s.io Jan 30 13:16:55.069965 containerd[1502]: time="2025-01-30T13:16:55.069930524Z" level=warning msg="cleaning up after shim disconnected" id=2891367a96cf5608fe6344d44d4dda7dc3093421d50774048686a1a63b833669 namespace=k8s.io Jan 30 13:16:55.069965 containerd[1502]: time="2025-01-30T13:16:55.069938058Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 30 13:16:55.306991 kubelet[1833]: E0130 13:16:55.306941 1833 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 30 13:16:55.912961 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-2891367a96cf5608fe6344d44d4dda7dc3093421d50774048686a1a63b833669-rootfs.mount: Deactivated successfully. 
Jan 30 13:16:55.979799 kubelet[1833]: E0130 13:16:55.979737 1833 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 30 13:16:55.981082 containerd[1502]: time="2025-01-30T13:16:55.981053450Z" level=info msg="CreateContainer within sandbox \"38fb2a46712e8c5b8d64eedbccc186f09525b081688ce09ea4733b5f14e1af11\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Jan 30 13:16:55.998613 containerd[1502]: time="2025-01-30T13:16:55.998571382Z" level=info msg="CreateContainer within sandbox \"38fb2a46712e8c5b8d64eedbccc186f09525b081688ce09ea4733b5f14e1af11\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"05e0d44bd893dc994a48cfef85811319a503327d6f0ef6f904b26dee4e93ee6a\"" Jan 30 13:16:55.999041 containerd[1502]: time="2025-01-30T13:16:55.999015135Z" level=info msg="StartContainer for \"05e0d44bd893dc994a48cfef85811319a503327d6f0ef6f904b26dee4e93ee6a\"" Jan 30 13:16:56.026005 systemd[1]: Started cri-containerd-05e0d44bd893dc994a48cfef85811319a503327d6f0ef6f904b26dee4e93ee6a.scope - libcontainer container 05e0d44bd893dc994a48cfef85811319a503327d6f0ef6f904b26dee4e93ee6a. 
Jan 30 13:16:56.061267 containerd[1502]: time="2025-01-30T13:16:56.061217886Z" level=info msg="StartContainer for \"05e0d44bd893dc994a48cfef85811319a503327d6f0ef6f904b26dee4e93ee6a\" returns successfully" Jan 30 13:16:56.270032 kubelet[1833]: E0130 13:16:56.269582 1833 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 30 13:16:56.289821 containerd[1502]: time="2025-01-30T13:16:56.289789357Z" level=info msg="StopPodSandbox for \"f339756ff047e93634fcc57fad0bed4ebdeda902b1a27fc41573b2e14ac8cfc1\"" Jan 30 13:16:56.289936 containerd[1502]: time="2025-01-30T13:16:56.289907329Z" level=info msg="TearDown network for sandbox \"f339756ff047e93634fcc57fad0bed4ebdeda902b1a27fc41573b2e14ac8cfc1\" successfully" Jan 30 13:16:56.289936 containerd[1502]: time="2025-01-30T13:16:56.289921786Z" level=info msg="StopPodSandbox for \"f339756ff047e93634fcc57fad0bed4ebdeda902b1a27fc41573b2e14ac8cfc1\" returns successfully" Jan 30 13:16:56.290380 containerd[1502]: time="2025-01-30T13:16:56.290357625Z" level=info msg="RemovePodSandbox for \"f339756ff047e93634fcc57fad0bed4ebdeda902b1a27fc41573b2e14ac8cfc1\"" Jan 30 13:16:56.290485 containerd[1502]: time="2025-01-30T13:16:56.290383213Z" level=info msg="Forcibly stopping sandbox \"f339756ff047e93634fcc57fad0bed4ebdeda902b1a27fc41573b2e14ac8cfc1\"" Jan 30 13:16:56.290485 containerd[1502]: time="2025-01-30T13:16:56.290451942Z" level=info msg="TearDown network for sandbox \"f339756ff047e93634fcc57fad0bed4ebdeda902b1a27fc41573b2e14ac8cfc1\" successfully" Jan 30 13:16:56.307040 kubelet[1833]: E0130 13:16:56.307013 1833 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 30 13:16:56.376824 containerd[1502]: time="2025-01-30T13:16:56.376786335Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"f339756ff047e93634fcc57fad0bed4ebdeda902b1a27fc41573b2e14ac8cfc1\": an error occurred when 
try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jan 30 13:16:56.376905 containerd[1502]: time="2025-01-30T13:16:56.376843973Z" level=info msg="RemovePodSandbox \"f339756ff047e93634fcc57fad0bed4ebdeda902b1a27fc41573b2e14ac8cfc1\" returns successfully" Jan 30 13:16:56.442906 kernel: alg: No test for seqiv(rfc4106(gcm(aes))) (seqiv(rfc4106-gcm-aesni)) Jan 30 13:16:56.982674 kubelet[1833]: E0130 13:16:56.982647 1833 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 30 13:16:56.994388 kubelet[1833]: I0130 13:16:56.994341 1833 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-zd6zc" podStartSLOduration=6.994330116 podStartE2EDuration="6.994330116s" podCreationTimestamp="2025-01-30 13:16:50 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-30 13:16:56.994316611 +0000 UTC m=+61.134169245" watchObservedRunningTime="2025-01-30 13:16:56.994330116 +0000 UTC m=+61.134182750" Jan 30 13:16:57.307994 kubelet[1833]: E0130 13:16:57.307970 1833 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 30 13:16:57.984215 kubelet[1833]: E0130 13:16:57.984189 1833 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 30 13:16:58.308247 kubelet[1833]: E0130 13:16:58.308211 1833 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 30 13:16:58.647673 systemd[1]: run-containerd-runc-k8s.io-05e0d44bd893dc994a48cfef85811319a503327d6f0ef6f904b26dee4e93ee6a-runc.valb3r.mount: Deactivated successfully. 
Jan 30 13:16:59.308983 kubelet[1833]: E0130 13:16:59.308942 1833 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 30 13:16:59.359541 systemd-networkd[1416]: lxc_health: Link UP Jan 30 13:16:59.370001 systemd-networkd[1416]: lxc_health: Gained carrier Jan 30 13:16:59.923040 kubelet[1833]: E0130 13:16:59.922988 1833 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 30 13:16:59.987416 kubelet[1833]: E0130 13:16:59.987139 1833 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 30 13:17:00.309488 kubelet[1833]: E0130 13:17:00.309432 1833 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 30 13:17:00.922109 systemd-networkd[1416]: lxc_health: Gained IPv6LL Jan 30 13:17:00.989178 kubelet[1833]: E0130 13:17:00.989150 1833 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 30 13:17:01.309729 kubelet[1833]: E0130 13:17:01.309690 1833 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 30 13:17:02.310605 kubelet[1833]: E0130 13:17:02.310536 1833 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 30 13:17:03.311077 kubelet[1833]: E0130 13:17:03.311016 1833 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 30 13:17:04.311387 kubelet[1833]: E0130 13:17:04.311341 1833 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" 
path="/etc/kubernetes/manifests" Jan 30 13:17:05.312294 kubelet[1833]: E0130 13:17:05.312256 1833 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 30 13:17:06.312958 kubelet[1833]: E0130 13:17:06.312906 1833 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"