Feb 13 19:41:59.975561 kernel: Linux version 6.6.74-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 13.3.1_p20240614 p1) 13.3.1 20240614, GNU ld (Gentoo 2.42 p6) 2.42.0) #1 SMP PREEMPT_DYNAMIC Thu Feb 13 17:44:05 -00 2025
Feb 13 19:41:59.975628 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=ed9b5d8ea73d2e47b8decea8124089e04dd398ef43013c1b1a5809314044b1c3
Feb 13 19:41:59.975668 kernel: BIOS-provided physical RAM map:
Feb 13 19:41:59.975685 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009ffff] usable
Feb 13 19:41:59.975704 kernel: BIOS-e820: [mem 0x0000000000100000-0x00000000007fffff] usable
Feb 13 19:41:59.975729 kernel: BIOS-e820: [mem 0x0000000000800000-0x0000000000807fff] ACPI NVS
Feb 13 19:41:59.975750 kernel: BIOS-e820: [mem 0x0000000000808000-0x000000000080afff] usable
Feb 13 19:41:59.975770 kernel: BIOS-e820: [mem 0x000000000080b000-0x000000000080bfff] ACPI NVS
Feb 13 19:41:59.975791 kernel: BIOS-e820: [mem 0x000000000080c000-0x0000000000810fff] usable
Feb 13 19:41:59.975810 kernel: BIOS-e820: [mem 0x0000000000811000-0x00000000008fffff] ACPI NVS
Feb 13 19:41:59.975833 kernel: BIOS-e820: [mem 0x0000000000900000-0x000000009bd3efff] usable
Feb 13 19:41:59.975852 kernel: BIOS-e820: [mem 0x000000009bd3f000-0x000000009bdfffff] reserved
Feb 13 19:41:59.975874 kernel: BIOS-e820: [mem 0x000000009be00000-0x000000009c8ecfff] usable
Feb 13 19:41:59.977064 kernel: BIOS-e820: [mem 0x000000009c8ed000-0x000000009cb6cfff] reserved
Feb 13 19:41:59.977091 kernel: BIOS-e820: [mem 0x000000009cb6d000-0x000000009cb7efff] ACPI data
Feb 13 19:41:59.977099 kernel: BIOS-e820: [mem 0x000000009cb7f000-0x000000009cbfefff] ACPI NVS
Feb 13 19:41:59.977119 kernel: BIOS-e820: [mem 0x000000009cbff000-0x000000009ce91fff] usable
Feb 13 19:41:59.977133 kernel: BIOS-e820: [mem 0x000000009ce92000-0x000000009ce95fff] reserved
Feb 13 19:41:59.977141 kernel: BIOS-e820: [mem 0x000000009ce96000-0x000000009ce97fff] ACPI NVS
Feb 13 19:41:59.977155 kernel: BIOS-e820: [mem 0x000000009ce98000-0x000000009cedbfff] usable
Feb 13 19:41:59.977170 kernel: BIOS-e820: [mem 0x000000009cedc000-0x000000009cf5ffff] reserved
Feb 13 19:41:59.977178 kernel: BIOS-e820: [mem 0x000000009cf60000-0x000000009cffffff] ACPI NVS
Feb 13 19:41:59.977188 kernel: BIOS-e820: [mem 0x00000000e0000000-0x00000000efffffff] reserved
Feb 13 19:41:59.977200 kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved
Feb 13 19:41:59.977214 kernel: BIOS-e820: [mem 0x00000000ffc00000-0x00000000ffffffff] reserved
Feb 13 19:41:59.977229 kernel: BIOS-e820: [mem 0x000000fd00000000-0x000000ffffffffff] reserved
Feb 13 19:41:59.977243 kernel: NX (Execute Disable) protection: active
Feb 13 19:41:59.977260 kernel: APIC: Static calls initialized
Feb 13 19:41:59.977280 kernel: e820: update [mem 0x9b351018-0x9b35ac57] usable ==> usable
Feb 13 19:41:59.977290 kernel: e820: update [mem 0x9b351018-0x9b35ac57] usable ==> usable
Feb 13 19:41:59.977296 kernel: e820: update [mem 0x9b314018-0x9b350e57] usable ==> usable
Feb 13 19:41:59.977303 kernel: e820: update [mem 0x9b314018-0x9b350e57] usable ==> usable
Feb 13 19:41:59.977309 kernel: extended physical RAM map:
Feb 13 19:41:59.977318 kernel: reserve setup_data: [mem 0x0000000000000000-0x000000000009ffff] usable
Feb 13 19:41:59.977335 kernel: reserve setup_data: [mem 0x0000000000100000-0x00000000007fffff] usable
Feb 13 19:41:59.977352 kernel: reserve setup_data: [mem 0x0000000000800000-0x0000000000807fff] ACPI NVS
Feb 13 19:41:59.977360 kernel: reserve setup_data: [mem 0x0000000000808000-0x000000000080afff] usable
Feb 13 19:41:59.977367 kernel: reserve setup_data: [mem 0x000000000080b000-0x000000000080bfff] ACPI NVS
Feb 13 19:41:59.977379 kernel: reserve setup_data: [mem 0x000000000080c000-0x0000000000810fff] usable
Feb 13 19:41:59.977386 kernel: reserve setup_data: [mem 0x0000000000811000-0x00000000008fffff] ACPI NVS
Feb 13 19:41:59.977397 kernel: reserve setup_data: [mem 0x0000000000900000-0x000000009b314017] usable
Feb 13 19:41:59.977404 kernel: reserve setup_data: [mem 0x000000009b314018-0x000000009b350e57] usable
Feb 13 19:41:59.977418 kernel: reserve setup_data: [mem 0x000000009b350e58-0x000000009b351017] usable
Feb 13 19:41:59.977427 kernel: reserve setup_data: [mem 0x000000009b351018-0x000000009b35ac57] usable
Feb 13 19:41:59.977434 kernel: reserve setup_data: [mem 0x000000009b35ac58-0x000000009bd3efff] usable
Feb 13 19:41:59.977494 kernel: reserve setup_data: [mem 0x000000009bd3f000-0x000000009bdfffff] reserved
Feb 13 19:41:59.977501 kernel: reserve setup_data: [mem 0x000000009be00000-0x000000009c8ecfff] usable
Feb 13 19:41:59.977508 kernel: reserve setup_data: [mem 0x000000009c8ed000-0x000000009cb6cfff] reserved
Feb 13 19:41:59.977515 kernel: reserve setup_data: [mem 0x000000009cb6d000-0x000000009cb7efff] ACPI data
Feb 13 19:41:59.977523 kernel: reserve setup_data: [mem 0x000000009cb7f000-0x000000009cbfefff] ACPI NVS
Feb 13 19:41:59.977537 kernel: reserve setup_data: [mem 0x000000009cbff000-0x000000009ce91fff] usable
Feb 13 19:41:59.977546 kernel: reserve setup_data: [mem 0x000000009ce92000-0x000000009ce95fff] reserved
Feb 13 19:41:59.977553 kernel: reserve setup_data: [mem 0x000000009ce96000-0x000000009ce97fff] ACPI NVS
Feb 13 19:41:59.977560 kernel: reserve setup_data: [mem 0x000000009ce98000-0x000000009cedbfff] usable
Feb 13 19:41:59.977572 kernel: reserve setup_data: [mem 0x000000009cedc000-0x000000009cf5ffff] reserved
Feb 13 19:41:59.977589 kernel: reserve setup_data: [mem 0x000000009cf60000-0x000000009cffffff] ACPI NVS
Feb 13 19:41:59.977604 kernel: reserve setup_data: [mem 0x00000000e0000000-0x00000000efffffff] reserved
Feb 13 19:41:59.977619 kernel: reserve setup_data: [mem 0x00000000feffc000-0x00000000feffffff] reserved
Feb 13 19:41:59.977630 kernel: reserve setup_data: [mem 0x00000000ffc00000-0x00000000ffffffff] reserved
Feb 13 19:41:59.977641 kernel: reserve setup_data: [mem 0x000000fd00000000-0x000000ffffffffff] reserved
Feb 13 19:41:59.977659 kernel: efi: EFI v2.7 by EDK II
Feb 13 19:41:59.977678 kernel: efi: SMBIOS=0x9c988000 ACPI=0x9cb7e000 ACPI 2.0=0x9cb7e014 MEMATTR=0x9ba0d198 RNG=0x9cb73018
Feb 13 19:41:59.977695 kernel: random: crng init done
Feb 13 19:41:59.977720 kernel: efi: Remove mem142: MMIO range=[0xffc00000-0xffffffff] (4MB) from e820 map
Feb 13 19:41:59.977743 kernel: e820: remove [mem 0xffc00000-0xffffffff] reserved
Feb 13 19:41:59.977770 kernel: secureboot: Secure boot disabled
Feb 13 19:41:59.977783 kernel: SMBIOS 2.8 present.
Feb 13 19:41:59.977797 kernel: DMI: QEMU Standard PC (Q35 + ICH9, 2009), BIOS unknown 02/02/2022
Feb 13 19:41:59.977814 kernel: Hypervisor detected: KVM
Feb 13 19:41:59.977827 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00
Feb 13 19:41:59.977834 kernel: kvm-clock: using sched offset of 4219429290 cycles
Feb 13 19:41:59.977842 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
Feb 13 19:41:59.977849 kernel: tsc: Detected 2794.750 MHz processor
Feb 13 19:41:59.977865 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
Feb 13 19:41:59.977876 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
Feb 13 19:41:59.977893 kernel: last_pfn = 0x9cedc max_arch_pfn = 0x400000000
Feb 13 19:41:59.977911 kernel: MTRR map: 4 entries (2 fixed + 2 variable; max 18), built from 8 variable MTRRs
Feb 13 19:41:59.977934 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
Feb 13 19:41:59.977948 kernel: Using GB pages for direct mapping
Feb 13 19:41:59.977955 kernel: ACPI: Early table checksum verification disabled
Feb 13 19:41:59.977963 kernel: ACPI: RSDP 0x000000009CB7E014 000024 (v02 BOCHS )
Feb 13 19:41:59.977970 kernel: ACPI: XSDT 0x000000009CB7D0E8 000054 (v01 BOCHS BXPC 00000001 01000013)
Feb 13 19:41:59.977977 kernel: ACPI: FACP 0x000000009CB79000 0000F4 (v03 BOCHS BXPC 00000001 BXPC 00000001)
Feb 13 19:41:59.977985 kernel: ACPI: DSDT 0x000000009CB7A000 0021A8 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Feb 13 19:41:59.977995 kernel: ACPI: FACS 0x000000009CBDD000 000040
Feb 13 19:41:59.978002 kernel: ACPI: APIC 0x000000009CB78000 000090 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Feb 13 19:41:59.978009 kernel: ACPI: HPET 0x000000009CB77000 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Feb 13 19:41:59.978017 kernel: ACPI: MCFG 0x000000009CB76000 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001)
Feb 13 19:41:59.978024 kernel: ACPI: WAET 0x000000009CB75000 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Feb 13 19:41:59.978031 kernel: ACPI: BGRT 0x000000009CB74000 000038 (v01 INTEL EDK2 00000002 01000013)
Feb 13 19:41:59.978039 kernel: ACPI: Reserving FACP table memory at [mem 0x9cb79000-0x9cb790f3]
Feb 13 19:41:59.978046 kernel: ACPI: Reserving DSDT table memory at [mem 0x9cb7a000-0x9cb7c1a7]
Feb 13 19:41:59.978053 kernel: ACPI: Reserving FACS table memory at [mem 0x9cbdd000-0x9cbdd03f]
Feb 13 19:41:59.978063 kernel: ACPI: Reserving APIC table memory at [mem 0x9cb78000-0x9cb7808f]
Feb 13 19:41:59.978076 kernel: ACPI: Reserving HPET table memory at [mem 0x9cb77000-0x9cb77037]
Feb 13 19:41:59.978096 kernel: ACPI: Reserving MCFG table memory at [mem 0x9cb76000-0x9cb7603b]
Feb 13 19:41:59.978114 kernel: ACPI: Reserving WAET table memory at [mem 0x9cb75000-0x9cb75027]
Feb 13 19:41:59.978136 kernel: ACPI: Reserving BGRT table memory at [mem 0x9cb74000-0x9cb74037]
Feb 13 19:41:59.978161 kernel: No NUMA configuration found
Feb 13 19:41:59.978181 kernel: Faking a node at [mem 0x0000000000000000-0x000000009cedbfff]
Feb 13 19:41:59.978203 kernel: NODE_DATA(0) allocated [mem 0x9ce3a000-0x9ce3ffff]
Feb 13 19:41:59.978219 kernel: Zone ranges:
Feb 13 19:41:59.978242 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff]
Feb 13 19:41:59.978260 kernel: DMA32 [mem 0x0000000001000000-0x000000009cedbfff]
Feb 13 19:41:59.978278 kernel: Normal empty
Feb 13 19:41:59.978301 kernel: Movable zone start for each node
Feb 13 19:41:59.978317 kernel: Early memory node ranges
Feb 13 19:41:59.978332 kernel: node 0: [mem 0x0000000000001000-0x000000000009ffff]
Feb 13 19:41:59.978340 kernel: node 0: [mem 0x0000000000100000-0x00000000007fffff]
Feb 13 19:41:59.978350 kernel: node 0: [mem 0x0000000000808000-0x000000000080afff]
Feb 13 19:41:59.978368 kernel: node 0: [mem 0x000000000080c000-0x0000000000810fff]
Feb 13 19:41:59.978383 kernel: node 0: [mem 0x0000000000900000-0x000000009bd3efff]
Feb 13 19:41:59.978405 kernel: node 0: [mem 0x000000009be00000-0x000000009c8ecfff]
Feb 13 19:41:59.978427 kernel: node 0: [mem 0x000000009cbff000-0x000000009ce91fff]
Feb 13 19:41:59.978657 kernel: node 0: [mem 0x000000009ce98000-0x000000009cedbfff]
Feb 13 19:41:59.978668 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000009cedbfff]
Feb 13 19:41:59.978675 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Feb 13 19:41:59.978683 kernel: On node 0, zone DMA: 96 pages in unavailable ranges
Feb 13 19:41:59.978726 kernel: On node 0, zone DMA: 8 pages in unavailable ranges
Feb 13 19:41:59.978744 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Feb 13 19:41:59.978752 kernel: On node 0, zone DMA: 239 pages in unavailable ranges
Feb 13 19:41:59.978759 kernel: On node 0, zone DMA32: 193 pages in unavailable ranges
Feb 13 19:41:59.978772 kernel: On node 0, zone DMA32: 786 pages in unavailable ranges
Feb 13 19:41:59.978782 kernel: On node 0, zone DMA32: 6 pages in unavailable ranges
Feb 13 19:41:59.978793 kernel: On node 0, zone DMA32: 12580 pages in unavailable ranges
Feb 13 19:41:59.978800 kernel: ACPI: PM-Timer IO Port: 0x608
Feb 13 19:41:59.978810 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1])
Feb 13 19:41:59.978823 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23
Feb 13 19:41:59.978847 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl)
Feb 13 19:41:59.978863 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level)
Feb 13 19:41:59.978872 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Feb 13 19:41:59.978879 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level)
Feb 13 19:41:59.978892 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level)
Feb 13 19:41:59.978900 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Feb 13 19:41:59.978907 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000
Feb 13 19:41:59.978914 kernel: TSC deadline timer available
Feb 13 19:41:59.978922 kernel: smpboot: Allowing 4 CPUs, 0 hotplug CPUs
Feb 13 19:41:59.978946 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write()
Feb 13 19:41:59.978964 kernel: kvm-guest: KVM setup pv remote TLB flush
Feb 13 19:41:59.978982 kernel: kvm-guest: setup PV sched yield
Feb 13 19:41:59.978995 kernel: [mem 0x9d000000-0xdfffffff] available for PCI devices
Feb 13 19:41:59.979003 kernel: Booting paravirtualized kernel on KVM
Feb 13 19:41:59.979011 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Feb 13 19:41:59.979019 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:4 nr_cpu_ids:4 nr_node_ids:1
Feb 13 19:41:59.979030 kernel: percpu: Embedded 58 pages/cpu s197032 r8192 d32344 u524288
Feb 13 19:41:59.979039 kernel: pcpu-alloc: s197032 r8192 d32344 u524288 alloc=1*2097152
Feb 13 19:41:59.979054 kernel: pcpu-alloc: [0] 0 1 2 3
Feb 13 19:41:59.979065 kernel: kvm-guest: PV spinlocks enabled
Feb 13 19:41:59.979072 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear)
Feb 13 19:41:59.979081 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=ed9b5d8ea73d2e47b8decea8124089e04dd398ef43013c1b1a5809314044b1c3
Feb 13 19:41:59.979089 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Feb 13 19:41:59.979097 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Feb 13 19:41:59.979107 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Feb 13 19:41:59.979115 kernel: Fallback order for Node 0: 0
Feb 13 19:41:59.979122 kernel: Built 1 zonelists, mobility grouping on. Total pages: 629460
Feb 13 19:41:59.979132 kernel: Policy zone: DMA32
Feb 13 19:41:59.979140 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Feb 13 19:41:59.979148 kernel: Memory: 2389768K/2565800K available (12288K kernel code, 2301K rwdata, 22736K rodata, 42976K init, 2216K bss, 175776K reserved, 0K cma-reserved)
Feb 13 19:41:59.979156 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1
Feb 13 19:41:59.979163 kernel: ftrace: allocating 37923 entries in 149 pages
Feb 13 19:41:59.979171 kernel: ftrace: allocated 149 pages with 4 groups
Feb 13 19:41:59.979178 kernel: Dynamic Preempt: voluntary
Feb 13 19:41:59.979186 kernel: rcu: Preemptible hierarchical RCU implementation.
Feb 13 19:41:59.979194 kernel: rcu: RCU event tracing is enabled.
Feb 13 19:41:59.979204 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=4.
Feb 13 19:41:59.979212 kernel: Trampoline variant of Tasks RCU enabled.
Feb 13 19:41:59.979223 kernel: Rude variant of Tasks RCU enabled.
Feb 13 19:41:59.979242 kernel: Tracing variant of Tasks RCU enabled.
Feb 13 19:41:59.979258 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Feb 13 19:41:59.979276 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4
Feb 13 19:41:59.979289 kernel: NR_IRQS: 33024, nr_irqs: 456, preallocated irqs: 16
Feb 13 19:41:59.979305 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Feb 13 19:41:59.979320 kernel: Console: colour dummy device 80x25
Feb 13 19:41:59.979350 kernel: printk: console [ttyS0] enabled
Feb 13 19:41:59.979370 kernel: ACPI: Core revision 20230628
Feb 13 19:41:59.979386 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 19112604467 ns
Feb 13 19:41:59.979407 kernel: APIC: Switch to symmetric I/O mode setup
Feb 13 19:41:59.979422 kernel: x2apic enabled
Feb 13 19:41:59.979443 kernel: APIC: Switched APIC routing to: physical x2apic
Feb 13 19:41:59.980391 kernel: kvm-guest: APIC: send_IPI_mask() replaced with kvm_send_ipi_mask()
Feb 13 19:41:59.980409 kernel: kvm-guest: APIC: send_IPI_mask_allbutself() replaced with kvm_send_ipi_mask_allbutself()
Feb 13 19:41:59.980430 kernel: kvm-guest: setup PV IPIs
Feb 13 19:41:59.980443 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1
Feb 13 19:41:59.980476 kernel: tsc: Marking TSC unstable due to TSCs unsynchronized
Feb 13 19:41:59.980491 kernel: Calibrating delay loop (skipped) preset value.. 5589.50 BogoMIPS (lpj=2794750)
Feb 13 19:41:59.980514 kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated
Feb 13 19:41:59.980524 kernel: Last level iTLB entries: 4KB 512, 2MB 255, 4MB 127
Feb 13 19:41:59.980531 kernel: Last level dTLB entries: 4KB 512, 2MB 255, 4MB 127, 1GB 0
Feb 13 19:41:59.980549 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Feb 13 19:41:59.980569 kernel: Spectre V2 : Mitigation: Retpolines
Feb 13 19:41:59.980582 kernel: Spectre V2 : Spectre v2 / SpectreRSB mitigation: Filling RSB on context switch
Feb 13 19:41:59.980614 kernel: Spectre V2 : Spectre v2 / SpectreRSB : Filling RSB on VMEXIT
Feb 13 19:41:59.980633 kernel: Spectre V2 : Enabling Speculation Barrier for firmware calls
Feb 13 19:41:59.980652 kernel: RETBleed: Mitigation: untrained return thunk
Feb 13 19:41:59.980671 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier
Feb 13 19:41:59.980690 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl
Feb 13 19:41:59.980711 kernel: Speculative Return Stack Overflow: IBPB-extending microcode not applied!
Feb 13 19:41:59.980735 kernel: Speculative Return Stack Overflow: WARNING: See https://kernel.org/doc/html/latest/admin-guide/hw-vuln/srso.html for mitigation options.
Feb 13 19:41:59.980765 kernel: Speculative Return Stack Overflow: Vulnerable: Safe RET, no microcode
Feb 13 19:41:59.980782 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Feb 13 19:41:59.980791 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Feb 13 19:41:59.980799 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Feb 13 19:41:59.980806 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256
Feb 13 19:41:59.980814 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'compacted' format.
Feb 13 19:41:59.980825 kernel: Freeing SMP alternatives memory: 32K
Feb 13 19:41:59.980838 kernel: pid_max: default: 32768 minimum: 301
Feb 13 19:41:59.980848 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity
Feb 13 19:41:59.980856 kernel: landlock: Up and running.
Feb 13 19:41:59.980867 kernel: SELinux: Initializing.
Feb 13 19:41:59.980883 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Feb 13 19:41:59.980891 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Feb 13 19:41:59.980899 kernel: smpboot: CPU0: AMD EPYC 7402P 24-Core Processor (family: 0x17, model: 0x31, stepping: 0x0)
Feb 13 19:41:59.980914 kernel: RCU Tasks: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Feb 13 19:41:59.980925 kernel: RCU Tasks Rude: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Feb 13 19:41:59.980932 kernel: RCU Tasks Trace: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Feb 13 19:41:59.980940 kernel: Performance Events: Fam17h+ core perfctr, AMD PMU driver.
Feb 13 19:41:59.980958 kernel: ... version: 0
Feb 13 19:41:59.980990 kernel: ... bit width: 48
Feb 13 19:41:59.981009 kernel: ... generic registers: 6
Feb 13 19:41:59.981027 kernel: ... value mask: 0000ffffffffffff
Feb 13 19:41:59.981046 kernel: ... max period: 00007fffffffffff
Feb 13 19:41:59.981064 kernel: ... fixed-purpose events: 0
Feb 13 19:41:59.981082 kernel: ... event mask: 000000000000003f
Feb 13 19:41:59.981098 kernel: signal: max sigframe size: 1776
Feb 13 19:41:59.981116 kernel: rcu: Hierarchical SRCU implementation.
Feb 13 19:41:59.981126 kernel: rcu: Max phase no-delay instances is 400.
Feb 13 19:41:59.981139 kernel: smp: Bringing up secondary CPUs ...
Feb 13 19:41:59.981147 kernel: smpboot: x86: Booting SMP configuration:
Feb 13 19:41:59.981154 kernel: .... node #0, CPUs: #1 #2 #3
Feb 13 19:41:59.981162 kernel: smp: Brought up 1 node, 4 CPUs
Feb 13 19:41:59.981169 kernel: smpboot: Max logical packages: 1
Feb 13 19:41:59.981177 kernel: smpboot: Total of 4 processors activated (22358.00 BogoMIPS)
Feb 13 19:41:59.981185 kernel: devtmpfs: initialized
Feb 13 19:41:59.981193 kernel: x86/mm: Memory block size: 128MB
Feb 13 19:41:59.981201 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x00800000-0x00807fff] (32768 bytes)
Feb 13 19:41:59.981213 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x0080b000-0x0080bfff] (4096 bytes)
Feb 13 19:41:59.981221 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x00811000-0x008fffff] (978944 bytes)
Feb 13 19:41:59.981231 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x9cb7f000-0x9cbfefff] (524288 bytes)
Feb 13 19:41:59.981238 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x9ce96000-0x9ce97fff] (8192 bytes)
Feb 13 19:41:59.981246 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x9cf60000-0x9cffffff] (655360 bytes)
Feb 13 19:41:59.981254 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Feb 13 19:41:59.981262 kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear)
Feb 13 19:41:59.981269 kernel: pinctrl core: initialized pinctrl subsystem
Feb 13 19:41:59.981277 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Feb 13 19:41:59.981289 kernel: audit: initializing netlink subsys (disabled)
Feb 13 19:41:59.981297 kernel: audit: type=2000 audit(1739475718.791:1): state=initialized audit_enabled=0 res=1
Feb 13 19:41:59.981305 kernel: thermal_sys: Registered thermal governor 'step_wise'
Feb 13 19:41:59.981316 kernel: thermal_sys: Registered thermal governor 'user_space'
Feb 13 19:41:59.981332 kernel: cpuidle: using governor menu
Feb 13 19:41:59.981340 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Feb 13 19:41:59.981348 kernel: dca service started, version 1.12.1
Feb 13 19:41:59.981356 kernel: PCI: MMCONFIG for domain 0000 [bus 00-ff] at [mem 0xe0000000-0xefffffff] (base 0xe0000000)
Feb 13 19:41:59.981364 kernel: PCI: Using configuration type 1 for base access
Feb 13 19:41:59.981377 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
Feb 13 19:41:59.981385 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Feb 13 19:41:59.981393 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page
Feb 13 19:41:59.981400 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Feb 13 19:41:59.981408 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page
Feb 13 19:41:59.981416 kernel: ACPI: Added _OSI(Module Device)
Feb 13 19:41:59.981423 kernel: ACPI: Added _OSI(Processor Device)
Feb 13 19:41:59.981435 kernel: ACPI: Added _OSI(3.0 _SCP Extensions)
Feb 13 19:41:59.981490 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Feb 13 19:41:59.981522 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Feb 13 19:41:59.981531 kernel: ACPI: _OSC evaluation for CPUs failed, trying _PDC
Feb 13 19:41:59.981550 kernel: ACPI: Interpreter enabled
Feb 13 19:41:59.981566 kernel: ACPI: PM: (supports S0 S3 S5)
Feb 13 19:41:59.981574 kernel: ACPI: Using IOAPIC for interrupt routing
Feb 13 19:41:59.981582 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Feb 13 19:41:59.981590 kernel: PCI: Using E820 reservations for host bridge windows
Feb 13 19:41:59.981610 kernel: ACPI: Enabled 2 GPEs in block 00 to 3F
Feb 13 19:41:59.981656 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Feb 13 19:41:59.984652 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
Feb 13 19:41:59.984945 kernel: acpi PNP0A08:00: _OSC: platform does not support [PCIeHotplug LTR]
Feb 13 19:41:59.985100 kernel: acpi PNP0A08:00: _OSC: OS now controls [PME AER PCIeCapability]
Feb 13 19:42:00.985114 kernel: PCI host bridge to bus 0000:00
Feb 13 19:41:59.985440 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window]
Feb 13 19:41:59.985801 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window]
Feb 13 19:41:59.986107 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
Feb 13 19:41:59.986440 kernel: pci_bus 0000:00: root bus resource [mem 0x9d000000-0xdfffffff window]
Feb 13 19:41:59.986738 kernel: pci_bus 0000:00: root bus resource [mem 0xf0000000-0xfebfffff window]
Feb 13 19:41:59.986973 kernel: pci_bus 0000:00: root bus resource [mem 0x380000000000-0x3807ffffffff window]
Feb 13 19:41:59.987127 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Feb 13 19:41:59.988961 kernel: pci 0000:00:00.0: [8086:29c0] type 00 class 0x060000
Feb 13 19:41:59.989501 kernel: pci 0000:00:01.0: [1234:1111] type 00 class 0x030000
Feb 13 19:41:59.989820 kernel: pci 0000:00:01.0: reg 0x10: [mem 0xc0000000-0xc0ffffff pref]
Feb 13 19:41:59.990192 kernel: pci 0000:00:01.0: reg 0x18: [mem 0xc1044000-0xc1044fff]
Feb 13 19:41:59.990670 kernel: pci 0000:00:01.0: reg 0x30: [mem 0xffff0000-0xffffffff pref]
Feb 13 19:41:59.990801 kernel: pci 0000:00:01.0: BAR 0: assigned to efifb
Feb 13 19:41:59.990941 kernel: pci 0000:00:01.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff]
Feb 13 19:41:59.991387 kernel: pci 0000:00:02.0: [1af4:1005] type 00 class 0x00ff00
Feb 13 19:41:59.991848 kernel: pci 0000:00:02.0: reg 0x10: [io 0x6100-0x611f]
Feb 13 19:41:59.997048 kernel: pci 0000:00:02.0: reg 0x14: [mem 0xc1043000-0xc1043fff]
Feb 13 19:41:59.997567 kernel: pci 0000:00:02.0: reg 0x20: [mem 0x380000000000-0x380000003fff 64bit pref]
Feb 13 19:42:00.000138 kernel: pci 0000:00:03.0: [1af4:1001] type 00 class 0x010000
Feb 13 19:42:00.003633 kernel: pci 0000:00:03.0: reg 0x10: [io 0x6000-0x607f]
Feb 13 19:42:00.007120 kernel: pci 0000:00:03.0: reg 0x14: [mem 0xc1042000-0xc1042fff]
Feb 13 19:42:00.007282 kernel: pci 0000:00:03.0: reg 0x20: [mem 0x380000004000-0x380000007fff 64bit pref]
Feb 13 19:42:00.007490 kernel: pci 0000:00:04.0: [1af4:1000] type 00 class 0x020000
Feb 13 19:42:00.007624 kernel: pci 0000:00:04.0: reg 0x10: [io 0x60e0-0x60ff]
Feb 13 19:42:00.007765 kernel: pci 0000:00:04.0: reg 0x14: [mem 0xc1041000-0xc1041fff]
Feb 13 19:42:00.007901 kernel: pci 0000:00:04.0: reg 0x20: [mem 0x380000008000-0x38000000bfff 64bit pref]
Feb 13 19:42:00.008028 kernel: pci 0000:00:04.0: reg 0x30: [mem 0xfffc0000-0xffffffff pref]
Feb 13 19:42:00.010639 kernel: pci 0000:00:1f.0: [8086:2918] type 00 class 0x060100
Feb 13 19:42:00.010824 kernel: pci 0000:00:1f.0: quirk: [io 0x0600-0x067f] claimed by ICH6 ACPI/GPIO/TCO
Feb 13 19:42:00.011015 kernel: pci 0000:00:1f.0: quirk_ich7_lpc+0x0/0x180 took 16601 usecs
Feb 13 19:42:00.011201 kernel: pci 0000:00:1f.2: [8086:2922] type 00 class 0x010601
Feb 13 19:42:00.011520 kernel: pci 0000:00:1f.2: reg 0x20: [io 0x60c0-0x60df]
Feb 13 19:42:00.011684 kernel: pci 0000:00:1f.2: reg 0x24: [mem 0xc1040000-0xc1040fff]
Feb 13 19:42:00.011877 kernel: pci 0000:00:1f.3: [8086:2930] type 00 class 0x0c0500
Feb 13 19:42:00.012039 kernel: pci 0000:00:1f.3: reg 0x20: [io 0x6080-0x60bf]
Feb 13 19:42:00.012055 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10
Feb 13 19:42:00.012078 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10
Feb 13 19:42:00.012089 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11
Feb 13 19:42:00.012099 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11
Feb 13 19:42:00.012110 kernel: ACPI: PCI: Interrupt link LNKE configured for IRQ 10
Feb 13 19:42:00.012121 kernel: ACPI: PCI: Interrupt link LNKF configured for IRQ 10
Feb 13 19:42:00.012132 kernel: ACPI: PCI: Interrupt link LNKG configured for IRQ 11
Feb 13 19:42:00.012143 kernel: ACPI: PCI: Interrupt link LNKH configured for IRQ 11
Feb 13 19:42:00.012154 kernel: ACPI: PCI: Interrupt link GSIA configured for IRQ 16
Feb 13 19:42:00.012164 kernel: ACPI: PCI: Interrupt link GSIB configured for IRQ 17
Feb 13 19:42:00.012181 kernel: ACPI: PCI: Interrupt link GSIC configured for IRQ 18
Feb 13 19:42:00.012192 kernel: ACPI: PCI: Interrupt link GSID configured for IRQ 19
Feb 13 19:42:00.012203 kernel: ACPI: PCI: Interrupt link GSIE configured for IRQ 20
Feb 13 19:42:00.012214 kernel: ACPI: PCI: Interrupt link GSIF configured for IRQ 21
Feb 13 19:42:00.012225 kernel: ACPI: PCI: Interrupt link GSIG configured for IRQ 22
Feb 13 19:42:00.012236 kernel: ACPI: PCI: Interrupt link GSIH configured for IRQ 23
Feb 13 19:42:00.012247 kernel: iommu: Default domain type: Translated
Feb 13 19:42:00.012258 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
Feb 13 19:42:00.012268 kernel: efivars: Registered efivars operations
Feb 13 19:42:00.012286 kernel: PCI: Using ACPI for IRQ routing
Feb 13 19:42:00.012297 kernel: PCI: pci_cache_line_size set to 64 bytes
Feb 13 19:42:00.012309 kernel: e820: reserve RAM buffer [mem 0x0080b000-0x008fffff]
Feb 13 19:42:00.012319 kernel: e820: reserve RAM buffer [mem 0x00811000-0x008fffff]
Feb 13 19:42:00.012330 kernel: e820: reserve RAM buffer [mem 0x9b314018-0x9bffffff]
Feb 13 19:42:00.012341 kernel: e820: reserve RAM buffer [mem 0x9b351018-0x9bffffff]
Feb 13 19:42:00.012351 kernel: e820: reserve RAM buffer [mem 0x9bd3f000-0x9bffffff]
Feb 13 19:42:00.012362 kernel: e820: reserve RAM buffer [mem 0x9c8ed000-0x9fffffff]
Feb 13 19:42:00.012373 kernel: e820: reserve RAM buffer [mem 0x9ce92000-0x9fffffff]
Feb 13 19:42:00.012393 kernel: e820: reserve RAM buffer [mem 0x9cedc000-0x9fffffff]
Feb 13 19:42:00.012588 kernel: pci 0000:00:01.0: vgaarb: setting as boot VGA device
Feb 13 19:42:00.012752 kernel: pci 0000:00:01.0: vgaarb: bridge control possible
Feb 13 19:42:00.012941 kernel: pci 0000:00:01.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none
Feb 13 19:42:00.012958 kernel: vgaarb: loaded
Feb 13 19:42:00.012969 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0
Feb 13 19:42:00.012979 kernel: hpet0: 3 comparators, 64-bit 100.000000 MHz counter
Feb 13 19:42:00.012989 kernel: clocksource: Switched to clocksource kvm-clock
Feb 13 19:42:00.013010 kernel: VFS: Disk quotas dquot_6.6.0
Feb 13 19:42:00.013021 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Feb 13 19:42:00.013031 kernel: pnp: PnP ACPI init
Feb 13 19:42:00.013272 kernel: system 00:05: [mem 0xe0000000-0xefffffff window] has been reserved
Feb 13 19:42:00.013290 kernel: pnp: PnP ACPI: found 6 devices
Feb 13 19:42:00.013301 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Feb 13 19:42:00.013353 kernel: NET: Registered PF_INET protocol family
Feb 13 19:42:00.013370 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
Feb 13 19:42:00.013386 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
Feb 13 19:42:00.013396 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Feb 13 19:42:00.013410 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
Feb 13 19:42:00.013421 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear)
Feb 13 19:42:00.013431 kernel: TCP: Hash tables configured (established 32768 bind 32768)
Feb 13 19:42:00.013478 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
Feb 13 19:42:00.013490 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
Feb 13 19:42:00.013500 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Feb 13 19:42:00.013511 kernel: NET: Registered PF_XDP protocol family
Feb 13 19:42:00.013696 kernel: pci 0000:00:04.0: can't claim BAR 6 [mem 0xfffc0000-0xffffffff pref]: no compatible bridge window
Feb 13 19:42:00.013858 kernel: pci 0000:00:04.0: BAR 6: assigned [mem 0x9d000000-0x9d03ffff pref]
Feb 13 19:42:00.014015 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window]
Feb 13 19:42:00.014167 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window]
Feb 13 19:42:00.014321 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window]
Feb 13 19:42:00.014534 kernel: pci_bus 0000:00: resource 7 [mem 0x9d000000-0xdfffffff window]
Feb 13 19:42:00.014708 kernel: pci_bus 0000:00: resource 8 [mem 0xf0000000-0xfebfffff window]
Feb 13 19:42:00.014923 kernel: pci_bus 0000:00: resource 9 [mem 0x380000000000-0x3807ffffffff window]
Feb 13 19:42:00.014951 kernel: PCI: CLS 0 bytes, default 64
Feb 13 19:42:00.014964 kernel: Initialise system trusted keyrings
Feb 13 19:42:00.014975 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
Feb 13 19:42:00.014986 kernel: Key type asymmetric registered
Feb 13 19:42:00.014997 kernel: Asymmetric key parser 'x509' registered
Feb 13 19:42:00.015009 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 251)
Feb 13 19:42:00.015020 kernel: io scheduler mq-deadline registered
Feb 13 19:42:00.015031 kernel: io scheduler kyber registered
Feb 13 19:42:00.015042 kernel: io scheduler bfq registered
Feb 13 19:42:00.015057 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00
Feb 13 19:42:00.015070 kernel: ACPI: \_SB_.GSIG: Enabled at IRQ 22
Feb 13 19:42:00.015084 kernel: ACPI: \_SB_.GSIH: Enabled at IRQ 23
Feb 13 19:42:00.015100 kernel: ACPI: \_SB_.GSIE: Enabled at IRQ 20
Feb 13 19:42:00.015114 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Feb 13 19:42:00.015125 kernel: 00:03: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
Feb 13 19:42:00.015140 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12
Feb 13 19:42:00.015151 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1
Feb 13 19:42:00.015163 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12
Feb 13 19:42:00.015361 kernel: rtc_cmos 00:04: RTC can wake from S4
Feb 13 19:42:00.015383 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0
Feb 13 19:42:00.015578 kernel: rtc_cmos 00:04: registered as rtc0
Feb 13 19:42:00.015734 kernel: rtc_cmos 00:04: setting system clock to 2025-02-13T19:41:59 UTC (1739475719)
Feb 13 19:42:00.015895 kernel: rtc_cmos 00:04: alarms up to one day, y3k, 242 bytes nvram
Feb 13 19:42:00.015911 kernel: amd_pstate: the _CPC object is not present in SBIOS or ACPI disabled
Feb 13 19:42:00.015923 kernel: efifb: probing for efifb
Feb 13 19:42:00.015934 kernel: efifb: framebuffer at 0xc0000000, using 4000k, total 4000k
Feb 13 19:42:00.015946 kernel: efifb: mode is 1280x800x32, linelength=5120, pages=1
Feb 13 19:42:00.015957 kernel: efifb: scrolling: redraw
Feb 13 19:42:00.015978 kernel: efifb: Truecolor: size=8:8:8:8, shift=24:16:8:0
Feb 13 19:42:00.015989 kernel: Console: switching to colour frame buffer device 160x50
Feb 13 19:42:00.016000 kernel: fb0: EFI VGA frame buffer device
Feb 13 19:42:00.016014 kernel: pstore: Using crash dump compression: deflate
Feb 13 19:42:00.016025 kernel: pstore: Registered efi_pstore as persistent store backend
Feb 13 19:42:00.016036 kernel: NET: Registered PF_INET6 protocol family
Feb 13 19:42:00.016047 kernel: Segment Routing with IPv6
Feb 13 19:42:00.016058 kernel: In-situ OAM (IOAM) with IPv6
Feb 13 19:42:00.016069 kernel: NET: Registered PF_PACKET protocol family
Feb 13 19:42:00.016080 kernel: Key type dns_resolver registered
Feb 13 19:42:00.016091 kernel: IPI shorthand broadcast: enabled
Feb 13 19:42:00.016102 kernel: sched_clock: Marking stable (1340002960, 229942653)->(1616446687, -46501074)
Feb 13 19:42:00.016113 kernel: registered taskstats version 1
Feb 13 19:42:00.016129 kernel: Loading compiled-in X.509 certificates
Feb 13 19:42:00.016140 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.74-flatcar: 0cc219a306b9e46e583adebba1820decbdc4307b'
Feb 13 19:42:00.016155 kernel: Key type .fscrypt registered
Feb 13 19:42:00.016165 kernel: Key type fscrypt-provisioning registered
Feb 13 19:42:00.016177 kernel: ima: No TPM chip
found, activating TPM-bypass! Feb 13 19:42:00.016188 kernel: ima: Allocated hash algorithm: sha1 Feb 13 19:42:00.016199 kernel: ima: No architecture policies found Feb 13 19:42:00.016210 kernel: clk: Disabling unused clocks Feb 13 19:42:00.016231 kernel: Freeing unused kernel image (initmem) memory: 42976K Feb 13 19:42:00.016242 kernel: Write protecting the kernel read-only data: 36864k Feb 13 19:42:00.016253 kernel: Freeing unused kernel image (rodata/data gap) memory: 1840K Feb 13 19:42:00.016264 kernel: Run /init as init process Feb 13 19:42:00.016275 kernel: with arguments: Feb 13 19:42:00.016286 kernel: /init Feb 13 19:42:00.016297 kernel: with environment: Feb 13 19:42:00.016313 kernel: HOME=/ Feb 13 19:42:00.016328 kernel: TERM=linux Feb 13 19:42:00.016346 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a Feb 13 19:42:00.016372 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Feb 13 19:42:00.016390 systemd[1]: Detected virtualization kvm. Feb 13 19:42:00.016410 systemd[1]: Detected architecture x86-64. Feb 13 19:42:00.016425 systemd[1]: Running in initrd. Feb 13 19:42:00.016437 systemd[1]: No hostname configured, using default hostname. Feb 13 19:42:00.016469 systemd[1]: Hostname set to . Feb 13 19:42:00.016481 systemd[1]: Initializing machine ID from VM UUID. Feb 13 19:42:00.016501 systemd[1]: Queued start job for default target initrd.target. Feb 13 19:42:00.016513 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Feb 13 19:42:00.016525 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. 
Feb 13 19:42:00.016540 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM... Feb 13 19:42:00.016551 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Feb 13 19:42:00.016565 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT... Feb 13 19:42:00.016578 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A... Feb 13 19:42:00.016602 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132... Feb 13 19:42:00.016615 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr... Feb 13 19:42:00.016628 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Feb 13 19:42:00.016640 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Feb 13 19:42:00.016651 systemd[1]: Reached target paths.target - Path Units. Feb 13 19:42:00.016663 systemd[1]: Reached target slices.target - Slice Units. Feb 13 19:42:00.016674 systemd[1]: Reached target swap.target - Swaps. Feb 13 19:42:00.016686 systemd[1]: Reached target timers.target - Timer Units. Feb 13 19:42:00.016706 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket. Feb 13 19:42:00.016718 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket. Feb 13 19:42:00.016730 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). Feb 13 19:42:00.016741 systemd[1]: Listening on systemd-journald.socket - Journal Socket. Feb 13 19:42:00.016753 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Feb 13 19:42:00.016765 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Feb 13 19:42:00.016777 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. 
Feb 13 19:42:00.016789 systemd[1]: Reached target sockets.target - Socket Units. Feb 13 19:42:00.016808 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup... Feb 13 19:42:00.016820 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Feb 13 19:42:00.016832 systemd[1]: Finished network-cleanup.service - Network Cleanup. Feb 13 19:42:00.016843 systemd[1]: Starting systemd-fsck-usr.service... Feb 13 19:42:00.016867 systemd[1]: Starting systemd-journald.service - Journal Service... Feb 13 19:42:00.016882 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Feb 13 19:42:00.016903 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Feb 13 19:42:00.016917 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup. Feb 13 19:42:00.016929 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Feb 13 19:42:00.016949 systemd[1]: Finished systemd-fsck-usr.service. Feb 13 19:42:00.016998 systemd-journald[195]: Collecting audit messages is disabled. Feb 13 19:42:00.017038 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Feb 13 19:42:00.017050 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Feb 13 19:42:00.017063 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Feb 13 19:42:00.017075 systemd-journald[195]: Journal started Feb 13 19:42:00.017108 systemd-journald[195]: Runtime Journal (/run/log/journal/6cb9f4c948114e18b15afc0047b085ef) is 6.0M, max 48.3M, 42.2M free. Feb 13 19:42:00.016962 systemd-modules-load[196]: Inserted module 'overlay' Feb 13 19:42:00.028517 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Feb 13 19:42:00.033137 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... 
Feb 13 19:42:00.033213 systemd[1]: Started systemd-journald.service - Journal Service. Feb 13 19:42:00.041479 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Feb 13 19:42:00.054546 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Feb 13 19:42:00.055271 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Feb 13 19:42:00.065303 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. Feb 13 19:42:00.068484 kernel: Bridge firewalling registered Feb 13 19:42:00.068832 systemd-modules-load[196]: Inserted module 'br_netfilter' Feb 13 19:42:00.071767 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... Feb 13 19:42:00.072333 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Feb 13 19:42:00.076188 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Feb 13 19:42:00.080995 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Feb 13 19:42:00.100600 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Feb 13 19:42:00.103332 dracut-cmdline[222]: dracut-dracut-053 Feb 13 19:42:00.109004 dracut-cmdline[222]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=ed9b5d8ea73d2e47b8decea8124089e04dd398ef43013c1b1a5809314044b1c3 Feb 13 19:42:00.115862 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Feb 13 19:42:00.155132 systemd-resolved[239]: Positive Trust Anchors: Feb 13 19:42:00.155166 systemd-resolved[239]: . 
IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Feb 13 19:42:00.155210 systemd-resolved[239]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Feb 13 19:42:00.166883 systemd-resolved[239]: Defaulting to hostname 'linux'. Feb 13 19:42:00.168754 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Feb 13 19:42:00.169198 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Feb 13 19:42:00.225491 kernel: SCSI subsystem initialized Feb 13 19:42:00.239507 kernel: Loading iSCSI transport class v2.0-870. Feb 13 19:42:00.253525 kernel: iscsi: registered transport (tcp) Feb 13 19:42:00.275496 kernel: iscsi: registered transport (qla4xxx) Feb 13 19:42:00.275588 kernel: QLogic iSCSI HBA Driver Feb 13 19:42:00.338649 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. Feb 13 19:42:00.344742 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... Feb 13 19:42:00.373545 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. 
Feb 13 19:42:00.373631 kernel: device-mapper: uevent: version 1.0.3 Feb 13 19:42:00.374625 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com Feb 13 19:42:00.417504 kernel: raid6: avx2x4 gen() 25234 MB/s Feb 13 19:42:00.434505 kernel: raid6: avx2x2 gen() 20683 MB/s Feb 13 19:42:00.451871 kernel: raid6: avx2x1 gen() 16473 MB/s Feb 13 19:42:00.451939 kernel: raid6: using algorithm avx2x4 gen() 25234 MB/s Feb 13 19:42:00.469821 kernel: raid6: .... xor() 6066 MB/s, rmw enabled Feb 13 19:42:00.469913 kernel: raid6: using avx2x2 recovery algorithm Feb 13 19:42:00.492537 kernel: xor: automatically using best checksumming function avx Feb 13 19:42:00.670493 kernel: Btrfs loaded, zoned=no, fsverity=no Feb 13 19:42:00.688779 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. Feb 13 19:42:00.701667 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Feb 13 19:42:00.720729 systemd-udevd[415]: Using default interface naming scheme 'v255'. Feb 13 19:42:00.726419 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Feb 13 19:42:00.742658 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... Feb 13 19:42:00.760333 dracut-pre-trigger[424]: rd.md=0: removing MD RAID activation Feb 13 19:42:00.799716 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. Feb 13 19:42:00.814709 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Feb 13 19:42:00.898667 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Feb 13 19:42:00.911705 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... Feb 13 19:42:00.927270 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. Feb 13 19:42:00.928959 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. 
Feb 13 19:42:00.930916 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Feb 13 19:42:00.935277 systemd[1]: Reached target remote-fs.target - Remote File Systems. Feb 13 19:42:00.941471 kernel: virtio_blk virtio1: 4/0/0 default/read/poll queues Feb 13 19:42:00.974624 kernel: virtio_blk virtio1: [vda] 19775488 512-byte logical blocks (10.1 GB/9.43 GiB) Feb 13 19:42:00.974836 kernel: cryptd: max_cpu_qlen set to 1000 Feb 13 19:42:00.974869 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. Feb 13 19:42:00.974884 kernel: GPT:9289727 != 19775487 Feb 13 19:42:00.974897 kernel: GPT:Alternate GPT header not at the end of the disk. Feb 13 19:42:00.974914 kernel: GPT:9289727 != 19775487 Feb 13 19:42:00.974927 kernel: GPT: Use GNU Parted to correct GPT errors. Feb 13 19:42:00.974941 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Feb 13 19:42:00.974954 kernel: AVX2 version of gcm_enc/dec engaged. Feb 13 19:42:00.974968 kernel: AES CTR mode by8 optimization enabled Feb 13 19:42:00.946641 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... Feb 13 19:42:00.956682 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Feb 13 19:42:00.956826 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Feb 13 19:42:00.985700 kernel: libata version 3.00 loaded. Feb 13 19:42:00.958379 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... 
Feb 13 19:42:00.992542 kernel: ahci 0000:00:1f.2: version 3.0 Feb 13 19:42:01.028988 kernel: ACPI: \_SB_.GSIA: Enabled at IRQ 16 Feb 13 19:42:01.029006 kernel: BTRFS: device fsid e9c87d9f-3864-4b45-9be4-80a5397f1fc6 devid 1 transid 38 /dev/vda3 scanned by (udev-worker) (472) Feb 13 19:42:01.029018 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/vda6 scanned by (udev-worker) (470) Feb 13 19:42:01.029029 kernel: ahci 0000:00:1f.2: AHCI 0001.0000 32 slots 6 ports 1.5 Gbps 0x3f impl SATA mode Feb 13 19:42:01.029190 kernel: ahci 0000:00:1f.2: flags: 64bit ncq only Feb 13 19:42:01.029348 kernel: scsi host0: ahci Feb 13 19:42:01.029542 kernel: scsi host1: ahci Feb 13 19:42:01.029706 kernel: scsi host2: ahci Feb 13 19:42:01.029858 kernel: scsi host3: ahci Feb 13 19:42:01.030026 kernel: scsi host4: ahci Feb 13 19:42:01.030190 kernel: scsi host5: ahci Feb 13 19:42:01.030350 kernel: ata1: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040100 irq 34 Feb 13 19:42:01.030362 kernel: ata2: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040180 irq 34 Feb 13 19:42:01.030373 kernel: ata3: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040200 irq 34 Feb 13 19:42:01.030384 kernel: ata4: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040280 irq 34 Feb 13 19:42:01.030394 kernel: ata5: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040300 irq 34 Feb 13 19:42:01.030404 kernel: ata6: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040380 irq 34 Feb 13 19:42:00.959909 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Feb 13 19:42:00.960118 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Feb 13 19:42:00.965079 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... Feb 13 19:42:00.982009 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Feb 13 19:42:00.983749 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. 
Feb 13 19:42:01.013550 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Feb 13 19:42:01.025224 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM. Feb 13 19:42:01.042371 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT. Feb 13 19:42:01.052553 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A. Feb 13 19:42:01.054231 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132. Feb 13 19:42:01.060550 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. Feb 13 19:42:01.080636 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... Feb 13 19:42:01.082073 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Feb 13 19:42:01.082140 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Feb 13 19:42:01.085047 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... Feb 13 19:42:01.087312 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Feb 13 19:42:01.095161 disk-uuid[560]: Primary Header is updated. Feb 13 19:42:01.095161 disk-uuid[560]: Secondary Entries is updated. Feb 13 19:42:01.095161 disk-uuid[560]: Secondary Header is updated. Feb 13 19:42:01.098579 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Feb 13 19:42:01.100478 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Feb 13 19:42:01.111850 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Feb 13 19:42:01.120682 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Feb 13 19:42:01.147254 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. 
Feb 13 19:42:01.339221 kernel: ata6: SATA link down (SStatus 0 SControl 300) Feb 13 19:42:01.339314 kernel: ata5: SATA link down (SStatus 0 SControl 300) Feb 13 19:42:01.339329 kernel: ata4: SATA link down (SStatus 0 SControl 300) Feb 13 19:42:01.339343 kernel: ata3: SATA link up 1.5 Gbps (SStatus 113 SControl 300) Feb 13 19:42:01.340480 kernel: ata2: SATA link down (SStatus 0 SControl 300) Feb 13 19:42:01.341491 kernel: ata1: SATA link down (SStatus 0 SControl 300) Feb 13 19:42:01.342484 kernel: ata3.00: ATAPI: QEMU DVD-ROM, 2.5+, max UDMA/100 Feb 13 19:42:01.343866 kernel: ata3.00: applying bridge limits Feb 13 19:42:01.343901 kernel: ata3.00: configured for UDMA/100 Feb 13 19:42:01.344489 kernel: scsi 2:0:0:0: CD-ROM QEMU QEMU DVD-ROM 2.5+ PQ: 0 ANSI: 5 Feb 13 19:42:01.392484 kernel: sr 2:0:0:0: [sr0] scsi3-mmc drive: 4x/4x cd/rw xa/form2 tray Feb 13 19:42:01.406640 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20 Feb 13 19:42:01.406687 kernel: sr 2:0:0:0: Attached scsi CD-ROM sr0 Feb 13 19:42:02.104482 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Feb 13 19:42:02.104776 disk-uuid[562]: The operation has completed successfully. Feb 13 19:42:02.136917 systemd[1]: disk-uuid.service: Deactivated successfully. Feb 13 19:42:02.137072 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. Feb 13 19:42:02.168689 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr... Feb 13 19:42:02.174688 sh[599]: Success Feb 13 19:42:02.188501 kernel: device-mapper: verity: sha256 using implementation "sha256-ni" Feb 13 19:42:02.227357 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr. Feb 13 19:42:02.242599 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr... Feb 13 19:42:02.245438 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr. 
Feb 13 19:42:02.282510 kernel: BTRFS info (device dm-0): first mount of filesystem e9c87d9f-3864-4b45-9be4-80a5397f1fc6 Feb 13 19:42:02.282570 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm Feb 13 19:42:02.284420 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead Feb 13 19:42:02.284445 kernel: BTRFS info (device dm-0): disabling log replay at mount time Feb 13 19:42:02.285174 kernel: BTRFS info (device dm-0): using free space tree Feb 13 19:42:02.290908 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr. Feb 13 19:42:02.292118 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. Feb 13 19:42:02.300915 systemd[1]: Starting ignition-setup.service - Ignition (setup)... Feb 13 19:42:02.303902 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... Feb 13 19:42:02.315895 kernel: BTRFS info (device vda6): first mount of filesystem 84d576e4-038f-4c76-aa8e-6cfd81e812ea Feb 13 19:42:02.315952 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Feb 13 19:42:02.315967 kernel: BTRFS info (device vda6): using free space tree Feb 13 19:42:02.320481 kernel: BTRFS info (device vda6): auto enabling async discard Feb 13 19:42:02.336010 systemd[1]: mnt-oem.mount: Deactivated successfully. Feb 13 19:42:02.339331 kernel: BTRFS info (device vda6): last unmount of filesystem 84d576e4-038f-4c76-aa8e-6cfd81e812ea Feb 13 19:42:02.349810 systemd[1]: Finished ignition-setup.service - Ignition (setup). Feb 13 19:42:02.357660 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... 
Feb 13 19:42:02.479022 ignition[695]: Ignition 2.20.0 Feb 13 19:42:02.479039 ignition[695]: Stage: fetch-offline Feb 13 19:42:02.479096 ignition[695]: no configs at "/usr/lib/ignition/base.d" Feb 13 19:42:02.479108 ignition[695]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Feb 13 19:42:02.479230 ignition[695]: parsed url from cmdline: "" Feb 13 19:42:02.479235 ignition[695]: no config URL provided Feb 13 19:42:02.479241 ignition[695]: reading system config file "/usr/lib/ignition/user.ign" Feb 13 19:42:02.479251 ignition[695]: no config at "/usr/lib/ignition/user.ign" Feb 13 19:42:02.479283 ignition[695]: op(1): [started] loading QEMU firmware config module Feb 13 19:42:02.479289 ignition[695]: op(1): executing: "modprobe" "qemu_fw_cfg" Feb 13 19:42:02.494443 ignition[695]: op(1): [finished] loading QEMU firmware config module Feb 13 19:42:02.494490 ignition[695]: QEMU firmware config was not found. Ignoring... Feb 13 19:42:02.497142 ignition[695]: parsing config with SHA512: 17de37adddfa4341e80a176a3bf87df05ed0f82ddca2d0102d10d7d90c937dd9280b984af0f2351f610636473d87d2ba4238471c932c40de4f9203c3e9a9d7de Feb 13 19:42:02.500078 unknown[695]: fetched base config from "system" Feb 13 19:42:02.500823 unknown[695]: fetched user config from "qemu" Feb 13 19:42:02.501090 ignition[695]: fetch-offline: fetch-offline passed Feb 13 19:42:02.501174 ignition[695]: Ignition finished successfully Feb 13 19:42:02.504536 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Feb 13 19:42:02.516594 systemd[1]: Starting systemd-networkd.service - Network Configuration... Feb 13 19:42:02.517035 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). 
Feb 13 19:42:02.542187 systemd-networkd[788]: lo: Link UP Feb 13 19:42:02.542199 systemd-networkd[788]: lo: Gained carrier Feb 13 19:42:02.544094 systemd-networkd[788]: Enumeration completed Feb 13 19:42:02.544211 systemd[1]: Started systemd-networkd.service - Network Configuration. Feb 13 19:42:02.544542 systemd-networkd[788]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Feb 13 19:42:02.544546 systemd-networkd[788]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Feb 13 19:42:02.550905 systemd-networkd[788]: eth0: Link UP Feb 13 19:42:02.550910 systemd-networkd[788]: eth0: Gained carrier Feb 13 19:42:02.550920 systemd-networkd[788]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Feb 13 19:42:02.551003 systemd[1]: Reached target network.target - Network. Feb 13 19:42:02.553643 systemd[1]: ignition-fetch.service - Ignition (fetch) was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json). Feb 13 19:42:02.564681 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... Feb 13 19:42:02.569529 systemd-networkd[788]: eth0: DHCPv4 address 10.0.0.27/16, gateway 10.0.0.1 acquired from 10.0.0.1 Feb 13 19:42:02.579493 ignition[791]: Ignition 2.20.0 Feb 13 19:42:02.579505 ignition[791]: Stage: kargs Feb 13 19:42:02.579710 ignition[791]: no configs at "/usr/lib/ignition/base.d" Feb 13 19:42:02.579723 ignition[791]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Feb 13 19:42:02.580402 ignition[791]: kargs: kargs passed Feb 13 19:42:02.580474 ignition[791]: Ignition finished successfully Feb 13 19:42:02.603203 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). Feb 13 19:42:02.613719 systemd[1]: Starting ignition-disks.service - Ignition (disks)... 
Feb 13 19:42:02.631365 ignition[800]: Ignition 2.20.0 Feb 13 19:42:02.631387 ignition[800]: Stage: disks Feb 13 19:42:02.631601 ignition[800]: no configs at "/usr/lib/ignition/base.d" Feb 13 19:42:02.631614 ignition[800]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Feb 13 19:42:02.637526 ignition[800]: disks: disks passed Feb 13 19:42:02.638270 ignition[800]: Ignition finished successfully Feb 13 19:42:02.641384 systemd[1]: Finished ignition-disks.service - Ignition (disks). Feb 13 19:42:02.642422 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. Feb 13 19:42:02.644128 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Feb 13 19:42:02.646684 systemd[1]: Reached target local-fs.target - Local File Systems. Feb 13 19:42:02.647051 systemd[1]: Reached target sysinit.target - System Initialization. Feb 13 19:42:02.647429 systemd[1]: Reached target basic.target - Basic System. Feb 13 19:42:02.670650 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... Feb 13 19:42:02.685187 systemd-resolved[239]: Detected conflict on linux IN A 10.0.0.27 Feb 13 19:42:02.685203 systemd-resolved[239]: Hostname conflict, changing published hostname from 'linux' to 'linux4'. Feb 13 19:42:02.686354 systemd-fsck[810]: ROOT: clean, 14/553520 files, 52654/553472 blocks Feb 13 19:42:02.802892 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. Feb 13 19:42:02.815691 systemd[1]: Mounting sysroot.mount - /sysroot... Feb 13 19:42:02.920500 kernel: EXT4-fs (vda9): mounted filesystem c5993b0e-9201-4b44-aa01-79dc9d6c9fc9 r/w with ordered data mode. Quota mode: none. Feb 13 19:42:02.921089 systemd[1]: Mounted sysroot.mount - /sysroot. Feb 13 19:42:02.922045 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. Feb 13 19:42:02.938639 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... 
Feb 13 19:42:02.944430 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
Feb 13 19:42:02.947293 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met.
Feb 13 19:42:02.947392 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Feb 13 19:42:02.953818 kernel: BTRFS: device label OEM devid 1 transid 15 /dev/vda6 scanned by mount (818)
Feb 13 19:42:02.947432 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
Feb 13 19:42:02.958113 kernel: BTRFS info (device vda6): first mount of filesystem 84d576e4-038f-4c76-aa8e-6cfd81e812ea
Feb 13 19:42:02.958137 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Feb 13 19:42:02.958155 kernel: BTRFS info (device vda6): using free space tree
Feb 13 19:42:02.961485 kernel: BTRFS info (device vda6): auto enabling async discard
Feb 13 19:42:02.974000 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Feb 13 19:42:02.979239 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
Feb 13 19:42:02.980829 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
Feb 13 19:42:03.006994 systemd-resolved[239]: Detected conflict on linux4 IN A 10.0.0.27
Feb 13 19:42:03.007011 systemd-resolved[239]: Hostname conflict, changing published hostname from 'linux4' to 'linux7'.
Feb 13 19:42:03.045041 initrd-setup-root[842]: cut: /sysroot/etc/passwd: No such file or directory
Feb 13 19:42:03.054805 initrd-setup-root[849]: cut: /sysroot/etc/group: No such file or directory
Feb 13 19:42:03.059973 initrd-setup-root[856]: cut: /sysroot/etc/shadow: No such file or directory
Feb 13 19:42:03.065578 initrd-setup-root[863]: cut: /sysroot/etc/gshadow: No such file or directory
Feb 13 19:42:03.168413 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
Feb 13 19:42:03.195571 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
Feb 13 19:42:03.197025 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
Feb 13 19:42:03.212471 kernel: BTRFS info (device vda6): last unmount of filesystem 84d576e4-038f-4c76-aa8e-6cfd81e812ea
Feb 13 19:42:03.228016 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
Feb 13 19:42:03.282678 systemd[1]: sysroot-oem.mount: Deactivated successfully.
Feb 13 19:42:03.342419 ignition[935]: INFO : Ignition 2.20.0
Feb 13 19:42:03.342419 ignition[935]: INFO : Stage: mount
Feb 13 19:42:03.344436 ignition[935]: INFO : no configs at "/usr/lib/ignition/base.d"
Feb 13 19:42:03.344436 ignition[935]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Feb 13 19:42:03.344436 ignition[935]: INFO : mount: mount passed
Feb 13 19:42:03.344436 ignition[935]: INFO : Ignition finished successfully
Feb 13 19:42:03.347000 systemd[1]: Finished ignition-mount.service - Ignition (mount).
Feb 13 19:42:03.352599 systemd[1]: Starting ignition-files.service - Ignition (files)...
Feb 13 19:42:03.363076 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Feb 13 19:42:03.378469 kernel: BTRFS: device label OEM devid 1 transid 16 /dev/vda6 scanned by mount (945)
Feb 13 19:42:03.383426 kernel: BTRFS info (device vda6): first mount of filesystem 84d576e4-038f-4c76-aa8e-6cfd81e812ea
Feb 13 19:42:03.383487 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Feb 13 19:42:03.383499 kernel: BTRFS info (device vda6): using free space tree
Feb 13 19:42:03.387460 kernel: BTRFS info (device vda6): auto enabling async discard
Feb 13 19:42:03.389198 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Feb 13 19:42:03.415836 ignition[962]: INFO : Ignition 2.20.0
Feb 13 19:42:03.415836 ignition[962]: INFO : Stage: files
Feb 13 19:42:03.417667 ignition[962]: INFO : no configs at "/usr/lib/ignition/base.d"
Feb 13 19:42:03.417667 ignition[962]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Feb 13 19:42:03.420175 ignition[962]: DEBUG : files: compiled without relabeling support, skipping
Feb 13 19:42:03.422020 ignition[962]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Feb 13 19:42:03.422020 ignition[962]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Feb 13 19:42:03.425670 ignition[962]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Feb 13 19:42:03.427159 ignition[962]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Feb 13 19:42:03.428906 unknown[962]: wrote ssh authorized keys file for user: core
Feb 13 19:42:03.430181 ignition[962]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Feb 13 19:42:03.432689 ignition[962]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/home/core/install.sh"
Feb 13 19:42:03.434778 ignition[962]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/home/core/install.sh"
Feb 13 19:42:03.439329 ignition[962]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/etc/flatcar/update.conf"
Feb 13 19:42:03.441604 ignition[962]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Feb 13 19:42:03.443461 ignition[962]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.0-x86-64.raw"
Feb 13 19:42:03.446253 ignition[962]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.0-x86-64.raw"
Feb 13 19:42:03.446253 ignition[962]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.0-x86-64.raw"
Feb 13 19:42:03.451107 ignition[962]: INFO : files: createFilesystemsFiles: createFiles: op(6): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.32.0-x86-64.raw: attempt #1
Feb 13 19:42:03.846621 ignition[962]: INFO : files: createFilesystemsFiles: createFiles: op(6): GET result: OK
Feb 13 19:42:04.230151 ignition[962]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.0-x86-64.raw"
Feb 13 19:42:04.230151 ignition[962]: INFO : files: op(7): [started] processing unit "coreos-metadata.service"
Feb 13 19:42:04.240403 ignition[962]: INFO : files: op(7): op(8): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
Feb 13 19:42:04.240403 ignition[962]: INFO : files: op(7): op(8): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
Feb 13 19:42:04.240403 ignition[962]: INFO : files: op(7): [finished] processing unit "coreos-metadata.service"
Feb 13 19:42:04.240403 ignition[962]: INFO : files: op(9): [started] setting preset to disabled for "coreos-metadata.service"
Feb 13 19:42:04.285462 ignition[962]: INFO : files: op(9): op(a): [started] removing enablement symlink(s) for "coreos-metadata.service"
Feb 13 19:42:04.295131 ignition[962]: INFO : files: op(9): op(a): [finished] removing enablement symlink(s) for "coreos-metadata.service"
Feb 13 19:42:04.311429 ignition[962]: INFO : files: op(9): [finished] setting preset to disabled for "coreos-metadata.service"
Feb 13 19:42:04.311429 ignition[962]: INFO : files: createResultFile: createFiles: op(b): [started] writing file "/sysroot/etc/.ignition-result.json"
Feb 13 19:42:04.311429 ignition[962]: INFO : files: createResultFile: createFiles: op(b): [finished] writing file "/sysroot/etc/.ignition-result.json"
Feb 13 19:42:04.311429 ignition[962]: INFO : files: files passed
Feb 13 19:42:04.311429 ignition[962]: INFO : Ignition finished successfully
Feb 13 19:42:04.321927 systemd[1]: Finished ignition-files.service - Ignition (files).
Feb 13 19:42:04.340832 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
Feb 13 19:42:04.344499 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
Feb 13 19:42:04.349269 systemd[1]: ignition-quench.service: Deactivated successfully.
Feb 13 19:42:04.349481 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
Feb 13 19:42:04.356958 initrd-setup-root-after-ignition[992]: grep: /sysroot/oem/oem-release: No such file or directory
Feb 13 19:42:04.362442 initrd-setup-root-after-ignition[994]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Feb 13 19:42:04.362442 initrd-setup-root-after-ignition[994]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
Feb 13 19:42:04.373003 initrd-setup-root-after-ignition[998]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Feb 13 19:42:04.365695 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
Feb 13 19:42:04.369971 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
Feb 13 19:42:04.386607 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
Feb 13 19:42:04.422775 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Feb 13 19:42:04.425151 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
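[Annotation] The Ignition "files" stage logged above (create/modify user "core", add its SSH keys, write /home/core/install.sh and /etc/flatcar/update.conf, link /etc/extensions/kubernetes.raw to the downloaded sysext image, and set "coreos-metadata.service" to disabled) corresponds to an Ignition spec-v3 config roughly like the following. This is a hypothetical reconstruction: only the paths, the download URL, and the unit name come from the log; the SSH key and the file contents are placeholders.

```json
{
  "ignition": { "version": "3.4.0" },
  "passwd": {
    "users": [
      { "name": "core",
        "sshAuthorizedKeys": ["ssh-ed25519 AAAA...placeholder"] }
    ]
  },
  "storage": {
    "files": [
      { "path": "/home/core/install.sh", "mode": 493,
        "contents": { "source": "data:,%23!%2Fbin%2Fbash%0A" } },
      { "path": "/etc/flatcar/update.conf",
        "contents": { "source": "data:,REBOOT_STRATEGY%3Doff%0A" } },
      { "path": "/opt/extensions/kubernetes/kubernetes-v1.32.0-x86-64.raw",
        "contents": { "source": "https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.32.0-x86-64.raw" } }
    ],
    "links": [
      { "path": "/etc/extensions/kubernetes.raw", "hard": false,
        "target": "/opt/extensions/kubernetes/kubernetes-v1.32.0-x86-64.raw" }
    ]
  },
  "systemd": {
    "units": [
      { "name": "coreos-metadata.service", "enabled": false }
    ]
  }
}
```

The "mode": 493 is decimal for 0755; the data: URLs ("#!/bin/bash", "REBOOT_STRATEGY=off") are illustrative stand-ins for whatever the real config shipped.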
Feb 13 19:42:04.427948 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
Feb 13 19:42:04.445441 systemd[1]: Reached target initrd.target - Initrd Default Target.
Feb 13 19:42:04.447732 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
Feb 13 19:42:04.495618 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
Feb 13 19:42:04.537818 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Feb 13 19:42:04.553654 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
Feb 13 19:42:04.582992 systemd-networkd[788]: eth0: Gained IPv6LL
Feb 13 19:42:04.596113 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
Feb 13 19:42:04.637501 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
Feb 13 19:42:04.638306 systemd[1]: Stopped target timers.target - Timer Units.
Feb 13 19:42:04.638983 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Feb 13 19:42:04.639110 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Feb 13 19:42:04.703541 systemd[1]: Stopped target initrd.target - Initrd Default Target.
Feb 13 19:42:04.704285 systemd[1]: Stopped target basic.target - Basic System.
Feb 13 19:42:04.704757 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
Feb 13 19:42:04.705204 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup.
Feb 13 19:42:04.705902 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
Feb 13 19:42:04.706363 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
Feb 13 19:42:04.707071 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems.
Feb 13 19:42:04.707564 systemd[1]: Stopped target sysinit.target - System Initialization.
Feb 13 19:42:04.708233 systemd[1]: Stopped target local-fs.target - Local File Systems.
Feb 13 19:42:04.708914 systemd[1]: Stopped target swap.target - Swaps.
Feb 13 19:42:04.709355 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Feb 13 19:42:04.709504 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook.
Feb 13 19:42:04.762052 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes.
Feb 13 19:42:04.763030 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Feb 13 19:42:04.763671 systemd[1]: clevis-luks-askpass.path: Deactivated successfully.
Feb 13 19:42:04.763865 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Feb 13 19:42:04.769500 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Feb 13 19:42:04.769718 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook.
Feb 13 19:42:04.770547 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
Feb 13 19:42:04.770671 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline).
Feb 13 19:42:04.776301 systemd[1]: Stopped target paths.target - Path Units.
Feb 13 19:42:04.776901 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Feb 13 19:42:04.821608 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Feb 13 19:42:04.825628 systemd[1]: Stopped target slices.target - Slice Units.
Feb 13 19:42:04.826185 systemd[1]: Stopped target sockets.target - Socket Units.
Feb 13 19:42:04.828469 systemd[1]: iscsid.socket: Deactivated successfully.
Feb 13 19:42:04.828622 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket.
Feb 13 19:42:04.870783 systemd[1]: iscsiuio.socket: Deactivated successfully.
Feb 13 19:42:04.871973 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Feb 13 19:42:04.874725 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
Feb 13 19:42:04.876296 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion.
Feb 13 19:42:04.879665 systemd[1]: ignition-files.service: Deactivated successfully.
Feb 13 19:42:04.880961 systemd[1]: Stopped ignition-files.service - Ignition (files).
Feb 13 19:42:04.942642 systemd[1]: Stopping ignition-mount.service - Ignition (mount)...
Feb 13 19:42:04.955012 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Feb 13 19:42:04.956251 ignition[1018]: INFO : Ignition 2.20.0
Feb 13 19:42:04.956251 ignition[1018]: INFO : Stage: umount
Feb 13 19:42:04.956251 ignition[1018]: INFO : no configs at "/usr/lib/ignition/base.d"
Feb 13 19:42:04.956251 ignition[1018]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Feb 13 19:42:04.956251 ignition[1018]: INFO : umount: umount passed
Feb 13 19:42:04.956251 ignition[1018]: INFO : Ignition finished successfully
Feb 13 19:42:04.956253 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes.
Feb 13 19:42:04.965116 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot...
Feb 13 19:42:04.967245 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Feb 13 19:42:04.968642 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices.
Feb 13 19:42:04.971844 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Feb 13 19:42:04.973216 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook.
Feb 13 19:42:04.978751 systemd[1]: ignition-mount.service: Deactivated successfully.
Feb 13 19:42:04.980016 systemd[1]: Stopped ignition-mount.service - Ignition (mount).
Feb 13 19:42:04.985077 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Feb 13 19:42:04.986479 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons.
Feb 13 19:42:04.990613 systemd[1]: Stopped target network.target - Network.
Feb 13 19:42:04.992925 systemd[1]: ignition-disks.service: Deactivated successfully.
Feb 13 19:42:04.994289 systemd[1]: Stopped ignition-disks.service - Ignition (disks).
Feb 13 19:42:04.995810 systemd[1]: ignition-kargs.service: Deactivated successfully.
Feb 13 19:42:04.995891 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs).
Feb 13 19:42:05.000165 systemd[1]: ignition-setup.service: Deactivated successfully.
Feb 13 19:42:05.001479 systemd[1]: Stopped ignition-setup.service - Ignition (setup).
Feb 13 19:42:05.004986 systemd[1]: ignition-setup-pre.service: Deactivated successfully.
Feb 13 19:42:05.005070 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup.
Feb 13 19:42:05.009114 systemd[1]: Stopping systemd-networkd.service - Network Configuration...
Feb 13 19:42:05.011690 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution...
Feb 13 19:42:05.015914 systemd[1]: sysroot-boot.mount: Deactivated successfully.
Feb 13 19:42:05.019253 systemd[1]: systemd-resolved.service: Deactivated successfully.
Feb 13 19:42:05.020728 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution.
Feb 13 19:42:05.023573 systemd-networkd[788]: eth0: DHCPv6 lease lost
Feb 13 19:42:05.025629 systemd[1]: systemd-networkd.service: Deactivated successfully.
Feb 13 19:42:05.026868 systemd[1]: Stopped systemd-networkd.service - Network Configuration.
Feb 13 19:42:05.030439 systemd[1]: systemd-networkd.socket: Deactivated successfully.
Feb 13 19:42:05.030609 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket.
Feb 13 19:42:05.044729 systemd[1]: Stopping network-cleanup.service - Network Cleanup...
Feb 13 19:42:05.046748 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
Feb 13 19:42:05.046842 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Feb 13 19:42:05.048880 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Feb 13 19:42:05.048949 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Feb 13 19:42:05.049192 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Feb 13 19:42:05.049237 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules.
Feb 13 19:42:05.049966 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
Feb 13 19:42:05.050037 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories.
Feb 13 19:42:05.050545 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files...
Feb 13 19:42:05.074939 systemd[1]: network-cleanup.service: Deactivated successfully.
Feb 13 19:42:05.076055 systemd[1]: Stopped network-cleanup.service - Network Cleanup.
Feb 13 19:42:05.087627 systemd[1]: systemd-udevd.service: Deactivated successfully.
Feb 13 19:42:05.107002 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files.
Feb 13 19:42:05.110565 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Feb 13 19:42:05.111703 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket.
Feb 13 19:42:05.114115 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Feb 13 19:42:05.114167 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket.
Feb 13 19:42:05.117235 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Feb 13 19:42:05.117320 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook.
Feb 13 19:42:05.135677 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Feb 13 19:42:05.135753 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook.
Feb 13 19:42:05.139156 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Feb 13 19:42:05.140350 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Feb 13 19:42:05.189837 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database...
Feb 13 19:42:05.192466 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Feb 13 19:42:05.193686 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Feb 13 19:42:05.196528 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Feb 13 19:42:05.197673 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Feb 13 19:42:05.213298 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Feb 13 19:42:05.214552 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database.
Feb 13 19:42:05.307402 systemd[1]: sysroot-boot.service: Deactivated successfully.
Feb 13 19:42:05.307591 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot.
Feb 13 19:42:05.315556 systemd[1]: Reached target initrd-switch-root.target - Switch Root.
Feb 13 19:42:05.317798 systemd[1]: initrd-setup-root.service: Deactivated successfully.
Feb 13 19:42:05.317864 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup.
Feb 13 19:42:05.331603 systemd[1]: Starting initrd-switch-root.service - Switch Root...
Feb 13 19:42:05.345289 systemd[1]: Switching root.
Feb 13 19:42:05.378138 systemd-journald[195]: Journal stopped
Feb 13 19:42:06.746824 systemd-journald[195]: Received SIGTERM from PID 1 (systemd).
Feb 13 19:42:06.746898 kernel: SELinux: policy capability network_peer_controls=1 Feb 13 19:42:06.746913 kernel: SELinux: policy capability open_perms=1 Feb 13 19:42:06.746930 kernel: SELinux: policy capability extended_socket_class=1 Feb 13 19:42:06.746942 kernel: SELinux: policy capability always_check_network=0 Feb 13 19:42:06.746953 kernel: SELinux: policy capability cgroup_seclabel=1 Feb 13 19:42:06.746965 kernel: SELinux: policy capability nnp_nosuid_transition=1 Feb 13 19:42:06.746976 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Feb 13 19:42:06.746990 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Feb 13 19:42:06.747001 kernel: audit: type=1403 audit(1739475725.928:2): auid=4294967295 ses=4294967295 lsm=selinux res=1 Feb 13 19:42:06.747016 systemd[1]: Successfully loaded SELinux policy in 47.414ms. Feb 13 19:42:06.747042 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 15.859ms. Feb 13 19:42:06.747055 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Feb 13 19:42:06.747068 systemd[1]: Detected virtualization kvm. Feb 13 19:42:06.747080 systemd[1]: Detected architecture x86-64. Feb 13 19:42:06.747092 systemd[1]: Detected first boot. Feb 13 19:42:06.747109 systemd[1]: Initializing machine ID from VM UUID. Feb 13 19:42:06.747121 zram_generator::config[1062]: No configuration found. Feb 13 19:42:06.747137 systemd[1]: Populated /etc with preset unit settings. Feb 13 19:42:06.747149 systemd[1]: initrd-switch-root.service: Deactivated successfully. Feb 13 19:42:06.747161 systemd[1]: Stopped initrd-switch-root.service - Switch Root. 
Feb 13 19:42:06.747174 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. Feb 13 19:42:06.747187 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. Feb 13 19:42:06.747199 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. Feb 13 19:42:06.747212 systemd[1]: Created slice system-getty.slice - Slice /system/getty. Feb 13 19:42:06.747224 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. Feb 13 19:42:06.747236 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. Feb 13 19:42:06.747252 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. Feb 13 19:42:06.747284 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. Feb 13 19:42:06.747301 systemd[1]: Created slice user.slice - User and Session Slice. Feb 13 19:42:06.747314 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Feb 13 19:42:06.747326 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Feb 13 19:42:06.747339 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. Feb 13 19:42:06.747351 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. Feb 13 19:42:06.747364 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. Feb 13 19:42:06.747376 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Feb 13 19:42:06.747392 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0... Feb 13 19:42:06.747404 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Feb 13 19:42:06.747417 systemd[1]: Stopped target initrd-switch-root.target - Switch Root. 
Feb 13 19:42:06.747429 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems. Feb 13 19:42:06.747442 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System. Feb 13 19:42:06.747467 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. Feb 13 19:42:06.747480 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Feb 13 19:42:06.747495 systemd[1]: Reached target remote-fs.target - Remote File Systems. Feb 13 19:42:06.747507 systemd[1]: Reached target slices.target - Slice Units. Feb 13 19:42:06.747520 systemd[1]: Reached target swap.target - Swaps. Feb 13 19:42:06.747532 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. Feb 13 19:42:06.747544 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. Feb 13 19:42:06.747558 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Feb 13 19:42:06.747570 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Feb 13 19:42:06.747584 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Feb 13 19:42:06.747596 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. Feb 13 19:42:06.747608 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... Feb 13 19:42:06.747623 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... Feb 13 19:42:06.747635 systemd[1]: Mounting media.mount - External Media Directory... Feb 13 19:42:06.747647 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Feb 13 19:42:06.747660 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... Feb 13 19:42:06.747672 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... Feb 13 19:42:06.747684 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... 
Feb 13 19:42:06.747700 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Feb 13 19:42:06.747717 systemd[1]: Reached target machines.target - Containers. Feb 13 19:42:06.747738 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... Feb 13 19:42:06.747755 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Feb 13 19:42:06.747771 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Feb 13 19:42:06.747787 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... Feb 13 19:42:06.747803 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Feb 13 19:42:06.747819 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Feb 13 19:42:06.747835 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Feb 13 19:42:06.747850 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... Feb 13 19:42:06.747866 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Feb 13 19:42:06.747887 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Feb 13 19:42:06.747905 systemd[1]: systemd-fsck-root.service: Deactivated successfully. Feb 13 19:42:06.747920 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device. Feb 13 19:42:06.747935 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. Feb 13 19:42:06.747950 systemd[1]: Stopped systemd-fsck-usr.service. Feb 13 19:42:06.747966 systemd[1]: Starting systemd-journald.service - Journal Service... Feb 13 19:42:06.747982 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... 
Feb 13 19:42:06.747998 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Feb 13 19:42:06.748016 kernel: loop: module loaded Feb 13 19:42:06.748031 kernel: fuse: init (API version 7.39) Feb 13 19:42:06.748047 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... Feb 13 19:42:06.748063 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Feb 13 19:42:06.748082 systemd[1]: verity-setup.service: Deactivated successfully. Feb 13 19:42:06.748101 systemd[1]: Stopped verity-setup.service. Feb 13 19:42:06.748135 systemd-journald[1125]: Collecting audit messages is disabled. Feb 13 19:42:06.748158 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Feb 13 19:42:06.748174 systemd-journald[1125]: Journal started Feb 13 19:42:06.748195 systemd-journald[1125]: Runtime Journal (/run/log/journal/6cb9f4c948114e18b15afc0047b085ef) is 6.0M, max 48.3M, 42.2M free. Feb 13 19:42:06.503647 systemd[1]: Queued start job for default target multi-user.target. Feb 13 19:42:06.526693 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6. Feb 13 19:42:06.527244 systemd[1]: systemd-journald.service: Deactivated successfully. Feb 13 19:42:06.752548 systemd[1]: Started systemd-journald.service - Journal Service. Feb 13 19:42:06.758939 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. Feb 13 19:42:06.760472 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. Feb 13 19:42:06.761876 systemd[1]: Mounted media.mount - External Media Directory. Feb 13 19:42:06.763130 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. Feb 13 19:42:06.764806 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. Feb 13 19:42:06.766308 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. 
Feb 13 19:42:06.768023 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Feb 13 19:42:06.770465 systemd[1]: modprobe@configfs.service: Deactivated successfully. Feb 13 19:42:06.770730 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. Feb 13 19:42:06.773635 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Feb 13 19:42:06.773922 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Feb 13 19:42:06.774484 kernel: ACPI: bus type drm_connector registered Feb 13 19:42:06.776727 systemd[1]: modprobe@drm.service: Deactivated successfully. Feb 13 19:42:06.776989 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Feb 13 19:42:06.778842 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Feb 13 19:42:06.779084 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Feb 13 19:42:06.781344 systemd[1]: modprobe@fuse.service: Deactivated successfully. Feb 13 19:42:06.781596 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. Feb 13 19:42:06.783496 systemd[1]: modprobe@loop.service: Deactivated successfully. Feb 13 19:42:06.783739 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Feb 13 19:42:06.785547 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Feb 13 19:42:06.793139 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Feb 13 19:42:06.795389 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. Feb 13 19:42:06.813202 systemd[1]: Reached target network-pre.target - Preparation for Network. Feb 13 19:42:06.835559 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... Feb 13 19:42:06.838098 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... 
Feb 13 19:42:06.839507 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Feb 13 19:42:06.839542 systemd[1]: Reached target local-fs.target - Local File Systems. Feb 13 19:42:06.841564 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink). Feb 13 19:42:06.843952 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... Feb 13 19:42:06.847503 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... Feb 13 19:42:06.848839 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Feb 13 19:42:06.851599 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... Feb 13 19:42:06.852952 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... Feb 13 19:42:06.854219 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Feb 13 19:42:06.856682 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... Feb 13 19:42:06.859521 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Feb 13 19:42:06.860820 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Feb 13 19:42:06.863860 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... Feb 13 19:42:06.868049 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Feb 13 19:42:06.869784 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. Feb 13 19:42:06.871824 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. 
Feb 13 19:42:06.873641 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Feb 13 19:42:06.908828 systemd-journald[1125]: Time spent on flushing to /var/log/journal/6cb9f4c948114e18b15afc0047b085ef is 17.498ms for 1035 entries. Feb 13 19:42:06.908828 systemd-journald[1125]: System Journal (/var/log/journal/6cb9f4c948114e18b15afc0047b085ef) is 8.0M, max 195.6M, 187.6M free. Feb 13 19:42:07.410579 systemd-journald[1125]: Received client request to flush runtime journal. Feb 13 19:42:07.410666 kernel: loop0: detected capacity change from 0 to 140992 Feb 13 19:42:07.410704 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Feb 13 19:42:07.410730 kernel: loop1: detected capacity change from 0 to 218376 Feb 13 19:42:07.410757 kernel: loop2: detected capacity change from 0 to 138184 Feb 13 19:42:06.918227 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization... Feb 13 19:42:06.988478 udevadm[1174]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation.service, lvm2-activation-early.service not to pull it in. Feb 13 19:42:07.059311 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Feb 13 19:42:07.161990 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. Feb 13 19:42:07.163822 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. Feb 13 19:42:07.176773 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk... Feb 13 19:42:07.353132 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. Feb 13 19:42:07.388824 systemd[1]: Starting systemd-sysusers.service - Create System Users... Feb 13 19:42:07.412490 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. 
Feb 13 19:42:07.467496 kernel: loop3: detected capacity change from 0 to 140992
Feb 13 19:42:07.507294 systemd[1]: etc-machine\x2did.mount: Deactivated successfully.
Feb 13 19:42:07.508288 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk.
Feb 13 19:42:07.511222 systemd[1]: Finished systemd-sysusers.service - Create System Users.
Feb 13 19:42:07.517493 kernel: loop4: detected capacity change from 0 to 218376
Feb 13 19:42:07.526737 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Feb 13 19:42:07.529470 kernel: loop5: detected capacity change from 0 to 138184
Feb 13 19:42:07.542389 (sd-merge)[1195]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes'.
Feb 13 19:42:07.543131 (sd-merge)[1195]: Merged extensions into '/usr'.
Feb 13 19:42:07.548040 systemd[1]: Reloading requested from client PID 1161 ('systemd-sysext') (unit systemd-sysext.service)...
Feb 13 19:42:07.548056 systemd[1]: Reloading...
Feb 13 19:42:07.560426 systemd-tmpfiles[1198]: ACLs are not supported, ignoring.
Feb 13 19:42:07.560461 systemd-tmpfiles[1198]: ACLs are not supported, ignoring.
Feb 13 19:42:07.617499 zram_generator::config[1225]: No configuration found.
Feb 13 19:42:07.685196 ldconfig[1156]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start.
Feb 13 19:42:07.749816 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Feb 13 19:42:07.808994 systemd[1]: Reloading finished in 260 ms.
Feb 13 19:42:07.841961 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache.
Feb 13 19:42:07.858664 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/.
Feb 13 19:42:07.860441 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Feb 13 19:42:07.878875 systemd[1]: Starting ensure-sysext.service...
Feb 13 19:42:07.881303 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Feb 13 19:42:07.891836 systemd[1]: Reloading requested from client PID 1264 ('systemctl') (unit ensure-sysext.service)...
Feb 13 19:42:07.891951 systemd[1]: Reloading...
Feb 13 19:42:07.910510 systemd-tmpfiles[1265]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring.
Feb 13 19:42:07.911272 systemd-tmpfiles[1265]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring.
Feb 13 19:42:07.912257 systemd-tmpfiles[1265]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring.
Feb 13 19:42:07.912653 systemd-tmpfiles[1265]: ACLs are not supported, ignoring.
Feb 13 19:42:07.912734 systemd-tmpfiles[1265]: ACLs are not supported, ignoring.
Feb 13 19:42:07.916515 systemd-tmpfiles[1265]: Detected autofs mount point /boot during canonicalization of boot.
Feb 13 19:42:07.916634 systemd-tmpfiles[1265]: Skipping /boot
Feb 13 19:42:07.930713 systemd-tmpfiles[1265]: Detected autofs mount point /boot during canonicalization of boot.
Feb 13 19:42:07.931331 systemd-tmpfiles[1265]: Skipping /boot
Feb 13 19:42:07.974494 zram_generator::config[1295]: No configuration found.
Feb 13 19:42:08.167929 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Feb 13 19:42:08.221821 systemd[1]: Reloading finished in 329 ms.
Feb 13 19:42:08.243851 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database.
Feb 13 19:42:08.257960 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Feb 13 19:42:08.265706 systemd[1]: Starting audit-rules.service - Load Audit Rules...
Feb 13 19:42:08.268536 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs...
Feb 13 19:42:08.271766 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog...
Feb 13 19:42:08.277640 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Feb 13 19:42:08.281602 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Feb 13 19:42:08.288645 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP...
Feb 13 19:42:08.295011 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Feb 13 19:42:08.295307 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Feb 13 19:42:08.299756 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Feb 13 19:42:08.305793 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Feb 13 19:42:08.311411 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Feb 13 19:42:08.312816 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Feb 13 19:42:08.323984 systemd[1]: Starting systemd-userdbd.service - User Database Manager...
Feb 13 19:42:08.327022 systemd-udevd[1336]: Using default interface naming scheme 'v255'.
Feb 13 19:42:08.327498 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Feb 13 19:42:08.329094 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog.
Feb 13 19:42:08.331475 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Feb 13 19:42:08.331710 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Feb 13 19:42:08.334441 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Feb 13 19:42:08.334885 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Feb 13 19:42:08.337165 systemd[1]: modprobe@loop.service: Deactivated successfully.
Feb 13 19:42:08.337437 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Feb 13 19:42:08.353027 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Feb 13 19:42:08.353354 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Feb 13 19:42:08.362864 augenrules[1365]: No rules
Feb 13 19:42:08.363760 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Feb 13 19:42:08.366559 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Feb 13 19:42:08.369306 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Feb 13 19:42:08.370786 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Feb 13 19:42:08.372645 systemd[1]: Starting systemd-update-done.service - Update is Completed...
Feb 13 19:42:08.375502 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Feb 13 19:42:08.376430 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Feb 13 19:42:08.378729 systemd[1]: audit-rules.service: Deactivated successfully.
Feb 13 19:42:08.379014 systemd[1]: Finished audit-rules.service - Load Audit Rules.
Feb 13 19:42:08.381032 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP.
Feb 13 19:42:08.383142 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Feb 13 19:42:08.383383 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Feb 13 19:42:08.385351 systemd[1]: Started systemd-userdbd.service - User Database Manager.
Feb 13 19:42:08.388416 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs.
Feb 13 19:42:08.399063 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Feb 13 19:42:08.399362 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Feb 13 19:42:08.401851 systemd[1]: modprobe@loop.service: Deactivated successfully.
Feb 13 19:42:08.402104 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Feb 13 19:42:08.404715 systemd[1]: Finished systemd-update-done.service - Update is Completed.
Feb 13 19:42:08.426482 systemd[1]: Finished ensure-sysext.service.
Feb 13 19:42:08.450775 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped.
Feb 13 19:42:08.451930 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Feb 13 19:42:08.460541 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 38 scanned by (udev-worker) (1381)
Feb 13 19:42:08.461718 systemd[1]: Starting audit-rules.service - Load Audit Rules...
Feb 13 19:42:08.463836 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Feb 13 19:42:08.465819 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Feb 13 19:42:08.470030 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Feb 13 19:42:08.472553 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Feb 13 19:42:08.478990 systemd-resolved[1334]: Positive Trust Anchors:
Feb 13 19:42:08.479012 systemd-resolved[1334]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Feb 13 19:42:08.479045 systemd-resolved[1334]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Feb 13 19:42:08.483689 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Feb 13 19:42:08.485283 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Feb 13 19:42:08.489530 systemd-resolved[1334]: Defaulting to hostname 'linux'.
Feb 13 19:42:08.491834 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Feb 13 19:42:08.496771 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization...
Feb 13 19:42:08.498186 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
Feb 13 19:42:08.498241 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Feb 13 19:42:08.498889 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Feb 13 19:42:08.500783 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Feb 13 19:42:08.501081 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Feb 13 19:42:08.503057 systemd[1]: modprobe@drm.service: Deactivated successfully.
Feb 13 19:42:08.503305 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Feb 13 19:42:08.505431 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Feb 13 19:42:08.505715 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Feb 13 19:42:08.507704 systemd[1]: modprobe@loop.service: Deactivated successfully.
Feb 13 19:42:08.507958 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Feb 13 19:42:08.518074 augenrules[1405]: /sbin/augenrules: No change
Feb 13 19:42:08.530895 augenrules[1438]: No rules
Feb 13 19:42:08.533643 systemd[1]: audit-rules.service: Deactivated successfully.
Feb 13 19:42:08.534069 systemd[1]: Finished audit-rules.service - Load Audit Rules.
Feb 13 19:42:08.536694 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Feb 13 19:42:08.538120 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Feb 13 19:42:08.538197 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Feb 13 19:42:08.543810 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
Feb 13 19:42:08.552833 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM...
Feb 13 19:42:08.565527 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input2
Feb 13 19:42:08.571864 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM.
Feb 13 19:42:08.581101 kernel: ACPI: button: Power Button [PWRF]
Feb 13 19:42:08.612072 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input3
Feb 13 19:42:08.612139 kernel: i801_smbus 0000:00:1f.3: Enabling SMBus device
Feb 13 19:42:08.612411 kernel: i801_smbus 0000:00:1f.3: SMBus using PCI interrupt
Feb 13 19:42:08.612635 kernel: i2c i2c-0: 1/1 memory slots populated (from DMI)
Feb 13 19:42:08.612888 kernel: i2c i2c-0: Memory type 0x07 not supported yet, not instantiating SPD
Feb 13 19:42:08.602429 systemd-networkd[1417]: lo: Link UP
Feb 13 19:42:08.602440 systemd-networkd[1417]: lo: Gained carrier
Feb 13 19:42:08.604423 systemd-networkd[1417]: Enumeration completed
Feb 13 19:42:08.604598 systemd[1]: Started systemd-networkd.service - Network Configuration.
Feb 13 19:42:08.605138 systemd-networkd[1417]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Feb 13 19:42:08.605144 systemd-networkd[1417]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Feb 13 19:42:08.606431 systemd-networkd[1417]: eth0: Link UP
Feb 13 19:42:08.606436 systemd-networkd[1417]: eth0: Gained carrier
Feb 13 19:42:08.606486 systemd-networkd[1417]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Feb 13 19:42:08.609780 systemd[1]: Reached target network.target - Network.
Feb 13 19:42:08.619530 systemd-networkd[1417]: eth0: DHCPv4 address 10.0.0.27/16, gateway 10.0.0.1 acquired from 10.0.0.1
Feb 13 19:42:08.619723 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured...
Feb 13 19:42:08.644259 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization.
Feb 13 19:42:08.646728 systemd[1]: Reached target time-set.target - System Time Set.
Feb 13 19:42:09.280918 systemd-timesyncd[1420]: Contacted time server 10.0.0.1:123 (10.0.0.1).
Feb 13 19:42:09.280987 systemd-timesyncd[1420]: Initial clock synchronization to Thu 2025-02-13 19:42:09.280768 UTC.
Feb 13 19:42:09.281051 systemd-resolved[1334]: Clock change detected. Flushing caches.
Feb 13 19:42:09.319290 kernel: mousedev: PS/2 mouse device common for all mice
Feb 13 19:42:09.333008 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Feb 13 19:42:09.340355 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Feb 13 19:42:09.341520 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Feb 13 19:42:09.345452 kernel: kvm_amd: TSC scaling supported
Feb 13 19:42:09.345490 kernel: kvm_amd: Nested Virtualization enabled
Feb 13 19:42:09.345530 kernel: kvm_amd: Nested Paging enabled
Feb 13 19:42:09.345550 kernel: kvm_amd: LBR virtualization supported
Feb 13 19:42:09.346529 kernel: kvm_amd: Virtual VMLOAD VMSAVE supported
Feb 13 19:42:09.346575 kernel: kvm_amd: Virtual GIF supported
Feb 13 19:42:09.359737 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Feb 13 19:42:09.372691 kernel: EDAC MC: Ver: 3.0.0
Feb 13 19:42:09.414290 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Feb 13 19:42:09.424502 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization.
Feb 13 19:42:09.437481 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes...
Feb 13 19:42:09.447580 lvm[1467]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Feb 13 19:42:09.480955 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes.
Feb 13 19:42:09.482810 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Feb 13 19:42:09.484130 systemd[1]: Reached target sysinit.target - System Initialization.
Feb 13 19:42:09.485499 systemd[1]: Started motdgen.path - Watch for update engine configuration changes.
Feb 13 19:42:09.501770 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data.
Feb 13 19:42:09.503622 systemd[1]: Started logrotate.timer - Daily rotation of log files.
Feb 13 19:42:09.504930 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information..
Feb 13 19:42:09.506314 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories.
Feb 13 19:42:09.507641 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate).
Feb 13 19:42:09.507680 systemd[1]: Reached target paths.target - Path Units.
Feb 13 19:42:09.508716 systemd[1]: Reached target timers.target - Timer Units.
Feb 13 19:42:09.510872 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket.
Feb 13 19:42:09.514371 systemd[1]: Starting docker.socket - Docker Socket for the API...
Feb 13 19:42:09.524965 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket.
Feb 13 19:42:09.528264 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes...
Feb 13 19:42:09.530262 systemd[1]: Listening on docker.socket - Docker Socket for the API.
Feb 13 19:42:09.531594 systemd[1]: Reached target sockets.target - Socket Units.
Feb 13 19:42:09.532772 systemd[1]: Reached target basic.target - Basic System.
Feb 13 19:42:09.533610 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met.
Feb 13 19:42:09.533659 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met.
Feb 13 19:42:09.535401 systemd[1]: Starting containerd.service - containerd container runtime...
Feb 13 19:42:09.538474 systemd[1]: Starting dbus.service - D-Bus System Message Bus...
Feb 13 19:42:09.542262 lvm[1472]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Feb 13 19:42:09.542774 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit...
Feb 13 19:42:09.547345 systemd[1]: Starting extend-filesystems.service - Extend Filesystems...
Feb 13 19:42:09.548738 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment).
Feb 13 19:42:09.550801 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd...
Feb 13 19:42:09.555976 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline...
Feb 13 19:42:09.561520 jq[1475]: false
Feb 13 19:42:09.562209 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys...
Feb 13 19:42:09.573716 extend-filesystems[1476]: Found loop3
Feb 13 19:42:09.573716 extend-filesystems[1476]: Found loop4
Feb 13 19:42:09.573716 extend-filesystems[1476]: Found loop5
Feb 13 19:42:09.573716 extend-filesystems[1476]: Found sr0
Feb 13 19:42:09.573716 extend-filesystems[1476]: Found vda
Feb 13 19:42:09.573716 extend-filesystems[1476]: Found vda1
Feb 13 19:42:09.573716 extend-filesystems[1476]: Found vda2
Feb 13 19:42:09.573716 extend-filesystems[1476]: Found vda3
Feb 13 19:42:09.573716 extend-filesystems[1476]: Found usr
Feb 13 19:42:09.595476 extend-filesystems[1476]: Found vda4
Feb 13 19:42:09.595476 extend-filesystems[1476]: Found vda6
Feb 13 19:42:09.595476 extend-filesystems[1476]: Found vda7
Feb 13 19:42:09.595476 extend-filesystems[1476]: Found vda9
Feb 13 19:42:09.595476 extend-filesystems[1476]: Checking size of /dev/vda9
Feb 13 19:42:09.607597 kernel: EXT4-fs (vda9): resizing filesystem from 553472 to 1864699 blocks
Feb 13 19:42:09.578779 systemd[1]: Starting systemd-logind.service - User Login Management...
Feb 13 19:42:09.588936 dbus-daemon[1474]: [system] SELinux support is enabled
Feb 13 19:42:09.612564 extend-filesystems[1476]: Resized partition /dev/vda9
Feb 13 19:42:09.580877 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0).
Feb 13 19:42:09.614083 extend-filesystems[1494]: resize2fs 1.47.1 (20-May-2024)
Feb 13 19:42:09.621398 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 38 scanned by (udev-worker) (1403)
Feb 13 19:42:09.581620 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details.
Feb 13 19:42:09.583931 systemd[1]: Starting update-engine.service - Update Engine...
Feb 13 19:42:09.621751 update_engine[1489]: I20250213 19:42:09.616581 1489 main.cc:92] Flatcar Update Engine starting
Feb 13 19:42:09.591524 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition...
Feb 13 19:42:09.593440 systemd[1]: Started dbus.service - D-Bus System Message Bus.
Feb 13 19:42:09.599972 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes.
Feb 13 19:42:09.612953 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'.
Feb 13 19:42:09.613302 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped.
Feb 13 19:42:09.613801 systemd[1]: motdgen.service: Deactivated successfully.
Feb 13 19:42:09.614096 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd.
Feb 13 19:42:09.617860 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully.
Feb 13 19:42:09.618164 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline.
Feb 13 19:42:09.624494 jq[1491]: true
Feb 13 19:42:09.629893 update_engine[1489]: I20250213 19:42:09.629828 1489 update_check_scheduler.cc:74] Next update check in 7m19s
Feb 13 19:42:09.643259 kernel: EXT4-fs (vda9): resized filesystem to 1864699
Feb 13 19:42:09.647135 (ntainerd)[1501]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR
Feb 13 19:42:09.654966 jq[1499]: true
Feb 13 19:42:09.663047 extend-filesystems[1494]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required
Feb 13 19:42:09.663047 extend-filesystems[1494]: old_desc_blocks = 1, new_desc_blocks = 1
Feb 13 19:42:09.663047 extend-filesystems[1494]: The filesystem on /dev/vda9 is now 1864699 (4k) blocks long.
Feb 13 19:42:09.668097 systemd[1]: extend-filesystems.service: Deactivated successfully.
Feb 13 19:42:09.671961 extend-filesystems[1476]: Resized filesystem in /dev/vda9
Feb 13 19:42:09.669691 systemd[1]: Finished extend-filesystems.service - Extend Filesystems.
Feb 13 19:42:09.681567 systemd[1]: Started update-engine.service - Update Engine.
Feb 13 19:42:09.689264 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml).
Feb 13 19:42:09.689302 systemd[1]: Reached target system-config.target - Load system-provided cloud configs.
Feb 13 19:42:09.690758 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url).
Feb 13 19:42:09.690782 systemd[1]: Reached target user-config.target - Load user-provided cloud configs.
Feb 13 19:42:09.693909 systemd-logind[1488]: Watching system buttons on /dev/input/event1 (Power Button)
Feb 13 19:42:09.693944 systemd-logind[1488]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard)
Feb 13 19:42:09.696962 systemd-logind[1488]: New seat seat0.
Feb 13 19:42:09.701545 systemd[1]: Started locksmithd.service - Cluster reboot manager.
Feb 13 19:42:09.702849 systemd[1]: Started systemd-logind.service - User Login Management.
Feb 13 19:42:09.735827 bash[1526]: Updated "/home/core/.ssh/authorized_keys"
Feb 13 19:42:09.738392 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition.
Feb 13 19:42:09.741754 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met.
Feb 13 19:42:09.744862 locksmithd[1519]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot"
Feb 13 19:42:09.793806 sshd_keygen[1497]: ssh-keygen: generating new host keys: RSA ECDSA ED25519
Feb 13 19:42:09.820072 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys.
Feb 13 19:42:09.829705 systemd[1]: Starting issuegen.service - Generate /run/issue...
Feb 13 19:42:09.842933 systemd[1]: issuegen.service: Deactivated successfully.
Feb 13 19:42:09.843250 systemd[1]: Finished issuegen.service - Generate /run/issue.
Feb 13 19:42:09.855923 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions...
Feb 13 19:42:09.869576 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions.
Feb 13 19:42:09.873060 containerd[1501]: time="2025-02-13T19:42:09.871274055Z" level=info msg="starting containerd" revision=9b2ad7760328148397346d10c7b2004271249db4 version=v1.7.23
Feb 13 19:42:09.884741 systemd[1]: Started getty@tty1.service - Getty on tty1.
Feb 13 19:42:09.887445 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0.
Feb 13 19:42:09.888909 systemd[1]: Reached target getty.target - Login Prompts.
Feb 13 19:42:09.896850 containerd[1501]: time="2025-02-13T19:42:09.896769178Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
Feb 13 19:42:09.899101 containerd[1501]: time="2025-02-13T19:42:09.899031310Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.74-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1
Feb 13 19:42:09.899101 containerd[1501]: time="2025-02-13T19:42:09.899084790Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
Feb 13 19:42:09.899164 containerd[1501]: time="2025-02-13T19:42:09.899104898Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
Feb 13 19:42:09.899414 containerd[1501]: time="2025-02-13T19:42:09.899380535Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
Feb 13 19:42:09.899484 containerd[1501]: time="2025-02-13T19:42:09.899421341Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
Feb 13 19:42:09.899521 containerd[1501]: time="2025-02-13T19:42:09.899502954Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
Feb 13 19:42:09.899543 containerd[1501]: time="2025-02-13T19:42:09.899519616Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
Feb 13 19:42:09.899798 containerd[1501]: time="2025-02-13T19:42:09.899762321Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
Feb 13 19:42:09.899798 containerd[1501]: time="2025-02-13T19:42:09.899789031Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
Feb 13 19:42:09.899873 containerd[1501]: time="2025-02-13T19:42:09.899808607Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1
Feb 13 19:42:09.899873 containerd[1501]: time="2025-02-13T19:42:09.899822113Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
Feb 13 19:42:09.899986 containerd[1501]: time="2025-02-13T19:42:09.899944973Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
Feb 13 19:42:09.900268 containerd[1501]: time="2025-02-13T19:42:09.900212575Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
Feb 13 19:42:09.900404 containerd[1501]: time="2025-02-13T19:42:09.900373316Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
Feb 13 19:42:09.900404 containerd[1501]: time="2025-02-13T19:42:09.900394416Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
Feb 13 19:42:09.900565 containerd[1501]: time="2025-02-13T19:42:09.900519831Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
Feb 13 19:42:09.900614 containerd[1501]: time="2025-02-13T19:42:09.900594150Z" level=info msg="metadata content store policy set" policy=shared
Feb 13 19:42:09.908118 containerd[1501]: time="2025-02-13T19:42:09.908033148Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
Feb 13 19:42:09.908118 containerd[1501]: time="2025-02-13T19:42:09.908142073Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
Feb 13 19:42:09.908385 containerd[1501]: time="2025-02-13T19:42:09.908166979Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
Feb 13 19:42:09.908385 containerd[1501]: time="2025-02-13T19:42:09.908189672Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
Feb 13 19:42:09.908385 containerd[1501]: time="2025-02-13T19:42:09.908208577Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
Feb 13 19:42:09.908610 containerd[1501]: time="2025-02-13T19:42:09.908583861Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
Feb 13 19:42:09.911104 containerd[1501]: time="2025-02-13T19:42:09.911040637Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
Feb 13 19:42:09.911288 containerd[1501]: time="2025-02-13T19:42:09.911231505Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
Feb 13 19:42:09.911288 containerd[1501]: time="2025-02-13T19:42:09.911270488Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
Feb 13 19:42:09.911334 containerd[1501]: time="2025-02-13T19:42:09.911289724Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
Feb 13 19:42:09.911334 containerd[1501]: time="2025-02-13T19:42:09.911309542Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
Feb 13 19:42:09.911334 containerd[1501]: time="2025-02-13T19:42:09.911325862Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
Feb 13 19:42:09.911440 containerd[1501]: time="2025-02-13T19:42:09.911341612Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
Feb 13 19:42:09.911440 containerd[1501]: time="2025-02-13T19:42:09.911359906Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
Feb 13 19:42:09.911440 containerd[1501]: time="2025-02-13T19:42:09.911379142Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
Feb 13 19:42:09.911440 containerd[1501]: time="2025-02-13T19:42:09.911404279Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
Feb 13 19:42:09.911440 containerd[1501]: time="2025-02-13T19:42:09.911419999Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
Feb 13 19:42:09.911440 containerd[1501]: time="2025-02-13T19:42:09.911434957Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
Feb 13 19:42:09.911569 containerd[1501]: time="2025-02-13T19:42:09.911461326Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
Feb 13 19:42:09.911569 containerd[1501]: time="2025-02-13T19:42:09.911480472Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
Feb 13 19:42:09.911569 containerd[1501]: time="2025-02-13T19:42:09.911496342Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
Feb 13 19:42:09.911569 containerd[1501]: time="2025-02-13T19:42:09.911512723Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
Feb 13 19:42:09.911569 containerd[1501]: time="2025-02-13T19:42:09.911528051Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
Feb 13 19:42:09.911569 containerd[1501]: time="2025-02-13T19:42:09.911544452Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
Feb 13 19:42:09.911569 containerd[1501]: time="2025-02-13T19:42:09.911559480Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
Feb 13 19:42:09.911783 containerd[1501]: time="2025-02-13T19:42:09.911575981Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
Feb 13 19:42:09.911783 containerd[1501]: time="2025-02-13T19:42:09.911594817Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
Feb 13 19:42:09.911783 containerd[1501]: time="2025-02-13T19:42:09.911625805Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
Feb 13 19:42:09.911783 containerd[1501]: time="2025-02-13T19:42:09.911649589Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
Feb 13 19:42:09.911783 containerd[1501]: time="2025-02-13T19:42:09.911668635Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
Feb 13 19:42:09.911783 containerd[1501]: time="2025-02-13T19:42:09.911685186Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..."
type=io.containerd.grpc.v1 Feb 13 19:42:09.911783 containerd[1501]: time="2025-02-13T19:42:09.911703691Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1 Feb 13 19:42:09.911783 containerd[1501]: time="2025-02-13T19:42:09.911728818Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1 Feb 13 19:42:09.911783 containerd[1501]: time="2025-02-13T19:42:09.911747633Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 Feb 13 19:42:09.911783 containerd[1501]: time="2025-02-13T19:42:09.911762361Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Feb 13 19:42:09.911999 containerd[1501]: time="2025-02-13T19:42:09.911828845Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 Feb 13 19:42:09.911999 containerd[1501]: time="2025-02-13T19:42:09.911872076Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1 Feb 13 19:42:09.911999 containerd[1501]: time="2025-02-13T19:42:09.911888758Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Feb 13 19:42:09.911999 containerd[1501]: time="2025-02-13T19:42:09.911905790Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1 Feb 13 19:42:09.911999 containerd[1501]: time="2025-02-13T19:42:09.911921148Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 Feb 13 19:42:09.911999 containerd[1501]: time="2025-02-13T19:42:09.911954381Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." 
type=io.containerd.nri.v1 Feb 13 19:42:09.911999 containerd[1501]: time="2025-02-13T19:42:09.911971353Z" level=info msg="NRI interface is disabled by configuration." Feb 13 19:42:09.911999 containerd[1501]: time="2025-02-13T19:42:09.911997401Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1 Feb 13 19:42:09.912428 containerd[1501]: time="2025-02-13T19:42:09.912358779Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true 
SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Feb 13 19:42:09.912808 containerd[1501]: time="2025-02-13T19:42:09.912427538Z" level=info msg="Connect containerd service" Feb 13 19:42:09.912808 containerd[1501]: time="2025-02-13T19:42:09.912467643Z" level=info msg="using legacy CRI server" Feb 13 19:42:09.912808 containerd[1501]: time="2025-02-13T19:42:09.912476770Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Feb 13 19:42:09.912808 containerd[1501]: time="2025-02-13T19:42:09.912603057Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Feb 13 19:42:09.913372 containerd[1501]: time="2025-02-13T19:42:09.913330501Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Feb 13 19:42:09.913734 containerd[1501]: time="2025-02-13T19:42:09.913659307Z" level=info msg="Start subscribing containerd event" Feb 13 
19:42:09.913771 containerd[1501]: time="2025-02-13T19:42:09.913739468Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Feb 13 19:42:09.913808 containerd[1501]: time="2025-02-13T19:42:09.913769614Z" level=info msg="Start recovering state" Feb 13 19:42:09.913830 containerd[1501]: time="2025-02-13T19:42:09.913804249Z" level=info msg=serving... address=/run/containerd/containerd.sock Feb 13 19:42:09.913934 containerd[1501]: time="2025-02-13T19:42:09.913897464Z" level=info msg="Start event monitor" Feb 13 19:42:09.914310 containerd[1501]: time="2025-02-13T19:42:09.914272257Z" level=info msg="Start snapshots syncer" Feb 13 19:42:09.914310 containerd[1501]: time="2025-02-13T19:42:09.914302123Z" level=info msg="Start cni network conf syncer for default" Feb 13 19:42:09.914354 containerd[1501]: time="2025-02-13T19:42:09.914312683Z" level=info msg="Start streaming server" Feb 13 19:42:09.914520 systemd[1]: Started containerd.service - containerd container runtime. Feb 13 19:42:09.914910 containerd[1501]: time="2025-02-13T19:42:09.914590113Z" level=info msg="containerd successfully booted in 0.045915s" Feb 13 19:42:10.709594 systemd-networkd[1417]: eth0: Gained IPv6LL Feb 13 19:42:10.714370 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Feb 13 19:42:10.716733 systemd[1]: Reached target network-online.target - Network is Online. Feb 13 19:42:10.731792 systemd[1]: Starting coreos-metadata.service - QEMU metadata agent... Feb 13 19:42:10.735706 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Feb 13 19:42:10.739064 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Feb 13 19:42:10.762531 systemd[1]: coreos-metadata.service: Deactivated successfully. Feb 13 19:42:10.762939 systemd[1]: Finished coreos-metadata.service - QEMU metadata agent. 
Feb 13 19:42:10.765088 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Feb 13 19:42:10.768701 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Feb 13 19:42:11.511173 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Feb 13 19:42:11.513008 systemd[1]: Reached target multi-user.target - Multi-User System. Feb 13 19:42:11.517386 (kubelet)[1579]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Feb 13 19:42:11.518385 systemd[1]: Startup finished in 1.491s (kernel) + 6.190s (initrd) + 5.009s (userspace) = 12.691s. Feb 13 19:42:11.993396 kubelet[1579]: E0213 19:42:11.993190 1579 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Feb 13 19:42:11.998320 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Feb 13 19:42:11.998549 systemd[1]: kubelet.service: Failed with result 'exit-code'. Feb 13 19:42:11.998959 systemd[1]: kubelet.service: Consumed 1.092s CPU time. Feb 13 19:42:14.302564 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Feb 13 19:42:14.304553 systemd[1]: Started sshd@0-10.0.0.27:22-10.0.0.1:45368.service - OpenSSH per-connection server daemon (10.0.0.1:45368). Feb 13 19:42:14.372521 sshd[1592]: Accepted publickey for core from 10.0.0.1 port 45368 ssh2: RSA SHA256:8WP2kqV5KzwZsuVRMXRFZkAHZWbkdD5kizbT2H+wOcw Feb 13 19:42:14.375003 sshd-session[1592]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 19:42:14.384906 systemd[1]: Created slice user-500.slice - User Slice of UID 500. 
Feb 13 19:42:14.395858 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Feb 13 19:42:14.398510 systemd-logind[1488]: New session 1 of user core. Feb 13 19:42:14.413645 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Feb 13 19:42:14.427952 systemd[1]: Starting user@500.service - User Manager for UID 500... Feb 13 19:42:14.431745 (systemd)[1596]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Feb 13 19:42:14.586937 systemd[1596]: Queued start job for default target default.target. Feb 13 19:42:14.597043 systemd[1596]: Created slice app.slice - User Application Slice. Feb 13 19:42:14.597075 systemd[1596]: Reached target paths.target - Paths. Feb 13 19:42:14.597091 systemd[1596]: Reached target timers.target - Timers. Feb 13 19:42:14.599193 systemd[1596]: Starting dbus.socket - D-Bus User Message Bus Socket... Feb 13 19:42:14.612892 systemd[1596]: Listening on dbus.socket - D-Bus User Message Bus Socket. Feb 13 19:42:14.613125 systemd[1596]: Reached target sockets.target - Sockets. Feb 13 19:42:14.613156 systemd[1596]: Reached target basic.target - Basic System. Feb 13 19:42:14.613221 systemd[1596]: Reached target default.target - Main User Target. Feb 13 19:42:14.613295 systemd[1596]: Startup finished in 172ms. Feb 13 19:42:14.613873 systemd[1]: Started user@500.service - User Manager for UID 500. Feb 13 19:42:14.615962 systemd[1]: Started session-1.scope - Session 1 of User core. Feb 13 19:42:14.683087 systemd[1]: Started sshd@1-10.0.0.27:22-10.0.0.1:45382.service - OpenSSH per-connection server daemon (10.0.0.1:45382). 
Feb 13 19:42:14.730367 sshd[1607]: Accepted publickey for core from 10.0.0.1 port 45382 ssh2: RSA SHA256:8WP2kqV5KzwZsuVRMXRFZkAHZWbkdD5kizbT2H+wOcw Feb 13 19:42:14.732416 sshd-session[1607]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 19:42:14.737107 systemd-logind[1488]: New session 2 of user core. Feb 13 19:42:14.746418 systemd[1]: Started session-2.scope - Session 2 of User core. Feb 13 19:42:14.803430 sshd[1609]: Connection closed by 10.0.0.1 port 45382 Feb 13 19:42:14.804038 sshd-session[1607]: pam_unix(sshd:session): session closed for user core Feb 13 19:42:14.819652 systemd[1]: sshd@1-10.0.0.27:22-10.0.0.1:45382.service: Deactivated successfully. Feb 13 19:42:14.821850 systemd[1]: session-2.scope: Deactivated successfully. Feb 13 19:42:14.823866 systemd-logind[1488]: Session 2 logged out. Waiting for processes to exit. Feb 13 19:42:14.834636 systemd[1]: Started sshd@2-10.0.0.27:22-10.0.0.1:45390.service - OpenSSH per-connection server daemon (10.0.0.1:45390). Feb 13 19:42:14.835902 systemd-logind[1488]: Removed session 2. Feb 13 19:42:14.870253 sshd[1614]: Accepted publickey for core from 10.0.0.1 port 45390 ssh2: RSA SHA256:8WP2kqV5KzwZsuVRMXRFZkAHZWbkdD5kizbT2H+wOcw Feb 13 19:42:14.872078 sshd-session[1614]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 19:42:14.877200 systemd-logind[1488]: New session 3 of user core. Feb 13 19:42:14.886457 systemd[1]: Started session-3.scope - Session 3 of User core. Feb 13 19:42:14.936844 sshd[1616]: Connection closed by 10.0.0.1 port 45390 Feb 13 19:42:14.937209 sshd-session[1614]: pam_unix(sshd:session): session closed for user core Feb 13 19:42:14.954145 systemd[1]: sshd@2-10.0.0.27:22-10.0.0.1:45390.service: Deactivated successfully. Feb 13 19:42:14.956020 systemd[1]: session-3.scope: Deactivated successfully. Feb 13 19:42:14.957404 systemd-logind[1488]: Session 3 logged out. Waiting for processes to exit. 
Feb 13 19:42:14.958732 systemd[1]: Started sshd@3-10.0.0.27:22-10.0.0.1:45392.service - OpenSSH per-connection server daemon (10.0.0.1:45392). Feb 13 19:42:14.959457 systemd-logind[1488]: Removed session 3. Feb 13 19:42:14.997744 sshd[1621]: Accepted publickey for core from 10.0.0.1 port 45392 ssh2: RSA SHA256:8WP2kqV5KzwZsuVRMXRFZkAHZWbkdD5kizbT2H+wOcw Feb 13 19:42:14.999611 sshd-session[1621]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 19:42:15.003907 systemd-logind[1488]: New session 4 of user core. Feb 13 19:42:15.013490 systemd[1]: Started session-4.scope - Session 4 of User core. Feb 13 19:42:15.071014 sshd[1623]: Connection closed by 10.0.0.1 port 45392 Feb 13 19:42:15.071458 sshd-session[1621]: pam_unix(sshd:session): session closed for user core Feb 13 19:42:15.081469 systemd[1]: sshd@3-10.0.0.27:22-10.0.0.1:45392.service: Deactivated successfully. Feb 13 19:42:15.083772 systemd[1]: session-4.scope: Deactivated successfully. Feb 13 19:42:15.085556 systemd-logind[1488]: Session 4 logged out. Waiting for processes to exit. Feb 13 19:42:15.095519 systemd[1]: Started sshd@4-10.0.0.27:22-10.0.0.1:45400.service - OpenSSH per-connection server daemon (10.0.0.1:45400). Feb 13 19:42:15.096420 systemd-logind[1488]: Removed session 4. Feb 13 19:42:15.128852 sshd[1628]: Accepted publickey for core from 10.0.0.1 port 45400 ssh2: RSA SHA256:8WP2kqV5KzwZsuVRMXRFZkAHZWbkdD5kizbT2H+wOcw Feb 13 19:42:15.130578 sshd-session[1628]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 19:42:15.134889 systemd-logind[1488]: New session 5 of user core. Feb 13 19:42:15.151373 systemd[1]: Started session-5.scope - Session 5 of User core. 
Feb 13 19:42:15.212727 sudo[1631]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Feb 13 19:42:15.213110 sudo[1631]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Feb 13 19:42:15.232915 sudo[1631]: pam_unix(sudo:session): session closed for user root Feb 13 19:42:15.234688 sshd[1630]: Connection closed by 10.0.0.1 port 45400 Feb 13 19:42:15.235181 sshd-session[1628]: pam_unix(sshd:session): session closed for user core Feb 13 19:42:15.249893 systemd[1]: sshd@4-10.0.0.27:22-10.0.0.1:45400.service: Deactivated successfully. Feb 13 19:42:15.251836 systemd[1]: session-5.scope: Deactivated successfully. Feb 13 19:42:15.253523 systemd-logind[1488]: Session 5 logged out. Waiting for processes to exit. Feb 13 19:42:15.255108 systemd[1]: Started sshd@5-10.0.0.27:22-10.0.0.1:45410.service - OpenSSH per-connection server daemon (10.0.0.1:45410). Feb 13 19:42:15.256201 systemd-logind[1488]: Removed session 5. Feb 13 19:42:15.294938 sshd[1636]: Accepted publickey for core from 10.0.0.1 port 45410 ssh2: RSA SHA256:8WP2kqV5KzwZsuVRMXRFZkAHZWbkdD5kizbT2H+wOcw Feb 13 19:42:15.296908 sshd-session[1636]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 19:42:15.301359 systemd-logind[1488]: New session 6 of user core. Feb 13 19:42:15.318462 systemd[1]: Started session-6.scope - Session 6 of User core. 
Feb 13 19:42:15.373416 sudo[1640]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Feb 13 19:42:15.373769 sudo[1640]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Feb 13 19:42:15.377676 sudo[1640]: pam_unix(sudo:session): session closed for user root Feb 13 19:42:15.384135 sudo[1639]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules Feb 13 19:42:15.384580 sudo[1639]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Feb 13 19:42:15.404617 systemd[1]: Starting audit-rules.service - Load Audit Rules... Feb 13 19:42:15.437857 augenrules[1662]: No rules Feb 13 19:42:15.439973 systemd[1]: audit-rules.service: Deactivated successfully. Feb 13 19:42:15.440222 systemd[1]: Finished audit-rules.service - Load Audit Rules. Feb 13 19:42:15.441679 sudo[1639]: pam_unix(sudo:session): session closed for user root Feb 13 19:42:15.443439 sshd[1638]: Connection closed by 10.0.0.1 port 45410 Feb 13 19:42:15.443903 sshd-session[1636]: pam_unix(sshd:session): session closed for user core Feb 13 19:42:15.454783 systemd[1]: sshd@5-10.0.0.27:22-10.0.0.1:45410.service: Deactivated successfully. Feb 13 19:42:15.457104 systemd[1]: session-6.scope: Deactivated successfully. Feb 13 19:42:15.458686 systemd-logind[1488]: Session 6 logged out. Waiting for processes to exit. Feb 13 19:42:15.468668 systemd[1]: Started sshd@6-10.0.0.27:22-10.0.0.1:45424.service - OpenSSH per-connection server daemon (10.0.0.1:45424). Feb 13 19:42:15.469793 systemd-logind[1488]: Removed session 6. Feb 13 19:42:15.503915 sshd[1670]: Accepted publickey for core from 10.0.0.1 port 45424 ssh2: RSA SHA256:8WP2kqV5KzwZsuVRMXRFZkAHZWbkdD5kizbT2H+wOcw Feb 13 19:42:15.505590 sshd-session[1670]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 19:42:15.510651 systemd-logind[1488]: New session 7 of user core. 
Feb 13 19:42:15.520528 systemd[1]: Started session-7.scope - Session 7 of User core. Feb 13 19:42:15.576698 sudo[1673]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Feb 13 19:42:15.577195 sudo[1673]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Feb 13 19:42:15.600560 systemd[1]: Starting coreos-metadata.service - QEMU metadata agent... Feb 13 19:42:15.622257 systemd[1]: coreos-metadata.service: Deactivated successfully. Feb 13 19:42:15.622577 systemd[1]: Finished coreos-metadata.service - QEMU metadata agent. Feb 13 19:42:16.286049 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Feb 13 19:42:16.286282 systemd[1]: kubelet.service: Consumed 1.092s CPU time. Feb 13 19:42:16.296485 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Feb 13 19:42:16.323171 systemd[1]: Reloading requested from client PID 1715 ('systemctl') (unit session-7.scope)... Feb 13 19:42:16.323202 systemd[1]: Reloading... Feb 13 19:42:16.423270 zram_generator::config[1753]: No configuration found. Feb 13 19:42:17.840109 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Feb 13 19:42:17.925631 systemd[1]: Reloading finished in 1601 ms. Feb 13 19:42:17.984574 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Feb 13 19:42:17.986511 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Feb 13 19:42:17.990159 systemd[1]: kubelet.service: Deactivated successfully. Feb 13 19:42:17.990424 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Feb 13 19:42:17.992059 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Feb 13 19:42:18.161112 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Feb 13 19:42:18.166819 (kubelet)[1803]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Feb 13 19:42:18.211113 kubelet[1803]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Feb 13 19:42:18.211113 kubelet[1803]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Feb 13 19:42:18.211113 kubelet[1803]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Feb 13 19:42:18.211622 kubelet[1803]: I0213 19:42:18.211161 1803 server.go:215] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Feb 13 19:42:18.547225 kubelet[1803]: I0213 19:42:18.547088 1803 server.go:520] "Kubelet version" kubeletVersion="v1.32.0" Feb 13 19:42:18.547225 kubelet[1803]: I0213 19:42:18.547133 1803 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Feb 13 19:42:18.547522 kubelet[1803]: I0213 19:42:18.547495 1803 server.go:954] "Client rotation is on, will bootstrap in background" Feb 13 19:42:18.570254 kubelet[1803]: I0213 19:42:18.570207 1803 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Feb 13 19:42:18.578535 kubelet[1803]: E0213 19:42:18.578489 1803 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Feb 13 19:42:18.578535 kubelet[1803]: I0213 19:42:18.578531 1803 server.go:1421] "CRI 
implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." Feb 13 19:42:18.584577 kubelet[1803]: I0213 19:42:18.584547 1803 server.go:772] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" Feb 13 19:42:18.585366 kubelet[1803]: I0213 19:42:18.585313 1803 container_manager_linux.go:268] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Feb 13 19:42:18.585523 kubelet[1803]: I0213 19:42:18.585354 1803 container_manager_linux.go:273] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"10.0.0.27","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Feb 13 19:42:18.585523 kubelet[1803]: I0213 19:42:18.585521 1803 topology_manager.go:138] "Creating topology manager with none policy" Feb 13 19:42:18.585680 kubelet[1803]: I0213 19:42:18.585531 1803 container_manager_linux.go:304] "Creating device plugin manager" Feb 13 19:42:18.585680 kubelet[1803]: I0213 19:42:18.585677 1803 state_mem.go:36] "Initialized new in-memory state store" Feb 13 19:42:18.590177 kubelet[1803]: I0213 19:42:18.590117 1803 kubelet.go:446] "Attempting to sync node with API server" Feb 13 19:42:18.590177 kubelet[1803]: I0213 19:42:18.590158 1803 kubelet.go:341] "Adding static pod path" path="/etc/kubernetes/manifests" Feb 13 19:42:18.590177 kubelet[1803]: I0213 19:42:18.590188 1803 kubelet.go:352] "Adding apiserver pod source" Feb 13 19:42:18.590493 kubelet[1803]: I0213 19:42:18.590203 1803 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Feb 13 19:42:18.591603 kubelet[1803]: E0213 19:42:18.591205 1803 file.go:98] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 19:42:18.591603 kubelet[1803]: E0213 19:42:18.591562 1803 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 19:42:18.593611 kubelet[1803]: I0213 19:42:18.593589 1803 kuberuntime_manager.go:269] "Container runtime initialized" containerRuntime="containerd" version="v1.7.23" apiVersion="v1" Feb 13 19:42:18.594088 kubelet[1803]: I0213 19:42:18.594063 1803 kubelet.go:890] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Feb 13 19:42:18.594610 kubelet[1803]: W0213 19:42:18.594588 1803 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating.
Feb 13 19:42:18.596280 kubelet[1803]: W0213 19:42:18.596245 1803 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes "10.0.0.27" is forbidden: User "system:anonymous" cannot list resource "nodes" in API group "" at the cluster scope
Feb 13 19:42:18.596376 kubelet[1803]: E0213 19:42:18.596341 1803 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes \"10.0.0.27\" is forbidden: User \"system:anonymous\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError"
Feb 13 19:42:18.596451 kubelet[1803]: W0213 19:42:18.596260 1803 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:anonymous" cannot list resource "services" in API group "" at the cluster scope
Feb 13 19:42:18.596519 kubelet[1803]: E0213 19:42:18.596451 1803 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User \"system:anonymous\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError"
Feb 13 19:42:18.596597 kubelet[1803]: I0213 19:42:18.596575 1803 watchdog_linux.go:99] "Systemd watchdog is not enabled"
Feb 13 19:42:18.596638 kubelet[1803]: I0213 19:42:18.596619 1803 server.go:1287] "Started kubelet"
Feb 13 19:42:18.598680 kubelet[1803]: I0213 19:42:18.597488 1803 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
Feb 13 19:42:18.598680 kubelet[1803]: I0213 19:42:18.597910 1803 server.go:243] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Feb 13 19:42:18.598680 kubelet[1803]: I0213 19:42:18.597976 1803 server.go:169] "Starting to listen" address="0.0.0.0" port=10250
Feb 13 19:42:18.598680 kubelet[1803]: I0213 19:42:18.598198 1803 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Feb 13 19:42:18.598862 kubelet[1803]: I0213 19:42:18.598840 1803 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key"
Feb 13 19:42:18.599497 kubelet[1803]: I0213 19:42:18.599063 1803 server.go:490] "Adding debug handlers to kubelet server"
Feb 13 19:42:18.601260 kubelet[1803]: E0213 19:42:18.600607 1803 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"10.0.0.27\" not found"
Feb 13 19:42:18.601260 kubelet[1803]: I0213 19:42:18.600689 1803 volume_manager.go:297] "Starting Kubelet Volume Manager"
Feb 13 19:42:18.601260 kubelet[1803]: I0213 19:42:18.600816 1803 desired_state_of_world_populator.go:149] "Desired state populator starts to run"
Feb 13 19:42:18.601260 kubelet[1803]: I0213 19:42:18.600878 1803 reconciler.go:26] "Reconciler: start to sync state"
Feb 13 19:42:18.602258 kubelet[1803]: I0213 19:42:18.601909 1803 factory.go:221] Registration of the systemd container factory successfully
Feb 13 19:42:18.602258 kubelet[1803]: I0213 19:42:18.602003 1803 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory
Feb 13 19:42:18.603040 kubelet[1803]: E0213 19:42:18.603017 1803 kubelet.go:1561] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem"
Feb 13 19:42:18.603160 kubelet[1803]: I0213 19:42:18.603097 1803 factory.go:221] Registration of the containerd container factory successfully
Feb 13 19:42:18.612391 kubelet[1803]: E0213 19:42:18.612368 1803 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"10.0.0.27\" not found" node="10.0.0.27"
Feb 13 19:42:18.614613 kubelet[1803]: I0213 19:42:18.614590 1803 cpu_manager.go:221] "Starting CPU manager" policy="none"
Feb 13 19:42:18.614613 kubelet[1803]: I0213 19:42:18.614608 1803 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s"
Feb 13 19:42:18.614719 kubelet[1803]: I0213 19:42:18.614627 1803 state_mem.go:36] "Initialized new in-memory state store"
Feb 13 19:42:18.700792 kubelet[1803]: E0213 19:42:18.700713 1803 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"10.0.0.27\" not found"
Feb 13 19:42:18.801441 kubelet[1803]: E0213 19:42:18.801232 1803 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"10.0.0.27\" not found"
Feb 13 19:42:18.901894 kubelet[1803]: E0213 19:42:18.901814 1803 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"10.0.0.27\" not found"
Feb 13 19:42:18.951532 kubelet[1803]: I0213 19:42:18.951466 1803 policy_none.go:49] "None policy: Start"
Feb 13 19:42:18.951532 kubelet[1803]: I0213 19:42:18.951536 1803 memory_manager.go:186] "Starting memorymanager" policy="None"
Feb 13 19:42:18.951698 kubelet[1803]: I0213 19:42:18.951558 1803 state_mem.go:35] "Initializing new in-memory state store"
Feb 13 19:42:18.961527 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice.
Feb 13 19:42:18.970305 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice.
Feb 13 19:42:18.973878 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice.
Feb 13 19:42:18.983359 kubelet[1803]: I0213 19:42:18.983323 1803 manager.go:519] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found"
Feb 13 19:42:18.985017 kubelet[1803]: I0213 19:42:18.983382 1803 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4"
Feb 13 19:42:18.985017 kubelet[1803]: I0213 19:42:18.983711 1803 eviction_manager.go:189] "Eviction manager: starting control loop"
Feb 13 19:42:18.985017 kubelet[1803]: I0213 19:42:18.983736 1803 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s"
Feb 13 19:42:18.985017 kubelet[1803]: I0213 19:42:18.984110 1803 plugin_manager.go:118] "Starting Kubelet Plugin Manager"
Feb 13 19:42:18.985593 kubelet[1803]: I0213 19:42:18.985205 1803 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6"
Feb 13 19:42:18.985593 kubelet[1803]: I0213 19:42:18.985228 1803 status_manager.go:227] "Starting to sync pod status with apiserver"
Feb 13 19:42:18.985593 kubelet[1803]: I0213 19:42:18.985263 1803 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started."
Feb 13 19:42:18.985593 kubelet[1803]: I0213 19:42:18.985273 1803 kubelet.go:2388] "Starting kubelet main sync loop"
Feb 13 19:42:18.985593 kubelet[1803]: E0213 19:42:18.985404 1803 kubelet.go:2412] "Skipping pod synchronization" err="PLEG is not healthy: pleg has yet to be successful"
Feb 13 19:42:18.985899 kubelet[1803]: E0213 19:42:18.985878 1803 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime"
Feb 13 19:42:18.986228 kubelet[1803]: E0213 19:42:18.986202 1803 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"10.0.0.27\" not found"
Feb 13 19:42:19.085515 kubelet[1803]: I0213 19:42:19.085383 1803 kubelet_node_status.go:76] "Attempting to register node" node="10.0.0.27"
Feb 13 19:42:19.090612 kubelet[1803]: I0213 19:42:19.090588 1803 kubelet_node_status.go:79] "Successfully registered node" node="10.0.0.27"
Feb 13 19:42:19.090612 kubelet[1803]: E0213 19:42:19.090610 1803 kubelet_node_status.go:549] "Error updating node status, will retry" err="error getting node \"10.0.0.27\": node \"10.0.0.27\" not found"
Feb 13 19:42:19.093831 kubelet[1803]: E0213 19:42:19.093810 1803 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"10.0.0.27\" not found"
Feb 13 19:42:19.188798 sudo[1673]: pam_unix(sudo:session): session closed for user root
Feb 13 19:42:19.190346 sshd[1672]: Connection closed by 10.0.0.1 port 45424
Feb 13 19:42:19.190743 sshd-session[1670]: pam_unix(sshd:session): session closed for user core
Feb 13 19:42:19.194368 systemd[1]: sshd@6-10.0.0.27:22-10.0.0.1:45424.service: Deactivated successfully.
Feb 13 19:42:19.194500 kubelet[1803]: E0213 19:42:19.194473 1803 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"10.0.0.27\" not found"
Feb 13 19:42:19.196269 systemd[1]: session-7.scope: Deactivated successfully.
Feb 13 19:42:19.196894 systemd-logind[1488]: Session 7 logged out. Waiting for processes to exit.
Feb 13 19:42:19.197919 systemd-logind[1488]: Removed session 7.
Feb 13 19:42:19.294998 kubelet[1803]: E0213 19:42:19.294947 1803 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"10.0.0.27\" not found"
Feb 13 19:42:19.395872 kubelet[1803]: E0213 19:42:19.395674 1803 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"10.0.0.27\" not found"
Feb 13 19:42:19.496403 kubelet[1803]: E0213 19:42:19.496327 1803 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"10.0.0.27\" not found"
Feb 13 19:42:19.550070 kubelet[1803]: I0213 19:42:19.550007 1803 transport.go:147] "Certificate rotation detected, shutting down client connections to start using new credentials"
Feb 13 19:42:19.550277 kubelet[1803]: W0213 19:42:19.550225 1803 reflector.go:492] k8s.io/client-go/informers/factory.go:160: watch of *v1.CSIDriver ended with: very short watch: k8s.io/client-go/informers/factory.go:160: Unexpected watch close - watch lasted less than a second and no items received
Feb 13 19:42:19.550277 kubelet[1803]: W0213 19:42:19.550225 1803 reflector.go:492] k8s.io/client-go/informers/factory.go:160: watch of *v1.RuntimeClass ended with: very short watch: k8s.io/client-go/informers/factory.go:160: Unexpected watch close - watch lasted less than a second and no items received
Feb 13 19:42:19.592696 kubelet[1803]: E0213 19:42:19.592631 1803 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 19:42:19.597150 kubelet[1803]: E0213 19:42:19.597081 1803 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"10.0.0.27\" not found"
Feb 13 19:42:19.698306 kubelet[1803]: E0213 19:42:19.698070 1803 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"10.0.0.27\" not found"
Feb 13 19:42:19.798871 kubelet[1803]: E0213 19:42:19.798721 1803 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"10.0.0.27\" not found"
Feb 13 19:42:19.899603 kubelet[1803]: E0213 19:42:19.899445 1803 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"10.0.0.27\" not found"
Feb 13 19:42:20.001524 kubelet[1803]: I0213 19:42:20.001374 1803 kuberuntime_manager.go:1702] "Updating runtime config through cri with podcidr" CIDR="192.168.1.0/24"
Feb 13 19:42:20.002011 containerd[1501]: time="2025-02-13T19:42:20.001934230Z" level=info msg="No cni config template is specified, wait for other system components to drop the config."
Feb 13 19:42:20.002461 kubelet[1803]: I0213 19:42:20.002313 1803 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.1.0/24"
Feb 13 19:42:20.592625 kubelet[1803]: I0213 19:42:20.592551 1803 apiserver.go:52] "Watching apiserver"
Feb 13 19:42:20.593159 kubelet[1803]: E0213 19:42:20.592838 1803 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 19:42:20.620664 systemd[1]: Created slice kubepods-besteffort-pod1ec8dfc4_0d85_4282_b42a_85ef2f5ee55c.slice - libcontainer container kubepods-besteffort-pod1ec8dfc4_0d85_4282_b42a_85ef2f5ee55c.slice.
Feb 13 19:42:20.630698 systemd[1]: Created slice kubepods-burstable-pod5d101c34_7c09_44b3_b835_f33a284d43df.slice - libcontainer container kubepods-burstable-pod5d101c34_7c09_44b3_b835_f33a284d43df.slice.
Feb 13 19:42:20.702651 kubelet[1803]: I0213 19:42:20.702566 1803 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world"
Feb 13 19:42:20.713435 kubelet[1803]: I0213 19:42:20.713369 1803 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/5d101c34-7c09-44b3-b835-f33a284d43df-etc-cni-netd\") pod \"cilium-5krpg\" (UID: \"5d101c34-7c09-44b3-b835-f33a284d43df\") " pod="kube-system/cilium-5krpg"
Feb 13 19:42:20.713435 kubelet[1803]: I0213 19:42:20.713427 1803 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/5d101c34-7c09-44b3-b835-f33a284d43df-clustermesh-secrets\") pod \"cilium-5krpg\" (UID: \"5d101c34-7c09-44b3-b835-f33a284d43df\") " pod="kube-system/cilium-5krpg"
Feb 13 19:42:20.713435 kubelet[1803]: I0213 19:42:20.713456 1803 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/5d101c34-7c09-44b3-b835-f33a284d43df-hubble-tls\") pod \"cilium-5krpg\" (UID: \"5d101c34-7c09-44b3-b835-f33a284d43df\") " pod="kube-system/cilium-5krpg"
Feb 13 19:42:20.713684 kubelet[1803]: I0213 19:42:20.713483 1803 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qpc62\" (UniqueName: \"kubernetes.io/projected/5d101c34-7c09-44b3-b835-f33a284d43df-kube-api-access-qpc62\") pod \"cilium-5krpg\" (UID: \"5d101c34-7c09-44b3-b835-f33a284d43df\") " pod="kube-system/cilium-5krpg"
Feb 13 19:42:20.713684 kubelet[1803]: I0213 19:42:20.713506 1803 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/5d101c34-7c09-44b3-b835-f33a284d43df-bpf-maps\") pod \"cilium-5krpg\" (UID: \"5d101c34-7c09-44b3-b835-f33a284d43df\") " pod="kube-system/cilium-5krpg"
Feb 13 19:42:20.713684 kubelet[1803]: I0213 19:42:20.713545 1803 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/5d101c34-7c09-44b3-b835-f33a284d43df-cni-path\") pod \"cilium-5krpg\" (UID: \"5d101c34-7c09-44b3-b835-f33a284d43df\") " pod="kube-system/cilium-5krpg"
Feb 13 19:42:20.713684 kubelet[1803]: I0213 19:42:20.713584 1803 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/1ec8dfc4-0d85-4282-b42a-85ef2f5ee55c-xtables-lock\") pod \"kube-proxy-qgk8k\" (UID: \"1ec8dfc4-0d85-4282-b42a-85ef2f5ee55c\") " pod="kube-system/kube-proxy-qgk8k"
Feb 13 19:42:20.713684 kubelet[1803]: I0213 19:42:20.713604 1803 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/5d101c34-7c09-44b3-b835-f33a284d43df-hostproc\") pod \"cilium-5krpg\" (UID: \"5d101c34-7c09-44b3-b835-f33a284d43df\") " pod="kube-system/cilium-5krpg"
Feb 13 19:42:20.713684 kubelet[1803]: I0213 19:42:20.713623 1803 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/5d101c34-7c09-44b3-b835-f33a284d43df-lib-modules\") pod \"cilium-5krpg\" (UID: \"5d101c34-7c09-44b3-b835-f33a284d43df\") " pod="kube-system/cilium-5krpg"
Feb 13 19:42:20.713836 kubelet[1803]: I0213 19:42:20.713643 1803 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/5d101c34-7c09-44b3-b835-f33a284d43df-host-proc-sys-net\") pod \"cilium-5krpg\" (UID: \"5d101c34-7c09-44b3-b835-f33a284d43df\") " pod="kube-system/cilium-5krpg"
Feb 13 19:42:20.713836 kubelet[1803]: I0213 19:42:20.713676 1803 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l85pg\" (UniqueName: \"kubernetes.io/projected/1ec8dfc4-0d85-4282-b42a-85ef2f5ee55c-kube-api-access-l85pg\") pod \"kube-proxy-qgk8k\" (UID: \"1ec8dfc4-0d85-4282-b42a-85ef2f5ee55c\") " pod="kube-system/kube-proxy-qgk8k"
Feb 13 19:42:20.713836 kubelet[1803]: I0213 19:42:20.713701 1803 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/5d101c34-7c09-44b3-b835-f33a284d43df-cilium-cgroup\") pod \"cilium-5krpg\" (UID: \"5d101c34-7c09-44b3-b835-f33a284d43df\") " pod="kube-system/cilium-5krpg"
Feb 13 19:42:20.713836 kubelet[1803]: I0213 19:42:20.713724 1803 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/5d101c34-7c09-44b3-b835-f33a284d43df-xtables-lock\") pod \"cilium-5krpg\" (UID: \"5d101c34-7c09-44b3-b835-f33a284d43df\") " pod="kube-system/cilium-5krpg"
Feb 13 19:42:20.713836 kubelet[1803]: I0213 19:42:20.713743 1803 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/5d101c34-7c09-44b3-b835-f33a284d43df-host-proc-sys-kernel\") pod \"cilium-5krpg\" (UID: \"5d101c34-7c09-44b3-b835-f33a284d43df\") " pod="kube-system/cilium-5krpg"
Feb 13 19:42:20.713963 kubelet[1803]: I0213 19:42:20.713780 1803 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/1ec8dfc4-0d85-4282-b42a-85ef2f5ee55c-kube-proxy\") pod \"kube-proxy-qgk8k\" (UID: \"1ec8dfc4-0d85-4282-b42a-85ef2f5ee55c\") " pod="kube-system/kube-proxy-qgk8k"
Feb 13 19:42:20.713963 kubelet[1803]: I0213 19:42:20.713809 1803 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/1ec8dfc4-0d85-4282-b42a-85ef2f5ee55c-lib-modules\") pod \"kube-proxy-qgk8k\" (UID: \"1ec8dfc4-0d85-4282-b42a-85ef2f5ee55c\") " pod="kube-system/kube-proxy-qgk8k"
Feb 13 19:42:20.713963 kubelet[1803]: I0213 19:42:20.713827 1803 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/5d101c34-7c09-44b3-b835-f33a284d43df-cilium-run\") pod \"cilium-5krpg\" (UID: \"5d101c34-7c09-44b3-b835-f33a284d43df\") " pod="kube-system/cilium-5krpg"
Feb 13 19:42:20.713963 kubelet[1803]: I0213 19:42:20.713844 1803 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/5d101c34-7c09-44b3-b835-f33a284d43df-cilium-config-path\") pod \"cilium-5krpg\" (UID: \"5d101c34-7c09-44b3-b835-f33a284d43df\") " pod="kube-system/cilium-5krpg"
Feb 13 19:42:20.929970 kubelet[1803]: E0213 19:42:20.929804 1803 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 13 19:42:20.930709 containerd[1501]: time="2025-02-13T19:42:20.930671492Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-qgk8k,Uid:1ec8dfc4-0d85-4282-b42a-85ef2f5ee55c,Namespace:kube-system,Attempt:0,}"
Feb 13 19:42:20.945377 kubelet[1803]: E0213 19:42:20.945335 1803 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 13 19:42:20.946061 containerd[1501]: time="2025-02-13T19:42:20.946018448Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-5krpg,Uid:5d101c34-7c09-44b3-b835-f33a284d43df,Namespace:kube-system,Attempt:0,}"
Feb 13 19:42:21.594078 kubelet[1803]: E0213 19:42:21.594000 1803 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 19:42:22.406074 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3580972519.mount: Deactivated successfully.
Feb 13 19:42:22.415524 containerd[1501]: time="2025-02-13T19:42:22.415471943Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Feb 13 19:42:22.418050 containerd[1501]: time="2025-02-13T19:42:22.417998190Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=312056"
Feb 13 19:42:22.418959 containerd[1501]: time="2025-02-13T19:42:22.418917503Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Feb 13 19:42:22.420387 containerd[1501]: time="2025-02-13T19:42:22.420353987Z" level=info msg="ImageCreate event name:\"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Feb 13 19:42:22.421589 containerd[1501]: time="2025-02-13T19:42:22.421546122Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0"
Feb 13 19:42:22.424635 containerd[1501]: time="2025-02-13T19:42:22.424569871Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Feb 13 19:42:22.425291 containerd[1501]: time="2025-02-13T19:42:22.425264133Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 1.494461645s"
Feb 13 19:42:22.427522 containerd[1501]: time="2025-02-13T19:42:22.427488584Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 1.481364608s"
Feb 13 19:42:22.542793 containerd[1501]: time="2025-02-13T19:42:22.542551136Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Feb 13 19:42:22.542793 containerd[1501]: time="2025-02-13T19:42:22.542630445Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Feb 13 19:42:22.542793 containerd[1501]: time="2025-02-13T19:42:22.542652546Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Feb 13 19:42:22.542793 containerd[1501]: time="2025-02-13T19:42:22.542759567Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Feb 13 19:42:22.543815 containerd[1501]: time="2025-02-13T19:42:22.541670144Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Feb 13 19:42:22.543815 containerd[1501]: time="2025-02-13T19:42:22.543772336Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Feb 13 19:42:22.543815 containerd[1501]: time="2025-02-13T19:42:22.543793916Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Feb 13 19:42:22.544003 containerd[1501]: time="2025-02-13T19:42:22.543897120Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Feb 13 19:42:22.594906 kubelet[1803]: E0213 19:42:22.594835 1803 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 19:42:22.618407 systemd[1]: Started cri-containerd-962441a5c3c0fd709dd7a95e03565c7f0afb64f01757b62dcfeaaff5a91e93b5.scope - libcontainer container 962441a5c3c0fd709dd7a95e03565c7f0afb64f01757b62dcfeaaff5a91e93b5.
Feb 13 19:42:22.622122 systemd[1]: Started cri-containerd-6fa5e7ac9e76ae3da5062a0845c8c5e120f33f7877038c645f33dc65289fb1ef.scope - libcontainer container 6fa5e7ac9e76ae3da5062a0845c8c5e120f33f7877038c645f33dc65289fb1ef.
Feb 13 19:42:22.645643 containerd[1501]: time="2025-02-13T19:42:22.645584313Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-5krpg,Uid:5d101c34-7c09-44b3-b835-f33a284d43df,Namespace:kube-system,Attempt:0,} returns sandbox id \"962441a5c3c0fd709dd7a95e03565c7f0afb64f01757b62dcfeaaff5a91e93b5\""
Feb 13 19:42:22.646792 containerd[1501]: time="2025-02-13T19:42:22.646676672Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-qgk8k,Uid:1ec8dfc4-0d85-4282-b42a-85ef2f5ee55c,Namespace:kube-system,Attempt:0,} returns sandbox id \"6fa5e7ac9e76ae3da5062a0845c8c5e120f33f7877038c645f33dc65289fb1ef\""
Feb 13 19:42:22.646843 kubelet[1803]: E0213 19:42:22.646769 1803 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 13 19:42:22.647717 kubelet[1803]: E0213 19:42:22.647591 1803 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 13 19:42:22.648049 containerd[1501]: time="2025-02-13T19:42:22.648022796Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\""
Feb 13 19:42:23.595415 kubelet[1803]: E0213 19:42:23.595340 1803 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 19:42:24.595678 kubelet[1803]: E0213 19:42:24.595637 1803 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 19:42:25.595869 kubelet[1803]: E0213 19:42:25.595801 1803 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 19:42:26.596582 kubelet[1803]: E0213 19:42:26.596513 1803 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 19:42:27.597045 kubelet[1803]: E0213 19:42:27.596966 1803 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 19:42:28.598077 kubelet[1803]: E0213 19:42:28.598006 1803 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 19:42:29.507688 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2974420106.mount: Deactivated successfully.
Feb 13 19:42:29.599149 kubelet[1803]: E0213 19:42:29.599078 1803 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 19:42:30.599818 kubelet[1803]: E0213 19:42:30.599781 1803 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 19:42:31.600159 kubelet[1803]: E0213 19:42:31.600093 1803 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 19:42:32.102689 containerd[1501]: time="2025-02-13T19:42:32.102622387Z" level=info msg="ImageCreate event name:\"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Feb 13 19:42:32.103561 containerd[1501]: time="2025-02-13T19:42:32.103494252Z" level=info msg="stop pulling image quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5: active requests=0, bytes read=166730503"
Feb 13 19:42:32.104920 containerd[1501]: time="2025-02-13T19:42:32.104881042Z" level=info msg="ImageCreate event name:\"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Feb 13 19:42:32.106718 containerd[1501]: time="2025-02-13T19:42:32.106677711Z" level=info msg="Pulled image \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" with image id \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\", repo tag \"\", repo digest \"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\", size \"166719855\" in 9.458622925s"
Feb 13 19:42:32.106718 containerd[1501]: time="2025-02-13T19:42:32.106711535Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" returns image reference \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\""
Feb 13 19:42:32.107762 containerd[1501]: time="2025-02-13T19:42:32.107720887Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.2\""
Feb 13 19:42:32.109589 containerd[1501]: time="2025-02-13T19:42:32.109538154Z" level=info msg="CreateContainer within sandbox \"962441a5c3c0fd709dd7a95e03565c7f0afb64f01757b62dcfeaaff5a91e93b5\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}"
Feb 13 19:42:32.128257 containerd[1501]: time="2025-02-13T19:42:32.128193944Z" level=info msg="CreateContainer within sandbox \"962441a5c3c0fd709dd7a95e03565c7f0afb64f01757b62dcfeaaff5a91e93b5\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"058e87bf6297eb273bebbd342fe1902aeff4a0702c36477ab135c2c767401dfa\""
Feb 13 19:42:32.128782 containerd[1501]: time="2025-02-13T19:42:32.128745508Z" level=info msg="StartContainer for \"058e87bf6297eb273bebbd342fe1902aeff4a0702c36477ab135c2c767401dfa\""
Feb 13 19:42:32.159434 systemd[1]: Started cri-containerd-058e87bf6297eb273bebbd342fe1902aeff4a0702c36477ab135c2c767401dfa.scope - libcontainer container 058e87bf6297eb273bebbd342fe1902aeff4a0702c36477ab135c2c767401dfa.
Feb 13 19:42:32.202142 systemd[1]: cri-containerd-058e87bf6297eb273bebbd342fe1902aeff4a0702c36477ab135c2c767401dfa.scope: Deactivated successfully.
Feb 13 19:42:32.265311 containerd[1501]: time="2025-02-13T19:42:32.265261005Z" level=info msg="StartContainer for \"058e87bf6297eb273bebbd342fe1902aeff4a0702c36477ab135c2c767401dfa\" returns successfully"
Feb 13 19:42:32.601036 kubelet[1803]: E0213 19:42:32.600844 1803 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 19:42:32.913330 containerd[1501]: time="2025-02-13T19:42:32.913177558Z" level=info msg="shim disconnected" id=058e87bf6297eb273bebbd342fe1902aeff4a0702c36477ab135c2c767401dfa namespace=k8s.io
Feb 13 19:42:32.913330 containerd[1501]: time="2025-02-13T19:42:32.913228123Z" level=warning msg="cleaning up after shim disconnected" id=058e87bf6297eb273bebbd342fe1902aeff4a0702c36477ab135c2c767401dfa namespace=k8s.io
Feb 13 19:42:32.913330 containerd[1501]: time="2025-02-13T19:42:32.913250986Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Feb 13 19:42:33.009788 kubelet[1803]: E0213 19:42:33.009746 1803 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 13 19:42:33.011718 containerd[1501]: time="2025-02-13T19:42:33.011675422Z" level=info msg="CreateContainer within sandbox \"962441a5c3c0fd709dd7a95e03565c7f0afb64f01757b62dcfeaaff5a91e93b5\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}"
Feb 13 19:42:33.030007 containerd[1501]: time="2025-02-13T19:42:33.029949676Z" level=info msg="CreateContainer within sandbox \"962441a5c3c0fd709dd7a95e03565c7f0afb64f01757b62dcfeaaff5a91e93b5\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"8f75bf760536c8082d7737724e7750ec2da2afc2e0d3bac45754600caec569d0\""
Feb 13 19:42:33.030449 containerd[1501]: time="2025-02-13T19:42:33.030411482Z" level=info msg="StartContainer for \"8f75bf760536c8082d7737724e7750ec2da2afc2e0d3bac45754600caec569d0\""
Feb 13 19:42:33.059478 systemd[1]: Started cri-containerd-8f75bf760536c8082d7737724e7750ec2da2afc2e0d3bac45754600caec569d0.scope - libcontainer container 8f75bf760536c8082d7737724e7750ec2da2afc2e0d3bac45754600caec569d0.
Feb 13 19:42:33.091060 containerd[1501]: time="2025-02-13T19:42:33.090986710Z" level=info msg="StartContainer for \"8f75bf760536c8082d7737724e7750ec2da2afc2e0d3bac45754600caec569d0\" returns successfully"
Feb 13 19:42:33.103390 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Feb 13 19:42:33.103646 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Feb 13 19:42:33.103723 systemd[1]: Stopping systemd-sysctl.service - Apply Kernel Variables...
Feb 13 19:42:33.113781 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Feb 13 19:42:33.114089 systemd[1]: cri-containerd-8f75bf760536c8082d7737724e7750ec2da2afc2e0d3bac45754600caec569d0.scope: Deactivated successfully.
Feb 13 19:42:33.123253 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-058e87bf6297eb273bebbd342fe1902aeff4a0702c36477ab135c2c767401dfa-rootfs.mount: Deactivated successfully.
Feb 13 19:42:33.131942 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-8f75bf760536c8082d7737724e7750ec2da2afc2e0d3bac45754600caec569d0-rootfs.mount: Deactivated successfully.
Feb 13 19:42:33.137291 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Feb 13 19:42:33.138806 containerd[1501]: time="2025-02-13T19:42:33.138465834Z" level=info msg="shim disconnected" id=8f75bf760536c8082d7737724e7750ec2da2afc2e0d3bac45754600caec569d0 namespace=k8s.io
Feb 13 19:42:33.138806 containerd[1501]: time="2025-02-13T19:42:33.138536987Z" level=warning msg="cleaning up after shim disconnected" id=8f75bf760536c8082d7737724e7750ec2da2afc2e0d3bac45754600caec569d0 namespace=k8s.io
Feb 13 19:42:33.138806 containerd[1501]: time="2025-02-13T19:42:33.138551334Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Feb 13 19:42:33.601229 kubelet[1803]: E0213 19:42:33.601093 1803 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 19:42:34.012638 kubelet[1803]: E0213 19:42:34.012489 1803 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 13 19:42:34.014681 containerd[1501]: time="2025-02-13T19:42:34.014629741Z" level=info msg="CreateContainer within sandbox \"962441a5c3c0fd709dd7a95e03565c7f0afb64f01757b62dcfeaaff5a91e93b5\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}"
Feb 13 19:42:34.027900 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1385563535.mount: Deactivated successfully.
Feb 13 19:42:34.064513 containerd[1501]: time="2025-02-13T19:42:34.064443802Z" level=info msg="CreateContainer within sandbox \"962441a5c3c0fd709dd7a95e03565c7f0afb64f01757b62dcfeaaff5a91e93b5\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"34eeb12ab7b127789acf72cef257dc130271b6b1f7eb0da50b33208579e576af\""
Feb 13 19:42:34.064979 containerd[1501]: time="2025-02-13T19:42:34.064938590Z" level=info msg="StartContainer for \"34eeb12ab7b127789acf72cef257dc130271b6b1f7eb0da50b33208579e576af\""
Feb 13 19:42:34.131552 systemd[1]: Started cri-containerd-34eeb12ab7b127789acf72cef257dc130271b6b1f7eb0da50b33208579e576af.scope - libcontainer container 34eeb12ab7b127789acf72cef257dc130271b6b1f7eb0da50b33208579e576af.
Feb 13 19:42:34.186938 systemd[1]: cri-containerd-34eeb12ab7b127789acf72cef257dc130271b6b1f7eb0da50b33208579e576af.scope: Deactivated successfully.
Feb 13 19:42:34.187742 containerd[1501]: time="2025-02-13T19:42:34.187702853Z" level=info msg="StartContainer for \"34eeb12ab7b127789acf72cef257dc130271b6b1f7eb0da50b33208579e576af\" returns successfully"
Feb 13 19:42:34.217667 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-34eeb12ab7b127789acf72cef257dc130271b6b1f7eb0da50b33208579e576af-rootfs.mount: Deactivated successfully.
Feb 13 19:42:34.491739 containerd[1501]: time="2025-02-13T19:42:34.491586855Z" level=info msg="shim disconnected" id=34eeb12ab7b127789acf72cef257dc130271b6b1f7eb0da50b33208579e576af namespace=k8s.io
Feb 13 19:42:34.491739 containerd[1501]: time="2025-02-13T19:42:34.491640466Z" level=warning msg="cleaning up after shim disconnected" id=34eeb12ab7b127789acf72cef257dc130271b6b1f7eb0da50b33208579e576af namespace=k8s.io
Feb 13 19:42:34.491739 containerd[1501]: time="2025-02-13T19:42:34.491651887Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Feb 13 19:42:34.534978 containerd[1501]: time="2025-02-13T19:42:34.534891762Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.32.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Feb 13 19:42:34.535699 containerd[1501]: time="2025-02-13T19:42:34.535646717Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.32.2: active requests=0, bytes read=30908839"
Feb 13 19:42:34.536772 containerd[1501]: time="2025-02-13T19:42:34.536745708Z" level=info msg="ImageCreate event name:\"sha256:f1332858868e1c6a905123b21e2e322ab45a5b99a3532e68ff49a87c2266ebc5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Feb 13 19:42:34.538704 containerd[1501]: time="2025-02-13T19:42:34.538676248Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:83c025f0faa6799fab6645102a98138e39a9a7db2be3bc792c79d72659b1805d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Feb 13 19:42:34.539342 containerd[1501]: time="2025-02-13T19:42:34.539314234Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.32.2\" with image id \"sha256:f1332858868e1c6a905123b21e2e322ab45a5b99a3532e68ff49a87c2266ebc5\", repo tag \"registry.k8s.io/kube-proxy:v1.32.2\", repo digest \"registry.k8s.io/kube-proxy@sha256:83c025f0faa6799fab6645102a98138e39a9a7db2be3bc792c79d72659b1805d\", size \"30907858\" in 2.431558522s"
Feb 13 19:42:34.539391 containerd[1501]: time="2025-02-13T19:42:34.539340884Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.2\" returns image reference \"sha256:f1332858868e1c6a905123b21e2e322ab45a5b99a3532e68ff49a87c2266ebc5\""
Feb 13 19:42:34.541165 containerd[1501]: time="2025-02-13T19:42:34.541121563Z" level=info msg="CreateContainer within sandbox \"6fa5e7ac9e76ae3da5062a0845c8c5e120f33f7877038c645f33dc65289fb1ef\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}"
Feb 13 19:42:34.554787 containerd[1501]: time="2025-02-13T19:42:34.554736290Z" level=info msg="CreateContainer within sandbox \"6fa5e7ac9e76ae3da5062a0845c8c5e120f33f7877038c645f33dc65289fb1ef\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"cc86591afd55779825d85a1e1750b5638caacc0044b6510253f2687a7b1fc60c\""
Feb 13 19:42:34.556303 containerd[1501]: time="2025-02-13T19:42:34.555292754Z" level=info msg="StartContainer for \"cc86591afd55779825d85a1e1750b5638caacc0044b6510253f2687a7b1fc60c\""
Feb 13 19:42:34.588409 systemd[1]: Started cri-containerd-cc86591afd55779825d85a1e1750b5638caacc0044b6510253f2687a7b1fc60c.scope - libcontainer container cc86591afd55779825d85a1e1750b5638caacc0044b6510253f2687a7b1fc60c.
Feb 13 19:42:34.601982 kubelet[1803]: E0213 19:42:34.601930 1803 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 19:42:34.624061 containerd[1501]: time="2025-02-13T19:42:34.623760921Z" level=info msg="StartContainer for \"cc86591afd55779825d85a1e1750b5638caacc0044b6510253f2687a7b1fc60c\" returns successfully"
Feb 13 19:42:35.015613 kubelet[1803]: E0213 19:42:35.015578 1803 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 13 19:42:35.017095 kubelet[1803]: E0213 19:42:35.017078 1803 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 13 19:42:35.019178 containerd[1501]: time="2025-02-13T19:42:35.019133325Z" level=info msg="CreateContainer within sandbox \"962441a5c3c0fd709dd7a95e03565c7f0afb64f01757b62dcfeaaff5a91e93b5\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}"
Feb 13 19:42:35.036184 containerd[1501]: time="2025-02-13T19:42:35.036126116Z" level=info msg="CreateContainer within sandbox \"962441a5c3c0fd709dd7a95e03565c7f0afb64f01757b62dcfeaaff5a91e93b5\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"ee96a3aa8b77c6a7e5a93e5c825eb582865422bf84754e524963baf6f9f5dd95\""
Feb 13 19:42:35.036749 containerd[1501]: time="2025-02-13T19:42:35.036721062Z" level=info msg="StartContainer for \"ee96a3aa8b77c6a7e5a93e5c825eb582865422bf84754e524963baf6f9f5dd95\""
Feb 13 19:42:35.043294 kubelet[1803]: I0213 19:42:35.042406 1803 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-qgk8k" podStartSLOduration=4.150256273 podStartE2EDuration="16.042391163s" podCreationTimestamp="2025-02-13 19:42:19 +0000 UTC" firstStartedPulling="2025-02-13 19:42:22.647857646 +0000 UTC m=+4.476838712" lastFinishedPulling="2025-02-13 19:42:34.539992536 +0000 UTC m=+16.368973602" observedRunningTime="2025-02-13 19:42:35.026147847 +0000 UTC m=+16.855128913" watchObservedRunningTime="2025-02-13 19:42:35.042391163 +0000 UTC m=+16.871372229"
Feb 13 19:42:35.064505 systemd[1]: Started cri-containerd-ee96a3aa8b77c6a7e5a93e5c825eb582865422bf84754e524963baf6f9f5dd95.scope - libcontainer container ee96a3aa8b77c6a7e5a93e5c825eb582865422bf84754e524963baf6f9f5dd95.
Feb 13 19:42:35.089231 systemd[1]: cri-containerd-ee96a3aa8b77c6a7e5a93e5c825eb582865422bf84754e524963baf6f9f5dd95.scope: Deactivated successfully.
Feb 13 19:42:35.093121 containerd[1501]: time="2025-02-13T19:42:35.093082459Z" level=info msg="StartContainer for \"ee96a3aa8b77c6a7e5a93e5c825eb582865422bf84754e524963baf6f9f5dd95\" returns successfully"
Feb 13 19:42:35.219424 containerd[1501]: time="2025-02-13T19:42:35.219352836Z" level=info msg="shim disconnected" id=ee96a3aa8b77c6a7e5a93e5c825eb582865422bf84754e524963baf6f9f5dd95 namespace=k8s.io
Feb 13 19:42:35.219424 containerd[1501]: time="2025-02-13T19:42:35.219412909Z" level=warning msg="cleaning up after shim disconnected" id=ee96a3aa8b77c6a7e5a93e5c825eb582865422bf84754e524963baf6f9f5dd95 namespace=k8s.io
Feb 13 19:42:35.219424 containerd[1501]: time="2025-02-13T19:42:35.219422627Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Feb 13 19:42:35.602945 kubelet[1803]: E0213 19:42:35.602886 1803 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 19:42:36.021337 kubelet[1803]: E0213 19:42:36.021168 1803 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 13 19:42:36.021337 kubelet[1803]: E0213 19:42:36.021286 1803 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 13 19:42:36.023102 containerd[1501]: time="2025-02-13T19:42:36.023059145Z" level=info msg="CreateContainer within sandbox \"962441a5c3c0fd709dd7a95e03565c7f0afb64f01757b62dcfeaaff5a91e93b5\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}"
Feb 13 19:42:36.044343 containerd[1501]: time="2025-02-13T19:42:36.044291666Z" level=info msg="CreateContainer within sandbox \"962441a5c3c0fd709dd7a95e03565c7f0afb64f01757b62dcfeaaff5a91e93b5\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"c04e1f1b1baccf643cc7b58f3c26d679e62d8258ba8f2811970f048370f73301\""
Feb 13 19:42:36.044919 containerd[1501]: time="2025-02-13T19:42:36.044875822Z" level=info msg="StartContainer for \"c04e1f1b1baccf643cc7b58f3c26d679e62d8258ba8f2811970f048370f73301\""
Feb 13 19:42:36.082568 systemd[1]: Started cri-containerd-c04e1f1b1baccf643cc7b58f3c26d679e62d8258ba8f2811970f048370f73301.scope - libcontainer container c04e1f1b1baccf643cc7b58f3c26d679e62d8258ba8f2811970f048370f73301.
Feb 13 19:42:36.114114 containerd[1501]: time="2025-02-13T19:42:36.114067095Z" level=info msg="StartContainer for \"c04e1f1b1baccf643cc7b58f3c26d679e62d8258ba8f2811970f048370f73301\" returns successfully"
Feb 13 19:42:36.303650 kubelet[1803]: I0213 19:42:36.303488 1803 kubelet_node_status.go:502] "Fast updating node status as it just became ready"
Feb 13 19:42:36.604053 kubelet[1803]: E0213 19:42:36.603940 1803 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 19:42:36.653390 kernel: Initializing XFRM netlink socket
Feb 13 19:42:37.025710 kubelet[1803]: E0213 19:42:37.025573 1803 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 13 19:42:37.151009 kubelet[1803]: I0213 19:42:37.150926 1803 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-5krpg" podStartSLOduration=8.69102823 podStartE2EDuration="18.150906399s" podCreationTimestamp="2025-02-13 19:42:19 +0000 UTC" firstStartedPulling="2025-02-13 19:42:22.647668792 +0000 UTC m=+4.476649858" lastFinishedPulling="2025-02-13 19:42:32.107546961 +0000 UTC m=+13.936528027" observedRunningTime="2025-02-13 19:42:37.150569667 +0000 UTC m=+18.979550733" watchObservedRunningTime="2025-02-13 19:42:37.150906399 +0000 UTC m=+18.979887465"
Feb 13 19:42:37.604545 kubelet[1803]: E0213 19:42:37.604464 1803 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 19:42:38.027413 kubelet[1803]: E0213 19:42:38.027233 1803 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 13 19:42:38.382519 systemd-networkd[1417]: cilium_host: Link UP
Feb 13 19:42:38.382794 systemd-networkd[1417]: cilium_net: Link UP
Feb 13 19:42:38.383067 systemd-networkd[1417]: cilium_net: Gained carrier
Feb 13 19:42:38.383379 systemd-networkd[1417]: cilium_host: Gained carrier
Feb 13 19:42:38.500637 systemd-networkd[1417]: cilium_vxlan: Link UP
Feb 13 19:42:38.500652 systemd-networkd[1417]: cilium_vxlan: Gained carrier
Feb 13 19:42:38.517467 systemd-networkd[1417]: cilium_net: Gained IPv6LL
Feb 13 19:42:38.591269 kubelet[1803]: E0213 19:42:38.591179 1803 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 19:42:38.604840 kubelet[1803]: E0213 19:42:38.604778 1803 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 19:42:38.723275 kernel: NET: Registered PF_ALG protocol family
Feb 13 19:42:39.028412 kubelet[1803]: E0213 19:42:39.028375 1803 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 13 19:42:39.254443 systemd-networkd[1417]: cilium_host: Gained IPv6LL
Feb 13 19:42:39.415030 systemd-networkd[1417]: lxc_health: Link UP
Feb 13 19:42:39.428804 systemd-networkd[1417]: lxc_health: Gained carrier
Feb 13 19:42:39.605123 kubelet[1803]: E0213 19:42:39.605019 1803 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 19:42:39.637426 systemd-networkd[1417]: cilium_vxlan: Gained IPv6LL
Feb 13 19:42:40.606013 kubelet[1803]: E0213 19:42:40.605938 1803 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 19:42:40.755596 kubelet[1803]: I0213 19:42:40.755555 1803 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h6mwf\" (UniqueName: \"kubernetes.io/projected/6c294a9e-9e35-49a7-8615-4bcad24f887e-kube-api-access-h6mwf\") pod \"nginx-deployment-7fcdb87857-bmtct\" (UID: \"6c294a9e-9e35-49a7-8615-4bcad24f887e\") " pod="default/nginx-deployment-7fcdb87857-bmtct"
Feb 13 19:42:40.758053 systemd[1]: Created slice kubepods-besteffort-pod6c294a9e_9e35_49a7_8615_4bcad24f887e.slice - libcontainer container kubepods-besteffort-pod6c294a9e_9e35_49a7_8615_4bcad24f887e.slice.
Feb 13 19:42:40.948127 kubelet[1803]: E0213 19:42:40.947439 1803 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 13 19:42:41.035269 kubelet[1803]: E0213 19:42:41.032748 1803 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 13 19:42:41.061339 containerd[1501]: time="2025-02-13T19:42:41.061282723Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-7fcdb87857-bmtct,Uid:6c294a9e-9e35-49a7-8615-4bcad24f887e,Namespace:default,Attempt:0,}"
Feb 13 19:42:41.118564 systemd-networkd[1417]: lxcf4c2523cea0b: Link UP
Feb 13 19:42:41.131267 kernel: eth0: renamed from tmpb03ec
Feb 13 19:42:41.140583 systemd-networkd[1417]: lxcf4c2523cea0b: Gained carrier
Feb 13 19:42:41.174406 systemd-networkd[1417]: lxc_health: Gained IPv6LL
Feb 13 19:42:41.606967 kubelet[1803]: E0213 19:42:41.606899 1803 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 19:42:42.033965 kubelet[1803]: E0213 19:42:42.033919 1803 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 13 19:42:42.325559 systemd-networkd[1417]: lxcf4c2523cea0b: Gained IPv6LL
Feb 13 19:42:42.607594 kubelet[1803]: E0213 19:42:42.607369 1803 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 19:42:43.608173 kubelet[1803]: E0213 19:42:43.608048 1803 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 19:42:44.248768 containerd[1501]: time="2025-02-13T19:42:44.248304751Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Feb 13 19:42:44.248768 containerd[1501]: time="2025-02-13T19:42:44.248449098Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Feb 13 19:42:44.248768 containerd[1501]: time="2025-02-13T19:42:44.248465369Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Feb 13 19:42:44.249339 containerd[1501]: time="2025-02-13T19:42:44.248842253Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Feb 13 19:42:44.285560 systemd[1]: Started cri-containerd-b03ec0a5a2499e9d1a4f1034e6bc298bb5213b5f5ae81505f4a8f97a19ba08e2.scope - libcontainer container b03ec0a5a2499e9d1a4f1034e6bc298bb5213b5f5ae81505f4a8f97a19ba08e2.
Feb 13 19:42:44.301157 systemd-resolved[1334]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address
Feb 13 19:42:44.330916 containerd[1501]: time="2025-02-13T19:42:44.330844182Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-7fcdb87857-bmtct,Uid:6c294a9e-9e35-49a7-8615-4bcad24f887e,Namespace:default,Attempt:0,} returns sandbox id \"b03ec0a5a2499e9d1a4f1034e6bc298bb5213b5f5ae81505f4a8f97a19ba08e2\""
Feb 13 19:42:44.332485 containerd[1501]: time="2025-02-13T19:42:44.332412075Z" level=info msg="PullImage \"ghcr.io/flatcar/nginx:latest\""
Feb 13 19:42:44.609316 kubelet[1803]: E0213 19:42:44.609127 1803 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 19:42:45.610041 kubelet[1803]: E0213 19:42:45.609969 1803 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 19:42:46.611060 kubelet[1803]: E0213 19:42:46.610949 1803 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 19:42:47.612086 kubelet[1803]: E0213 19:42:47.612034 1803 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 19:42:48.612552 kubelet[1803]: E0213 19:42:48.612465 1803 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 19:42:49.612859 kubelet[1803]: E0213 19:42:49.612813 1803 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 19:42:50.466349 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3492885747.mount: Deactivated successfully.
Feb 13 19:42:50.614461 kubelet[1803]: E0213 19:42:50.614353 1803 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 19:42:51.615557 kubelet[1803]: E0213 19:42:51.615495 1803 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 19:42:52.615876 kubelet[1803]: E0213 19:42:52.615781 1803 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 19:42:53.616910 kubelet[1803]: E0213 19:42:53.616824 1803 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 19:42:54.617115 kubelet[1803]: E0213 19:42:54.617034 1803 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 19:42:54.831286 update_engine[1489]: I20250213 19:42:54.831158 1489 update_attempter.cc:509] Updating boot flags...
Feb 13 19:42:54.936367 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 38 scanned by (udev-worker) (2943)
Feb 13 19:42:55.053289 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 38 scanned by (udev-worker) (2947)
Feb 13 19:42:55.087368 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 38 scanned by (udev-worker) (2947)
Feb 13 19:42:55.591202 containerd[1501]: time="2025-02-13T19:42:55.591092438Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/nginx:latest\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Feb 13 19:42:55.604324 containerd[1501]: time="2025-02-13T19:42:55.604177600Z" level=info msg="stop pulling image ghcr.io/flatcar/nginx:latest: active requests=0, bytes read=73054493"
Feb 13 19:42:55.617776 kubelet[1803]: E0213 19:42:55.617686 1803 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 19:42:55.662647 containerd[1501]: time="2025-02-13T19:42:55.662540505Z" level=info msg="ImageCreate event name:\"sha256:fe94eb5f0c9c8d0ca277aa8cd5940f1faf5970175bf373932babc578545deda8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Feb 13 19:42:55.717558 containerd[1501]: time="2025-02-13T19:42:55.717485203Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/nginx@sha256:d9bc3da999da9f147f1277c7b18292486847e8f39f95fcf81d914d0c22815faf\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Feb 13 19:42:55.718507 containerd[1501]: time="2025-02-13T19:42:55.718439324Z" level=info msg="Pulled image \"ghcr.io/flatcar/nginx:latest\" with image id \"sha256:fe94eb5f0c9c8d0ca277aa8cd5940f1faf5970175bf373932babc578545deda8\", repo tag \"ghcr.io/flatcar/nginx:latest\", repo digest \"ghcr.io/flatcar/nginx@sha256:d9bc3da999da9f147f1277c7b18292486847e8f39f95fcf81d914d0c22815faf\", size \"73054371\" in 11.38598637s"
Feb 13 19:42:55.718507 containerd[1501]: time="2025-02-13T19:42:55.718490330Z" level=info msg="PullImage \"ghcr.io/flatcar/nginx:latest\" returns image reference \"sha256:fe94eb5f0c9c8d0ca277aa8cd5940f1faf5970175bf373932babc578545deda8\""
Feb 13 19:42:55.720961 containerd[1501]: time="2025-02-13T19:42:55.720920852Z" level=info msg="CreateContainer within sandbox \"b03ec0a5a2499e9d1a4f1034e6bc298bb5213b5f5ae81505f4a8f97a19ba08e2\" for container &ContainerMetadata{Name:nginx,Attempt:0,}"
Feb 13 19:42:56.310759 containerd[1501]: time="2025-02-13T19:42:56.310683679Z" level=info msg="CreateContainer within sandbox \"b03ec0a5a2499e9d1a4f1034e6bc298bb5213b5f5ae81505f4a8f97a19ba08e2\" for &ContainerMetadata{Name:nginx,Attempt:0,} returns container id \"1fd365777790c4afa60b834fc7db60ed1bc027a7d2214691f7457792ae81333a\""
Feb 13 19:42:56.311320 containerd[1501]: time="2025-02-13T19:42:56.311289749Z" level=info msg="StartContainer for \"1fd365777790c4afa60b834fc7db60ed1bc027a7d2214691f7457792ae81333a\""
Feb 13 19:42:56.353397 systemd[1]: Started cri-containerd-1fd365777790c4afa60b834fc7db60ed1bc027a7d2214691f7457792ae81333a.scope - libcontainer container 1fd365777790c4afa60b834fc7db60ed1bc027a7d2214691f7457792ae81333a.
Feb 13 19:42:56.618104 kubelet[1803]: E0213 19:42:56.617904 1803 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 19:42:56.626331 containerd[1501]: time="2025-02-13T19:42:56.626139993Z" level=info msg="StartContainer for \"1fd365777790c4afa60b834fc7db60ed1bc027a7d2214691f7457792ae81333a\" returns successfully"
Feb 13 19:42:57.117976 kubelet[1803]: I0213 19:42:57.117911 1803 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="default/nginx-deployment-7fcdb87857-bmtct" podStartSLOduration=5.730212158 podStartE2EDuration="17.117897112s" podCreationTimestamp="2025-02-13 19:42:40 +0000 UTC" firstStartedPulling="2025-02-13 19:42:44.332017517 +0000 UTC m=+26.160998583" lastFinishedPulling="2025-02-13 19:42:55.719702471 +0000 UTC m=+37.548683537" observedRunningTime="2025-02-13 19:42:57.117688547 +0000 UTC m=+38.946669613" watchObservedRunningTime="2025-02-13 19:42:57.117897112 +0000 UTC m=+38.946878178"
Feb 13 19:42:57.618192 kubelet[1803]: E0213 19:42:57.618092 1803 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 19:42:58.590819 kubelet[1803]: E0213 19:42:58.590756 1803 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 19:42:58.618650 kubelet[1803]: E0213 19:42:58.618586 1803 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 19:42:59.619060 kubelet[1803]: E0213 19:42:59.618979 1803 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 19:43:00.619600 kubelet[1803]: E0213 19:43:00.619516 1803 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 19:43:01.620319 kubelet[1803]: E0213 19:43:01.620190 1803 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 19:43:02.621182 kubelet[1803]: E0213 19:43:02.621064 1803 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 19:43:03.621959 kubelet[1803]: E0213 19:43:03.621871 1803 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 19:43:04.319874 systemd[1]: Created slice kubepods-besteffort-pod592f9068_76bc_4d0c_a7db_adae78327bb1.slice - libcontainer container kubepods-besteffort-pod592f9068_76bc_4d0c_a7db_adae78327bb1.slice.
Feb 13 19:43:04.374199 kubelet[1803]: I0213 19:43:04.374141 1803 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mx6kj\" (UniqueName: \"kubernetes.io/projected/592f9068-76bc-4d0c-a7db-adae78327bb1-kube-api-access-mx6kj\") pod \"nfs-server-provisioner-0\" (UID: \"592f9068-76bc-4d0c-a7db-adae78327bb1\") " pod="default/nfs-server-provisioner-0"
Feb 13 19:43:04.374199 kubelet[1803]: I0213 19:43:04.374192 1803 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"data\" (UniqueName: \"kubernetes.io/empty-dir/592f9068-76bc-4d0c-a7db-adae78327bb1-data\") pod \"nfs-server-provisioner-0\" (UID: \"592f9068-76bc-4d0c-a7db-adae78327bb1\") " pod="default/nfs-server-provisioner-0"
Feb 13 19:43:04.622481 kubelet[1803]: E0213 19:43:04.622301 1803 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 19:43:04.622989 containerd[1501]: time="2025-02-13T19:43:04.622945134Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nfs-server-provisioner-0,Uid:592f9068-76bc-4d0c-a7db-adae78327bb1,Namespace:default,Attempt:0,}"
Feb 13 19:43:04.882499 systemd-networkd[1417]: lxc723e67215adf: Link UP
Feb 13 19:43:04.891271 kernel: eth0: renamed from tmp3feb8
Feb 13 19:43:04.898319 systemd-networkd[1417]: lxc723e67215adf: Gained carrier
Feb 13 19:43:05.173562 containerd[1501]: time="2025-02-13T19:43:05.173280945Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Feb 13 19:43:05.174013 containerd[1501]: time="2025-02-13T19:43:05.173374291Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Feb 13 19:43:05.174013 containerd[1501]: time="2025-02-13T19:43:05.173980956Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Feb 13 19:43:05.174207 containerd[1501]: time="2025-02-13T19:43:05.174143924Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Feb 13 19:43:05.203552 systemd[1]: Started cri-containerd-3feb8deaf225f8749ef2810991f7d0290907fbed5d655b89dc06d59395fddde9.scope - libcontainer container 3feb8deaf225f8749ef2810991f7d0290907fbed5d655b89dc06d59395fddde9.
Feb 13 19:43:05.221704 systemd-resolved[1334]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address
Feb 13 19:43:05.245469 containerd[1501]: time="2025-02-13T19:43:05.245422390Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nfs-server-provisioner-0,Uid:592f9068-76bc-4d0c-a7db-adae78327bb1,Namespace:default,Attempt:0,} returns sandbox id \"3feb8deaf225f8749ef2810991f7d0290907fbed5d655b89dc06d59395fddde9\""
Feb 13 19:43:05.246926 containerd[1501]: time="2025-02-13T19:43:05.246897723Z" level=info msg="PullImage \"registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8\""
Feb 13 19:43:05.623212 kubelet[1803]: E0213 19:43:05.623132 1803 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 19:43:06.623575 kubelet[1803]: E0213 19:43:06.623503 1803 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 19:43:06.965415 systemd-networkd[1417]: lxc723e67215adf: Gained IPv6LL
Feb 13 19:43:07.624506 kubelet[1803]: E0213 19:43:07.624431 1803 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 19:43:08.625462 kubelet[1803]: E0213 19:43:08.625390 1803 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 19:43:08.652348 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3668425438.mount: Deactivated successfully.
Feb 13 19:43:09.625588 kubelet[1803]: E0213 19:43:09.625513 1803 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 19:43:10.625876 kubelet[1803]: E0213 19:43:10.625794 1803 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 19:43:11.626191 kubelet[1803]: E0213 19:43:11.626134 1803 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 19:43:11.647421 containerd[1501]: time="2025-02-13T19:43:11.647321381Z" level=info msg="ImageCreate event name:\"registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Feb 13 19:43:11.648468 containerd[1501]: time="2025-02-13T19:43:11.648410211Z" level=info msg="stop pulling image registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8: active requests=0, bytes read=91039406"
Feb 13 19:43:11.650280 containerd[1501]: time="2025-02-13T19:43:11.650224367Z" level=info msg="ImageCreate event name:\"sha256:fd0b16f70b66b72bcb2f91d556fa33eba02729c44ffc5f2c16130e7f9fbed3c4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Feb 13 19:43:11.653154 containerd[1501]: time="2025-02-13T19:43:11.653118157Z" level=info msg="ImageCreate event name:\"registry.k8s.io/sig-storage/nfs-provisioner@sha256:c825f3d5e28bde099bd7a3daace28772d412c9157ad47fa752a9ad0baafc118d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Feb 13 19:43:11.654175 containerd[1501]: time="2025-02-13T19:43:11.654105957Z" level=info msg="Pulled image \"registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8\" with image id \"sha256:fd0b16f70b66b72bcb2f91d556fa33eba02729c44ffc5f2c16130e7f9fbed3c4\", repo tag \"registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8\", repo digest \"registry.k8s.io/sig-storage/nfs-provisioner@sha256:c825f3d5e28bde099bd7a3daace28772d412c9157ad47fa752a9ad0baafc118d\", size \"91036984\" in 6.407174921s"
Feb 13 19:43:11.654175 containerd[1501]: time="2025-02-13T19:43:11.654164957Z" level=info msg="PullImage \"registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8\" returns image reference \"sha256:fd0b16f70b66b72bcb2f91d556fa33eba02729c44ffc5f2c16130e7f9fbed3c4\""
Feb 13 19:43:11.657017 containerd[1501]: time="2025-02-13T19:43:11.656970040Z" level=info msg="CreateContainer within sandbox \"3feb8deaf225f8749ef2810991f7d0290907fbed5d655b89dc06d59395fddde9\" for container &ContainerMetadata{Name:nfs-server-provisioner,Attempt:0,}"
Feb 13 19:43:11.673552 containerd[1501]: time="2025-02-13T19:43:11.673493495Z" level=info msg="CreateContainer within sandbox \"3feb8deaf225f8749ef2810991f7d0290907fbed5d655b89dc06d59395fddde9\" for &ContainerMetadata{Name:nfs-server-provisioner,Attempt:0,} returns container id \"d815ae2afec6fd7ab4198cc7063ea76a765ec1236c834a3cdf8280a86d98b884\""
Feb 13 19:43:11.674102 containerd[1501]: time="2025-02-13T19:43:11.674060483Z" level=info msg="StartContainer for \"d815ae2afec6fd7ab4198cc7063ea76a765ec1236c834a3cdf8280a86d98b884\""
Feb 13 19:43:11.755496 systemd[1]: Started cri-containerd-d815ae2afec6fd7ab4198cc7063ea76a765ec1236c834a3cdf8280a86d98b884.scope - libcontainer container d815ae2afec6fd7ab4198cc7063ea76a765ec1236c834a3cdf8280a86d98b884.
Feb 13 19:43:11.912335 containerd[1501]: time="2025-02-13T19:43:11.912181453Z" level=info msg="StartContainer for \"d815ae2afec6fd7ab4198cc7063ea76a765ec1236c834a3cdf8280a86d98b884\" returns successfully"
Feb 13 19:43:12.123187 kubelet[1803]: I0213 19:43:12.123099 1803 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="default/nfs-server-provisioner-0" podStartSLOduration=1.714236129 podStartE2EDuration="8.123080575s" podCreationTimestamp="2025-02-13 19:43:04 +0000 UTC" firstStartedPulling="2025-02-13 19:43:05.246469155 +0000 UTC m=+47.075450221" lastFinishedPulling="2025-02-13 19:43:11.655313601 +0000 UTC m=+53.484294667" observedRunningTime="2025-02-13 19:43:12.122912409 +0000 UTC m=+53.951893475" watchObservedRunningTime="2025-02-13 19:43:12.123080575 +0000 UTC m=+53.952061641"
Feb 13 19:43:12.627358 kubelet[1803]: E0213 19:43:12.627209 1803 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 19:43:13.628352 kubelet[1803]: E0213 19:43:13.628228 1803 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 19:43:14.629484 kubelet[1803]: E0213 19:43:14.629394 1803 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 19:43:15.630211 kubelet[1803]: E0213 19:43:15.630126 1803 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 19:43:16.631181 kubelet[1803]: E0213 19:43:16.631064 1803 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 19:43:17.631792 kubelet[1803]: E0213 19:43:17.631719 1803 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 19:43:18.591347 kubelet[1803]: E0213 19:43:18.591221 1803 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 19:43:18.632062 kubelet[1803]: E0213 19:43:18.631976 1803 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 19:43:19.632968 kubelet[1803]: E0213 19:43:19.632877 1803 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 19:43:20.633794 kubelet[1803]: E0213 19:43:20.633680 1803 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 19:43:21.634887 kubelet[1803]: E0213 19:43:21.634747 1803 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 19:43:21.744056 systemd[1]: Created slice kubepods-besteffort-pod2e758250_31ef_4033_a3b9_63b987f03ccc.slice - libcontainer container kubepods-besteffort-pod2e758250_31ef_4033_a3b9_63b987f03ccc.slice.
Feb 13 19:43:21.872107 kubelet[1803]: I0213 19:43:21.872010 1803 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-cdc3efed-dc57-48d2-a9a2-8478526eaf8f\" (UniqueName: \"kubernetes.io/nfs/2e758250-31ef-4033-a3b9-63b987f03ccc-pvc-cdc3efed-dc57-48d2-a9a2-8478526eaf8f\") pod \"test-pod-1\" (UID: \"2e758250-31ef-4033-a3b9-63b987f03ccc\") " pod="default/test-pod-1"
Feb 13 19:43:21.872107 kubelet[1803]: I0213 19:43:21.872083 1803 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hq7cd\" (UniqueName: \"kubernetes.io/projected/2e758250-31ef-4033-a3b9-63b987f03ccc-kube-api-access-hq7cd\") pod \"test-pod-1\" (UID: \"2e758250-31ef-4033-a3b9-63b987f03ccc\") " pod="default/test-pod-1"
Feb 13 19:43:22.009304 kernel: FS-Cache: Loaded
Feb 13 19:43:22.098853 kernel: RPC: Registered named UNIX socket transport module.
Feb 13 19:43:22.099085 kernel: RPC: Registered udp transport module.
Feb 13 19:43:22.099114 kernel: RPC: Registered tcp transport module.
Feb 13 19:43:22.099138 kernel: RPC: Registered tcp-with-tls transport module.
Feb 13 19:43:22.099638 kernel: RPC: Registered tcp NFSv4.1 backchannel transport module.
Feb 13 19:43:22.444667 kernel: NFS: Registering the id_resolver key type
Feb 13 19:43:22.444865 kernel: Key type id_resolver registered
Feb 13 19:43:22.444889 kernel: Key type id_legacy registered
Feb 13 19:43:22.483211 nfsidmap[3212]: nss_getpwnam: name 'root@nfs-server-provisioner.default.svc.cluster.local' does not map into domain 'localdomain'
Feb 13 19:43:22.490459 nfsidmap[3215]: nss_name_to_gid: name 'root@nfs-server-provisioner.default.svc.cluster.local' does not map into domain 'localdomain'
Feb 13 19:43:22.635992 kubelet[1803]: E0213 19:43:22.635930 1803 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 19:43:22.648140 containerd[1501]: time="2025-02-13T19:43:22.648060060Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:test-pod-1,Uid:2e758250-31ef-4033-a3b9-63b987f03ccc,Namespace:default,Attempt:0,}"
Feb 13 19:43:22.679657 systemd-networkd[1417]: lxc2a8430882d14: Link UP
Feb 13 19:43:22.693274 kernel: eth0: renamed from tmp8591c
Feb 13 19:43:22.704091 systemd-networkd[1417]: lxc2a8430882d14: Gained carrier
Feb 13 19:43:22.951164 containerd[1501]: time="2025-02-13T19:43:22.950989546Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Feb 13 19:43:22.951373 containerd[1501]: time="2025-02-13T19:43:22.951174082Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Feb 13 19:43:22.951373 containerd[1501]: time="2025-02-13T19:43:22.951211983Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Feb 13 19:43:22.952177 containerd[1501]: time="2025-02-13T19:43:22.952083932Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Feb 13 19:43:22.972579 systemd[1]: Started cri-containerd-8591c040e9c7fe041c2398f8508dc6524751b9f8ef634f37096ee22706f0483c.scope - libcontainer container 8591c040e9c7fe041c2398f8508dc6524751b9f8ef634f37096ee22706f0483c.
Feb 13 19:43:22.991880 systemd-resolved[1334]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address
Feb 13 19:43:23.019183 containerd[1501]: time="2025-02-13T19:43:23.019125226Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:test-pod-1,Uid:2e758250-31ef-4033-a3b9-63b987f03ccc,Namespace:default,Attempt:0,} returns sandbox id \"8591c040e9c7fe041c2398f8508dc6524751b9f8ef634f37096ee22706f0483c\""
Feb 13 19:43:23.020865 containerd[1501]: time="2025-02-13T19:43:23.020548921Z" level=info msg="PullImage \"ghcr.io/flatcar/nginx:latest\""
Feb 13 19:43:23.637172 kubelet[1803]: E0213 19:43:23.636937 1803 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 19:43:23.637800 containerd[1501]: time="2025-02-13T19:43:23.637626000Z" level=info msg="ImageUpdate event name:\"ghcr.io/flatcar/nginx:latest\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Feb 13 19:43:23.663777 containerd[1501]: time="2025-02-13T19:43:23.663609876Z" level=info msg="stop pulling image ghcr.io/flatcar/nginx:latest: active requests=0, bytes read=61"
Feb 13 19:43:23.667113 containerd[1501]: time="2025-02-13T19:43:23.667026685Z" level=info msg="Pulled image \"ghcr.io/flatcar/nginx:latest\" with image id \"sha256:fe94eb5f0c9c8d0ca277aa8cd5940f1faf5970175bf373932babc578545deda8\", repo tag \"ghcr.io/flatcar/nginx:latest\", repo digest \"ghcr.io/flatcar/nginx@sha256:d9bc3da999da9f147f1277c7b18292486847e8f39f95fcf81d914d0c22815faf\", size \"73054371\" in 646.441836ms"
Feb 13 19:43:23.667113 containerd[1501]: time="2025-02-13T19:43:23.667080005Z" level=info msg="PullImage \"ghcr.io/flatcar/nginx:latest\" returns image reference \"sha256:fe94eb5f0c9c8d0ca277aa8cd5940f1faf5970175bf373932babc578545deda8\""
Feb 13 19:43:23.669806 containerd[1501]: time="2025-02-13T19:43:23.669745693Z" level=info msg="CreateContainer within sandbox \"8591c040e9c7fe041c2398f8508dc6524751b9f8ef634f37096ee22706f0483c\" for container &ContainerMetadata{Name:test,Attempt:0,}"
Feb 13 19:43:23.874199 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1372695728.mount: Deactivated successfully.
Feb 13 19:43:23.887412 containerd[1501]: time="2025-02-13T19:43:23.887168091Z" level=info msg="CreateContainer within sandbox \"8591c040e9c7fe041c2398f8508dc6524751b9f8ef634f37096ee22706f0483c\" for &ContainerMetadata{Name:test,Attempt:0,} returns container id \"f90be1c8f9bde0d5316df2bbe907896d798d1ab86601382e3c387211c0fa53a0\""
Feb 13 19:43:23.888111 containerd[1501]: time="2025-02-13T19:43:23.887901869Z" level=info msg="StartContainer for \"f90be1c8f9bde0d5316df2bbe907896d798d1ab86601382e3c387211c0fa53a0\""
Feb 13 19:43:23.932583 systemd[1]: Started cri-containerd-f90be1c8f9bde0d5316df2bbe907896d798d1ab86601382e3c387211c0fa53a0.scope - libcontainer container f90be1c8f9bde0d5316df2bbe907896d798d1ab86601382e3c387211c0fa53a0.
Feb 13 19:43:23.968995 containerd[1501]: time="2025-02-13T19:43:23.968910462Z" level=info msg="StartContainer for \"f90be1c8f9bde0d5316df2bbe907896d798d1ab86601382e3c387211c0fa53a0\" returns successfully"
Feb 13 19:43:24.153538 kubelet[1803]: I0213 19:43:24.153307 1803 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="default/test-pod-1" podStartSLOduration=19.505347629 podStartE2EDuration="20.153283543s" podCreationTimestamp="2025-02-13 19:43:04 +0000 UTC" firstStartedPulling="2025-02-13 19:43:23.020088936 +0000 UTC m=+64.849070003" lastFinishedPulling="2025-02-13 19:43:23.668024851 +0000 UTC m=+65.497005917" observedRunningTime="2025-02-13 19:43:24.153156213 +0000 UTC m=+65.982137290" watchObservedRunningTime="2025-02-13 19:43:24.153283543 +0000 UTC m=+65.982264609"
Feb 13 19:43:24.437581 systemd-networkd[1417]: lxc2a8430882d14: Gained IPv6LL
Feb 13 19:43:24.637921 kubelet[1803]: E0213 19:43:24.637840 1803 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 19:43:25.638989 kubelet[1803]: E0213 19:43:25.638710 1803 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 19:43:26.639674 kubelet[1803]: E0213 19:43:26.639596 1803 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 19:43:27.354295 systemd[1]: run-containerd-runc-k8s.io-c04e1f1b1baccf643cc7b58f3c26d679e62d8258ba8f2811970f048370f73301-runc.MEH968.mount: Deactivated successfully.
Feb 13 19:43:27.370579 containerd[1501]: time="2025-02-13T19:43:27.370513651Z" level=error msg="failed to reload cni configuration after receiving fs change event(REMOVE \"/etc/cni/net.d/05-cilium.conf\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
Feb 13 19:43:27.378897 containerd[1501]: time="2025-02-13T19:43:27.378856949Z" level=info msg="StopContainer for \"c04e1f1b1baccf643cc7b58f3c26d679e62d8258ba8f2811970f048370f73301\" with timeout 2 (s)"
Feb 13 19:43:27.379132 containerd[1501]: time="2025-02-13T19:43:27.379110024Z" level=info msg="Stop container \"c04e1f1b1baccf643cc7b58f3c26d679e62d8258ba8f2811970f048370f73301\" with signal terminated"
Feb 13 19:43:27.386809 systemd-networkd[1417]: lxc_health: Link DOWN
Feb 13 19:43:27.386825 systemd-networkd[1417]: lxc_health: Lost carrier
Feb 13 19:43:27.410860 systemd[1]: cri-containerd-c04e1f1b1baccf643cc7b58f3c26d679e62d8258ba8f2811970f048370f73301.scope: Deactivated successfully.
Feb 13 19:43:27.411341 systemd[1]: cri-containerd-c04e1f1b1baccf643cc7b58f3c26d679e62d8258ba8f2811970f048370f73301.scope: Consumed 7.937s CPU time.
Feb 13 19:43:27.435182 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-c04e1f1b1baccf643cc7b58f3c26d679e62d8258ba8f2811970f048370f73301-rootfs.mount: Deactivated successfully.
Feb 13 19:43:27.490464 containerd[1501]: time="2025-02-13T19:43:27.490358526Z" level=info msg="shim disconnected" id=c04e1f1b1baccf643cc7b58f3c26d679e62d8258ba8f2811970f048370f73301 namespace=k8s.io
Feb 13 19:43:27.490464 containerd[1501]: time="2025-02-13T19:43:27.490437144Z" level=warning msg="cleaning up after shim disconnected" id=c04e1f1b1baccf643cc7b58f3c26d679e62d8258ba8f2811970f048370f73301 namespace=k8s.io
Feb 13 19:43:27.490464 containerd[1501]: time="2025-02-13T19:43:27.490448586Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Feb 13 19:43:27.509455 containerd[1501]: time="2025-02-13T19:43:27.509393732Z" level=info msg="StopContainer for \"c04e1f1b1baccf643cc7b58f3c26d679e62d8258ba8f2811970f048370f73301\" returns successfully"
Feb 13 19:43:27.510020 containerd[1501]: time="2025-02-13T19:43:27.509988027Z" level=info msg="StopPodSandbox for \"962441a5c3c0fd709dd7a95e03565c7f0afb64f01757b62dcfeaaff5a91e93b5\""
Feb 13 19:43:27.510080 containerd[1501]: time="2025-02-13T19:43:27.510031940Z" level=info msg="Container to stop \"8f75bf760536c8082d7737724e7750ec2da2afc2e0d3bac45754600caec569d0\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Feb 13 19:43:27.510080 containerd[1501]: time="2025-02-13T19:43:27.510064491Z" level=info msg="Container to stop \"c04e1f1b1baccf643cc7b58f3c26d679e62d8258ba8f2811970f048370f73301\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Feb 13 19:43:27.510080 containerd[1501]: time="2025-02-13T19:43:27.510072226Z" level=info msg="Container to stop \"058e87bf6297eb273bebbd342fe1902aeff4a0702c36477ab135c2c767401dfa\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Feb 13 19:43:27.510155 containerd[1501]: time="2025-02-13T19:43:27.510080091Z" level=info msg="Container to stop \"34eeb12ab7b127789acf72cef257dc130271b6b1f7eb0da50b33208579e576af\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Feb 13 19:43:27.510155 containerd[1501]: time="2025-02-13T19:43:27.510087985Z" level=info msg="Container to stop \"ee96a3aa8b77c6a7e5a93e5c825eb582865422bf84754e524963baf6f9f5dd95\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Feb 13 19:43:27.512123 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-962441a5c3c0fd709dd7a95e03565c7f0afb64f01757b62dcfeaaff5a91e93b5-shm.mount: Deactivated successfully.
Feb 13 19:43:27.517390 systemd[1]: cri-containerd-962441a5c3c0fd709dd7a95e03565c7f0afb64f01757b62dcfeaaff5a91e93b5.scope: Deactivated successfully.
Feb 13 19:43:27.539955 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-962441a5c3c0fd709dd7a95e03565c7f0afb64f01757b62dcfeaaff5a91e93b5-rootfs.mount: Deactivated successfully.
Feb 13 19:43:27.547387 containerd[1501]: time="2025-02-13T19:43:27.547321625Z" level=info msg="shim disconnected" id=962441a5c3c0fd709dd7a95e03565c7f0afb64f01757b62dcfeaaff5a91e93b5 namespace=k8s.io
Feb 13 19:43:27.547387 containerd[1501]: time="2025-02-13T19:43:27.547378512Z" level=warning msg="cleaning up after shim disconnected" id=962441a5c3c0fd709dd7a95e03565c7f0afb64f01757b62dcfeaaff5a91e93b5 namespace=k8s.io
Feb 13 19:43:27.547387 containerd[1501]: time="2025-02-13T19:43:27.547386828Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Feb 13 19:43:27.563874 containerd[1501]: time="2025-02-13T19:43:27.562900841Z" level=info msg="TearDown network for sandbox \"962441a5c3c0fd709dd7a95e03565c7f0afb64f01757b62dcfeaaff5a91e93b5\" successfully"
Feb 13 19:43:27.563874 containerd[1501]: time="2025-02-13T19:43:27.562947328Z" level=info msg="StopPodSandbox for \"962441a5c3c0fd709dd7a95e03565c7f0afb64f01757b62dcfeaaff5a91e93b5\" returns successfully"
Feb 13 19:43:27.639914 kubelet[1803]: E0213 19:43:27.639746 1803 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 19:43:27.711117 kubelet[1803]: I0213 19:43:27.711047 1803 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/5d101c34-7c09-44b3-b835-f33a284d43df-cilium-run\") pod \"5d101c34-7c09-44b3-b835-f33a284d43df\" (UID: \"5d101c34-7c09-44b3-b835-f33a284d43df\") "
Feb 13 19:43:27.711117 kubelet[1803]: I0213 19:43:27.711117 1803 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/5d101c34-7c09-44b3-b835-f33a284d43df-cilium-config-path\") pod \"5d101c34-7c09-44b3-b835-f33a284d43df\" (UID: \"5d101c34-7c09-44b3-b835-f33a284d43df\") "
Feb 13 19:43:27.711117 kubelet[1803]: I0213 19:43:27.711137 1803 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/5d101c34-7c09-44b3-b835-f33a284d43df-etc-cni-netd\") pod \"5d101c34-7c09-44b3-b835-f33a284d43df\" (UID: \"5d101c34-7c09-44b3-b835-f33a284d43df\") "
Feb 13 19:43:27.711458 kubelet[1803]: I0213 19:43:27.711154 1803 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qpc62\" (UniqueName: \"kubernetes.io/projected/5d101c34-7c09-44b3-b835-f33a284d43df-kube-api-access-qpc62\") pod \"5d101c34-7c09-44b3-b835-f33a284d43df\" (UID: \"5d101c34-7c09-44b3-b835-f33a284d43df\") "
Feb 13 19:43:27.711458 kubelet[1803]: I0213 19:43:27.711168 1803 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/5d101c34-7c09-44b3-b835-f33a284d43df-host-proc-sys-kernel\") pod \"5d101c34-7c09-44b3-b835-f33a284d43df\" (UID: \"5d101c34-7c09-44b3-b835-f33a284d43df\") "
Feb 13 19:43:27.711458 kubelet[1803]: I0213 19:43:27.711189 1803 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/5d101c34-7c09-44b3-b835-f33a284d43df-hubble-tls\") pod \"5d101c34-7c09-44b3-b835-f33a284d43df\" (UID: \"5d101c34-7c09-44b3-b835-f33a284d43df\") "
Feb 13 19:43:27.711458 kubelet[1803]: I0213 19:43:27.711186 1803 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/5d101c34-7c09-44b3-b835-f33a284d43df-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "5d101c34-7c09-44b3-b835-f33a284d43df" (UID: "5d101c34-7c09-44b3-b835-f33a284d43df"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Feb 13 19:43:27.711458 kubelet[1803]: I0213 19:43:27.711211 1803 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/5d101c34-7c09-44b3-b835-f33a284d43df-bpf-maps\") pod \"5d101c34-7c09-44b3-b835-f33a284d43df\" (UID: \"5d101c34-7c09-44b3-b835-f33a284d43df\") "
Feb 13 19:43:27.711458 kubelet[1803]: I0213 19:43:27.711304 1803 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/5d101c34-7c09-44b3-b835-f33a284d43df-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "5d101c34-7c09-44b3-b835-f33a284d43df" (UID: "5d101c34-7c09-44b3-b835-f33a284d43df"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Feb 13 19:43:27.711672 kubelet[1803]: I0213 19:43:27.711325 1803 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/5d101c34-7c09-44b3-b835-f33a284d43df-cni-path\") pod \"5d101c34-7c09-44b3-b835-f33a284d43df\" (UID: \"5d101c34-7c09-44b3-b835-f33a284d43df\") "
Feb 13 19:43:27.711672 kubelet[1803]: I0213 19:43:27.711349 1803 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/5d101c34-7c09-44b3-b835-f33a284d43df-host-proc-sys-net\") pod \"5d101c34-7c09-44b3-b835-f33a284d43df\" (UID: \"5d101c34-7c09-44b3-b835-f33a284d43df\") "
Feb 13 19:43:27.711672 kubelet[1803]: I0213 19:43:27.711364 1803 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/5d101c34-7c09-44b3-b835-f33a284d43df-cilium-cgroup\") pod \"5d101c34-7c09-44b3-b835-f33a284d43df\" (UID: \"5d101c34-7c09-44b3-b835-f33a284d43df\") "
Feb 13 19:43:27.711672 kubelet[1803]: I0213 19:43:27.711385 1803 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/5d101c34-7c09-44b3-b835-f33a284d43df-clustermesh-secrets\") pod \"5d101c34-7c09-44b3-b835-f33a284d43df\" (UID: \"5d101c34-7c09-44b3-b835-f33a284d43df\") "
Feb 13 19:43:27.711672 kubelet[1803]: I0213 19:43:27.711398 1803 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/5d101c34-7c09-44b3-b835-f33a284d43df-lib-modules\") pod \"5d101c34-7c09-44b3-b835-f33a284d43df\" (UID: \"5d101c34-7c09-44b3-b835-f33a284d43df\") "
Feb 13 19:43:27.711672 kubelet[1803]: I0213 19:43:27.711412 1803 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/5d101c34-7c09-44b3-b835-f33a284d43df-hostproc\") pod \"5d101c34-7c09-44b3-b835-f33a284d43df\" (UID: \"5d101c34-7c09-44b3-b835-f33a284d43df\") "
Feb 13 19:43:27.711826 kubelet[1803]: I0213 19:43:27.711430 1803 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/5d101c34-7c09-44b3-b835-f33a284d43df-xtables-lock\") pod \"5d101c34-7c09-44b3-b835-f33a284d43df\" (UID: \"5d101c34-7c09-44b3-b835-f33a284d43df\") "
Feb 13 19:43:27.711826 kubelet[1803]: I0213 19:43:27.711465 1803 reconciler_common.go:299] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/5d101c34-7c09-44b3-b835-f33a284d43df-cilium-run\") on node \"10.0.0.27\" DevicePath \"\""
Feb 13 19:43:27.711826 kubelet[1803]: I0213 19:43:27.711476 1803 reconciler_common.go:299] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/5d101c34-7c09-44b3-b835-f33a284d43df-bpf-maps\") on node \"10.0.0.27\" DevicePath \"\""
Feb 13 19:43:27.711826 kubelet[1803]: I0213 19:43:27.711468 1803 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/5d101c34-7c09-44b3-b835-f33a284d43df-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "5d101c34-7c09-44b3-b835-f33a284d43df" (UID: "5d101c34-7c09-44b3-b835-f33a284d43df"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Feb 13 19:43:27.711826 kubelet[1803]: I0213 19:43:27.711510 1803 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/5d101c34-7c09-44b3-b835-f33a284d43df-cni-path" (OuterVolumeSpecName: "cni-path") pod "5d101c34-7c09-44b3-b835-f33a284d43df" (UID: "5d101c34-7c09-44b3-b835-f33a284d43df"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Feb 13 19:43:27.711826 kubelet[1803]: I0213 19:43:27.711497 1803 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/5d101c34-7c09-44b3-b835-f33a284d43df-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "5d101c34-7c09-44b3-b835-f33a284d43df" (UID: "5d101c34-7c09-44b3-b835-f33a284d43df"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Feb 13 19:43:27.712050 kubelet[1803]: I0213 19:43:27.711532 1803 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/5d101c34-7c09-44b3-b835-f33a284d43df-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "5d101c34-7c09-44b3-b835-f33a284d43df" (UID: "5d101c34-7c09-44b3-b835-f33a284d43df"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Feb 13 19:43:27.712050 kubelet[1803]: I0213 19:43:27.711549 1803 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/5d101c34-7c09-44b3-b835-f33a284d43df-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "5d101c34-7c09-44b3-b835-f33a284d43df" (UID: "5d101c34-7c09-44b3-b835-f33a284d43df"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Feb 13 19:43:27.712050 kubelet[1803]: I0213 19:43:27.711574 1803 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/5d101c34-7c09-44b3-b835-f33a284d43df-hostproc" (OuterVolumeSpecName: "hostproc") pod "5d101c34-7c09-44b3-b835-f33a284d43df" (UID: "5d101c34-7c09-44b3-b835-f33a284d43df"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Feb 13 19:43:27.712050 kubelet[1803]: I0213 19:43:27.711590 1803 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/5d101c34-7c09-44b3-b835-f33a284d43df-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "5d101c34-7c09-44b3-b835-f33a284d43df" (UID: "5d101c34-7c09-44b3-b835-f33a284d43df"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Feb 13 19:43:27.715129 kubelet[1803]: I0213 19:43:27.712901 1803 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/5d101c34-7c09-44b3-b835-f33a284d43df-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "5d101c34-7c09-44b3-b835-f33a284d43df" (UID: "5d101c34-7c09-44b3-b835-f33a284d43df"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Feb 13 19:43:27.715129 kubelet[1803]: I0213 19:43:27.715050 1803 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5d101c34-7c09-44b3-b835-f33a284d43df-kube-api-access-qpc62" (OuterVolumeSpecName: "kube-api-access-qpc62") pod "5d101c34-7c09-44b3-b835-f33a284d43df" (UID: "5d101c34-7c09-44b3-b835-f33a284d43df"). InnerVolumeSpecName "kube-api-access-qpc62". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Feb 13 19:43:27.715129 kubelet[1803]: I0213 19:43:27.715050 1803 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5d101c34-7c09-44b3-b835-f33a284d43df-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "5d101c34-7c09-44b3-b835-f33a284d43df" (UID: "5d101c34-7c09-44b3-b835-f33a284d43df"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGIDValue ""
Feb 13 19:43:27.715575 kubelet[1803]: I0213 19:43:27.715533 1803 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5d101c34-7c09-44b3-b835-f33a284d43df-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "5d101c34-7c09-44b3-b835-f33a284d43df" (UID: "5d101c34-7c09-44b3-b835-f33a284d43df"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGIDValue ""
Feb 13 19:43:27.716476 kubelet[1803]: I0213 19:43:27.716428 1803 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5d101c34-7c09-44b3-b835-f33a284d43df-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "5d101c34-7c09-44b3-b835-f33a284d43df" (UID: "5d101c34-7c09-44b3-b835-f33a284d43df"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Feb 13 19:43:27.812114 kubelet[1803]: I0213 19:43:27.812041 1803 reconciler_common.go:299] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/5d101c34-7c09-44b3-b835-f33a284d43df-xtables-lock\") on node \"10.0.0.27\" DevicePath \"\""
Feb 13 19:43:27.812114 kubelet[1803]: I0213 19:43:27.812101 1803 reconciler_common.go:299] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/5d101c34-7c09-44b3-b835-f33a284d43df-cilium-config-path\") on node \"10.0.0.27\" DevicePath \"\""
Feb 13 19:43:27.812114 kubelet[1803]: I0213 19:43:27.812116 1803 reconciler_common.go:299] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/5d101c34-7c09-44b3-b835-f33a284d43df-etc-cni-netd\") on node \"10.0.0.27\" DevicePath \"\""
Feb 13 19:43:27.812114 kubelet[1803]: I0213 19:43:27.812128 1803 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-qpc62\" (UniqueName: \"kubernetes.io/projected/5d101c34-7c09-44b3-b835-f33a284d43df-kube-api-access-qpc62\") on node \"10.0.0.27\" DevicePath \"\""
Feb 13 19:43:27.812114 kubelet[1803]: I0213 19:43:27.812155 1803 reconciler_common.go:299] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/5d101c34-7c09-44b3-b835-f33a284d43df-host-proc-sys-kernel\") on node \"10.0.0.27\" DevicePath \"\""
Feb 13 19:43:27.812445 kubelet[1803]: I0213 19:43:27.812167 1803 reconciler_common.go:299] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/5d101c34-7c09-44b3-b835-f33a284d43df-hubble-tls\") on node \"10.0.0.27\" DevicePath \"\""
Feb 13 19:43:27.812445 kubelet[1803]: I0213 19:43:27.812179 1803 reconciler_common.go:299] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/5d101c34-7c09-44b3-b835-f33a284d43df-cni-path\") on node \"10.0.0.27\" DevicePath \"\""
Feb 13 19:43:27.812445 kubelet[1803]: I0213 19:43:27.812191 1803 reconciler_common.go:299] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/5d101c34-7c09-44b3-b835-f33a284d43df-host-proc-sys-net\") on node \"10.0.0.27\" DevicePath \"\""
Feb 13 19:43:27.812445 kubelet[1803]: I0213 19:43:27.812202 1803 reconciler_common.go:299] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/5d101c34-7c09-44b3-b835-f33a284d43df-cilium-cgroup\") on node \"10.0.0.27\" DevicePath \"\""
Feb 13 19:43:27.812445 kubelet[1803]: I0213 19:43:27.812213 1803 reconciler_common.go:299] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/5d101c34-7c09-44b3-b835-f33a284d43df-clustermesh-secrets\") on node \"10.0.0.27\" DevicePath \"\""
Feb 13 19:43:27.812445 kubelet[1803]: I0213 19:43:27.812224 1803 reconciler_common.go:299] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/5d101c34-7c09-44b3-b835-f33a284d43df-lib-modules\") on node \"10.0.0.27\" DevicePath \"\""
Feb 13 19:43:27.812445 kubelet[1803]: I0213 19:43:27.812277 1803 reconciler_common.go:299] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/5d101c34-7c09-44b3-b835-f33a284d43df-hostproc\") on node \"10.0.0.27\" DevicePath \"\""
Feb 13 19:43:28.150506 kubelet[1803]: I0213 19:43:28.150465 1803 scope.go:117] "RemoveContainer" containerID="c04e1f1b1baccf643cc7b58f3c26d679e62d8258ba8f2811970f048370f73301"
Feb 13 19:43:28.151717 containerd[1501]: time="2025-02-13T19:43:28.151675382Z" level=info msg="RemoveContainer for \"c04e1f1b1baccf643cc7b58f3c26d679e62d8258ba8f2811970f048370f73301\""
Feb 13 19:43:28.157098 systemd[1]: Removed slice kubepods-burstable-pod5d101c34_7c09_44b3_b835_f33a284d43df.slice - libcontainer container kubepods-burstable-pod5d101c34_7c09_44b3_b835_f33a284d43df.slice.
Feb 13 19:43:28.157217 systemd[1]: kubepods-burstable-pod5d101c34_7c09_44b3_b835_f33a284d43df.slice: Consumed 8.052s CPU time.
Feb 13 19:43:28.158391 containerd[1501]: time="2025-02-13T19:43:28.158355145Z" level=info msg="RemoveContainer for \"c04e1f1b1baccf643cc7b58f3c26d679e62d8258ba8f2811970f048370f73301\" returns successfully"
Feb 13 19:43:28.158712 kubelet[1803]: I0213 19:43:28.158662 1803 scope.go:117] "RemoveContainer" containerID="ee96a3aa8b77c6a7e5a93e5c825eb582865422bf84754e524963baf6f9f5dd95"
Feb 13 19:43:28.159955 containerd[1501]: time="2025-02-13T19:43:28.159912569Z" level=info msg="RemoveContainer for \"ee96a3aa8b77c6a7e5a93e5c825eb582865422bf84754e524963baf6f9f5dd95\""
Feb 13 19:43:28.163633 containerd[1501]: time="2025-02-13T19:43:28.163596366Z" level=info msg="RemoveContainer for \"ee96a3aa8b77c6a7e5a93e5c825eb582865422bf84754e524963baf6f9f5dd95\" returns successfully"
Feb 13 19:43:28.163829 kubelet[1803]: I0213 19:43:28.163782 1803 scope.go:117] "RemoveContainer" containerID="34eeb12ab7b127789acf72cef257dc130271b6b1f7eb0da50b33208579e576af"
Feb 13 19:43:28.165348 containerd[1501]: time="2025-02-13T19:43:28.165062249Z" level=info msg="RemoveContainer for \"34eeb12ab7b127789acf72cef257dc130271b6b1f7eb0da50b33208579e576af\""
Feb 13 19:43:28.169573 containerd[1501]: time="2025-02-13T19:43:28.169452602Z" level=info msg="RemoveContainer for \"34eeb12ab7b127789acf72cef257dc130271b6b1f7eb0da50b33208579e576af\" returns successfully"
Feb 13 19:43:28.169803 kubelet[1803]: I0213 19:43:28.169741 1803 scope.go:117] "RemoveContainer" containerID="8f75bf760536c8082d7737724e7750ec2da2afc2e0d3bac45754600caec569d0"
Feb 13 19:43:28.172070 containerd[1501]: time="2025-02-13T19:43:28.172024422Z" level=info msg="RemoveContainer for \"8f75bf760536c8082d7737724e7750ec2da2afc2e0d3bac45754600caec569d0\""
Feb 13 19:43:28.176101 containerd[1501]: time="2025-02-13T19:43:28.176068435Z" level=info msg="RemoveContainer for \"8f75bf760536c8082d7737724e7750ec2da2afc2e0d3bac45754600caec569d0\" returns successfully"
Feb 13 19:43:28.176413 kubelet[1803]: I0213 19:43:28.176354 1803 scope.go:117] "RemoveContainer" containerID="058e87bf6297eb273bebbd342fe1902aeff4a0702c36477ab135c2c767401dfa"
Feb 13 19:43:28.177776 containerd[1501]: time="2025-02-13T19:43:28.177723202Z" level=info msg="RemoveContainer for \"058e87bf6297eb273bebbd342fe1902aeff4a0702c36477ab135c2c767401dfa\""
Feb 13 19:43:28.181804 containerd[1501]: time="2025-02-13T19:43:28.181752468Z" level=info msg="RemoveContainer for \"058e87bf6297eb273bebbd342fe1902aeff4a0702c36477ab135c2c767401dfa\" returns successfully"
Feb 13 19:43:28.182049 kubelet[1803]: I0213 19:43:28.182014 1803 scope.go:117] "RemoveContainer" containerID="c04e1f1b1baccf643cc7b58f3c26d679e62d8258ba8f2811970f048370f73301"
Feb 13 19:43:28.182389 containerd[1501]: time="2025-02-13T19:43:28.182320795Z" level=error msg="ContainerStatus for \"c04e1f1b1baccf643cc7b58f3c26d679e62d8258ba8f2811970f048370f73301\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"c04e1f1b1baccf643cc7b58f3c26d679e62d8258ba8f2811970f048370f73301\": not found"
Feb 13 19:43:28.182577 kubelet[1803]: E0213
19:43:28.182525 1803 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"c04e1f1b1baccf643cc7b58f3c26d679e62d8258ba8f2811970f048370f73301\": not found" containerID="c04e1f1b1baccf643cc7b58f3c26d679e62d8258ba8f2811970f048370f73301" Feb 13 19:43:28.182655 kubelet[1803]: I0213 19:43:28.182587 1803 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"c04e1f1b1baccf643cc7b58f3c26d679e62d8258ba8f2811970f048370f73301"} err="failed to get container status \"c04e1f1b1baccf643cc7b58f3c26d679e62d8258ba8f2811970f048370f73301\": rpc error: code = NotFound desc = an error occurred when try to find container \"c04e1f1b1baccf643cc7b58f3c26d679e62d8258ba8f2811970f048370f73301\": not found" Feb 13 19:43:28.182655 kubelet[1803]: I0213 19:43:28.182653 1803 scope.go:117] "RemoveContainer" containerID="ee96a3aa8b77c6a7e5a93e5c825eb582865422bf84754e524963baf6f9f5dd95" Feb 13 19:43:28.182928 containerd[1501]: time="2025-02-13T19:43:28.182881187Z" level=error msg="ContainerStatus for \"ee96a3aa8b77c6a7e5a93e5c825eb582865422bf84754e524963baf6f9f5dd95\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"ee96a3aa8b77c6a7e5a93e5c825eb582865422bf84754e524963baf6f9f5dd95\": not found" Feb 13 19:43:28.183081 kubelet[1803]: E0213 19:43:28.183045 1803 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"ee96a3aa8b77c6a7e5a93e5c825eb582865422bf84754e524963baf6f9f5dd95\": not found" containerID="ee96a3aa8b77c6a7e5a93e5c825eb582865422bf84754e524963baf6f9f5dd95" Feb 13 19:43:28.183148 kubelet[1803]: I0213 19:43:28.183088 1803 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"ee96a3aa8b77c6a7e5a93e5c825eb582865422bf84754e524963baf6f9f5dd95"} err="failed to get container status 
\"ee96a3aa8b77c6a7e5a93e5c825eb582865422bf84754e524963baf6f9f5dd95\": rpc error: code = NotFound desc = an error occurred when try to find container \"ee96a3aa8b77c6a7e5a93e5c825eb582865422bf84754e524963baf6f9f5dd95\": not found" Feb 13 19:43:28.183148 kubelet[1803]: I0213 19:43:28.183117 1803 scope.go:117] "RemoveContainer" containerID="34eeb12ab7b127789acf72cef257dc130271b6b1f7eb0da50b33208579e576af" Feb 13 19:43:28.183388 containerd[1501]: time="2025-02-13T19:43:28.183322445Z" level=error msg="ContainerStatus for \"34eeb12ab7b127789acf72cef257dc130271b6b1f7eb0da50b33208579e576af\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"34eeb12ab7b127789acf72cef257dc130271b6b1f7eb0da50b33208579e576af\": not found" Feb 13 19:43:28.183497 kubelet[1803]: E0213 19:43:28.183470 1803 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"34eeb12ab7b127789acf72cef257dc130271b6b1f7eb0da50b33208579e576af\": not found" containerID="34eeb12ab7b127789acf72cef257dc130271b6b1f7eb0da50b33208579e576af" Feb 13 19:43:28.183540 kubelet[1803]: I0213 19:43:28.183507 1803 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"34eeb12ab7b127789acf72cef257dc130271b6b1f7eb0da50b33208579e576af"} err="failed to get container status \"34eeb12ab7b127789acf72cef257dc130271b6b1f7eb0da50b33208579e576af\": rpc error: code = NotFound desc = an error occurred when try to find container \"34eeb12ab7b127789acf72cef257dc130271b6b1f7eb0da50b33208579e576af\": not found" Feb 13 19:43:28.183577 kubelet[1803]: I0213 19:43:28.183536 1803 scope.go:117] "RemoveContainer" containerID="8f75bf760536c8082d7737724e7750ec2da2afc2e0d3bac45754600caec569d0" Feb 13 19:43:28.183797 containerd[1501]: time="2025-02-13T19:43:28.183760839Z" level=error msg="ContainerStatus for \"8f75bf760536c8082d7737724e7750ec2da2afc2e0d3bac45754600caec569d0\" failed" 
error="rpc error: code = NotFound desc = an error occurred when try to find container \"8f75bf760536c8082d7737724e7750ec2da2afc2e0d3bac45754600caec569d0\": not found" Feb 13 19:43:28.183939 kubelet[1803]: E0213 19:43:28.183909 1803 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"8f75bf760536c8082d7737724e7750ec2da2afc2e0d3bac45754600caec569d0\": not found" containerID="8f75bf760536c8082d7737724e7750ec2da2afc2e0d3bac45754600caec569d0" Feb 13 19:43:28.183996 kubelet[1803]: I0213 19:43:28.183938 1803 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"8f75bf760536c8082d7737724e7750ec2da2afc2e0d3bac45754600caec569d0"} err="failed to get container status \"8f75bf760536c8082d7737724e7750ec2da2afc2e0d3bac45754600caec569d0\": rpc error: code = NotFound desc = an error occurred when try to find container \"8f75bf760536c8082d7737724e7750ec2da2afc2e0d3bac45754600caec569d0\": not found" Feb 13 19:43:28.183996 kubelet[1803]: I0213 19:43:28.183955 1803 scope.go:117] "RemoveContainer" containerID="058e87bf6297eb273bebbd342fe1902aeff4a0702c36477ab135c2c767401dfa" Feb 13 19:43:28.184152 containerd[1501]: time="2025-02-13T19:43:28.184120745Z" level=error msg="ContainerStatus for \"058e87bf6297eb273bebbd342fe1902aeff4a0702c36477ab135c2c767401dfa\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"058e87bf6297eb273bebbd342fe1902aeff4a0702c36477ab135c2c767401dfa\": not found" Feb 13 19:43:28.184317 kubelet[1803]: E0213 19:43:28.184288 1803 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"058e87bf6297eb273bebbd342fe1902aeff4a0702c36477ab135c2c767401dfa\": not found" containerID="058e87bf6297eb273bebbd342fe1902aeff4a0702c36477ab135c2c767401dfa" Feb 13 19:43:28.184365 kubelet[1803]: I0213 19:43:28.184322 1803 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"058e87bf6297eb273bebbd342fe1902aeff4a0702c36477ab135c2c767401dfa"} err="failed to get container status \"058e87bf6297eb273bebbd342fe1902aeff4a0702c36477ab135c2c767401dfa\": rpc error: code = NotFound desc = an error occurred when try to find container \"058e87bf6297eb273bebbd342fe1902aeff4a0702c36477ab135c2c767401dfa\": not found" Feb 13 19:43:28.350483 systemd[1]: var-lib-kubelet-pods-5d101c34\x2d7c09\x2d44b3\x2db835\x2df33a284d43df-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dqpc62.mount: Deactivated successfully. Feb 13 19:43:28.350667 systemd[1]: var-lib-kubelet-pods-5d101c34\x2d7c09\x2d44b3\x2db835\x2df33a284d43df-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. Feb 13 19:43:28.350787 systemd[1]: var-lib-kubelet-pods-5d101c34\x2d7c09\x2d44b3\x2db835\x2df33a284d43df-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. 
Feb 13 19:43:28.640556 kubelet[1803]: E0213 19:43:28.640464 1803 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 19:43:28.989547 kubelet[1803]: I0213 19:43:28.989490 1803 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5d101c34-7c09-44b3-b835-f33a284d43df" path="/var/lib/kubelet/pods/5d101c34-7c09-44b3-b835-f33a284d43df/volumes" Feb 13 19:43:29.002228 kubelet[1803]: E0213 19:43:29.002174 1803 kubelet.go:3008] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Feb 13 19:43:29.641499 kubelet[1803]: E0213 19:43:29.641405 1803 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 19:43:29.892994 kubelet[1803]: I0213 19:43:29.892808 1803 memory_manager.go:355] "RemoveStaleState removing state" podUID="5d101c34-7c09-44b3-b835-f33a284d43df" containerName="cilium-agent" Feb 13 19:43:29.900225 systemd[1]: Created slice kubepods-besteffort-pod651f966f_f866_443d_90ca_d6768df9b089.slice - libcontainer container kubepods-besteffort-pod651f966f_f866_443d_90ca_d6768df9b089.slice. Feb 13 19:43:29.918149 systemd[1]: Created slice kubepods-burstable-pod21ac59e3_6d80_41b6_a4fc_fb77c194716d.slice - libcontainer container kubepods-burstable-pod21ac59e3_6d80_41b6_a4fc_fb77c194716d.slice. 
Feb 13 19:43:30.025430 kubelet[1803]: I0213 19:43:30.025343 1803 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/21ac59e3-6d80-41b6-a4fc-fb77c194716d-cilium-ipsec-secrets\") pod \"cilium-9th9d\" (UID: \"21ac59e3-6d80-41b6-a4fc-fb77c194716d\") " pod="kube-system/cilium-9th9d" Feb 13 19:43:30.025430 kubelet[1803]: I0213 19:43:30.025391 1803 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/21ac59e3-6d80-41b6-a4fc-fb77c194716d-etc-cni-netd\") pod \"cilium-9th9d\" (UID: \"21ac59e3-6d80-41b6-a4fc-fb77c194716d\") " pod="kube-system/cilium-9th9d" Feb 13 19:43:30.025430 kubelet[1803]: I0213 19:43:30.025410 1803 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/21ac59e3-6d80-41b6-a4fc-fb77c194716d-lib-modules\") pod \"cilium-9th9d\" (UID: \"21ac59e3-6d80-41b6-a4fc-fb77c194716d\") " pod="kube-system/cilium-9th9d" Feb 13 19:43:30.025430 kubelet[1803]: I0213 19:43:30.025427 1803 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/21ac59e3-6d80-41b6-a4fc-fb77c194716d-xtables-lock\") pod \"cilium-9th9d\" (UID: \"21ac59e3-6d80-41b6-a4fc-fb77c194716d\") " pod="kube-system/cilium-9th9d" Feb 13 19:43:30.025430 kubelet[1803]: I0213 19:43:30.025451 1803 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/21ac59e3-6d80-41b6-a4fc-fb77c194716d-hubble-tls\") pod \"cilium-9th9d\" (UID: \"21ac59e3-6d80-41b6-a4fc-fb77c194716d\") " pod="kube-system/cilium-9th9d" Feb 13 19:43:30.025843 kubelet[1803]: I0213 19:43:30.025474 1803 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2n7x2\" (UniqueName: \"kubernetes.io/projected/21ac59e3-6d80-41b6-a4fc-fb77c194716d-kube-api-access-2n7x2\") pod \"cilium-9th9d\" (UID: \"21ac59e3-6d80-41b6-a4fc-fb77c194716d\") " pod="kube-system/cilium-9th9d" Feb 13 19:43:30.025843 kubelet[1803]: I0213 19:43:30.025496 1803 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/21ac59e3-6d80-41b6-a4fc-fb77c194716d-cilium-run\") pod \"cilium-9th9d\" (UID: \"21ac59e3-6d80-41b6-a4fc-fb77c194716d\") " pod="kube-system/cilium-9th9d" Feb 13 19:43:30.025843 kubelet[1803]: I0213 19:43:30.025523 1803 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/21ac59e3-6d80-41b6-a4fc-fb77c194716d-cilium-config-path\") pod \"cilium-9th9d\" (UID: \"21ac59e3-6d80-41b6-a4fc-fb77c194716d\") " pod="kube-system/cilium-9th9d" Feb 13 19:43:30.025843 kubelet[1803]: I0213 19:43:30.025621 1803 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/21ac59e3-6d80-41b6-a4fc-fb77c194716d-bpf-maps\") pod \"cilium-9th9d\" (UID: \"21ac59e3-6d80-41b6-a4fc-fb77c194716d\") " pod="kube-system/cilium-9th9d" Feb 13 19:43:30.025843 kubelet[1803]: I0213 19:43:30.025669 1803 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/21ac59e3-6d80-41b6-a4fc-fb77c194716d-hostproc\") pod \"cilium-9th9d\" (UID: \"21ac59e3-6d80-41b6-a4fc-fb77c194716d\") " pod="kube-system/cilium-9th9d" Feb 13 19:43:30.025843 kubelet[1803]: I0213 19:43:30.025696 1803 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: 
\"kubernetes.io/host-path/21ac59e3-6d80-41b6-a4fc-fb77c194716d-cilium-cgroup\") pod \"cilium-9th9d\" (UID: \"21ac59e3-6d80-41b6-a4fc-fb77c194716d\") " pod="kube-system/cilium-9th9d" Feb 13 19:43:30.026030 kubelet[1803]: I0213 19:43:30.025715 1803 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/21ac59e3-6d80-41b6-a4fc-fb77c194716d-clustermesh-secrets\") pod \"cilium-9th9d\" (UID: \"21ac59e3-6d80-41b6-a4fc-fb77c194716d\") " pod="kube-system/cilium-9th9d" Feb 13 19:43:30.026030 kubelet[1803]: I0213 19:43:30.025741 1803 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/21ac59e3-6d80-41b6-a4fc-fb77c194716d-host-proc-sys-kernel\") pod \"cilium-9th9d\" (UID: \"21ac59e3-6d80-41b6-a4fc-fb77c194716d\") " pod="kube-system/cilium-9th9d" Feb 13 19:43:30.026030 kubelet[1803]: I0213 19:43:30.025763 1803 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/21ac59e3-6d80-41b6-a4fc-fb77c194716d-cni-path\") pod \"cilium-9th9d\" (UID: \"21ac59e3-6d80-41b6-a4fc-fb77c194716d\") " pod="kube-system/cilium-9th9d" Feb 13 19:43:30.026030 kubelet[1803]: I0213 19:43:30.025785 1803 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/21ac59e3-6d80-41b6-a4fc-fb77c194716d-host-proc-sys-net\") pod \"cilium-9th9d\" (UID: \"21ac59e3-6d80-41b6-a4fc-fb77c194716d\") " pod="kube-system/cilium-9th9d" Feb 13 19:43:30.026030 kubelet[1803]: I0213 19:43:30.025808 1803 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/651f966f-f866-443d-90ca-d6768df9b089-cilium-config-path\") pod 
\"cilium-operator-6c4d7847fc-4ww22\" (UID: \"651f966f-f866-443d-90ca-d6768df9b089\") " pod="kube-system/cilium-operator-6c4d7847fc-4ww22" Feb 13 19:43:30.026210 kubelet[1803]: I0213 19:43:30.025829 1803 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9f8ps\" (UniqueName: \"kubernetes.io/projected/651f966f-f866-443d-90ca-d6768df9b089-kube-api-access-9f8ps\") pod \"cilium-operator-6c4d7847fc-4ww22\" (UID: \"651f966f-f866-443d-90ca-d6768df9b089\") " pod="kube-system/cilium-operator-6c4d7847fc-4ww22" Feb 13 19:43:30.204063 kubelet[1803]: E0213 19:43:30.203843 1803 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:43:30.204531 containerd[1501]: time="2025-02-13T19:43:30.204463025Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-6c4d7847fc-4ww22,Uid:651f966f-f866-443d-90ca-d6768df9b089,Namespace:kube-system,Attempt:0,}" Feb 13 19:43:30.229861 containerd[1501]: time="2025-02-13T19:43:30.229742543Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 13 19:43:30.229861 containerd[1501]: time="2025-02-13T19:43:30.229806614Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 13 19:43:30.229861 containerd[1501]: time="2025-02-13T19:43:30.229821982Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 19:43:30.230110 containerd[1501]: time="2025-02-13T19:43:30.229923142Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 19:43:30.231684 kubelet[1803]: E0213 19:43:30.231525 1803 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:43:30.234261 containerd[1501]: time="2025-02-13T19:43:30.232060575Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-9th9d,Uid:21ac59e3-6d80-41b6-a4fc-fb77c194716d,Namespace:kube-system,Attempt:0,}" Feb 13 19:43:30.257596 systemd[1]: Started cri-containerd-ccf049fc7760ac53bc28d55a7c332e07bda14c44669385132fc479e7c8737a9f.scope - libcontainer container ccf049fc7760ac53bc28d55a7c332e07bda14c44669385132fc479e7c8737a9f. Feb 13 19:43:30.265482 containerd[1501]: time="2025-02-13T19:43:30.265163011Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 13 19:43:30.265482 containerd[1501]: time="2025-02-13T19:43:30.265248332Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 13 19:43:30.265482 containerd[1501]: time="2025-02-13T19:43:30.265263921Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 19:43:30.265482 containerd[1501]: time="2025-02-13T19:43:30.265360793Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 19:43:30.291586 systemd[1]: Started cri-containerd-397a7858a53ab7af23e4ee131794486642a0da25a17ab3786d0d1d816f363447.scope - libcontainer container 397a7858a53ab7af23e4ee131794486642a0da25a17ab3786d0d1d816f363447. 
Feb 13 19:43:30.308183 containerd[1501]: time="2025-02-13T19:43:30.308083049Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-6c4d7847fc-4ww22,Uid:651f966f-f866-443d-90ca-d6768df9b089,Namespace:kube-system,Attempt:0,} returns sandbox id \"ccf049fc7760ac53bc28d55a7c332e07bda14c44669385132fc479e7c8737a9f\"" Feb 13 19:43:30.309075 kubelet[1803]: E0213 19:43:30.309034 1803 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:43:30.310524 containerd[1501]: time="2025-02-13T19:43:30.310474798Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\"" Feb 13 19:43:30.323811 containerd[1501]: time="2025-02-13T19:43:30.323751925Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-9th9d,Uid:21ac59e3-6d80-41b6-a4fc-fb77c194716d,Namespace:kube-system,Attempt:0,} returns sandbox id \"397a7858a53ab7af23e4ee131794486642a0da25a17ab3786d0d1d816f363447\"" Feb 13 19:43:30.324474 kubelet[1803]: E0213 19:43:30.324447 1803 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:43:30.326817 containerd[1501]: time="2025-02-13T19:43:30.326781763Z" level=info msg="CreateContainer within sandbox \"397a7858a53ab7af23e4ee131794486642a0da25a17ab3786d0d1d816f363447\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Feb 13 19:43:30.345570 containerd[1501]: time="2025-02-13T19:43:30.345502709Z" level=info msg="CreateContainer within sandbox \"397a7858a53ab7af23e4ee131794486642a0da25a17ab3786d0d1d816f363447\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"8d937ccb92729466c5b7bf03e4e0b0d50484d3809be17cb78edfb5047f2fb08a\"" Feb 13 19:43:30.346026 containerd[1501]: 
time="2025-02-13T19:43:30.345979805Z" level=info msg="StartContainer for \"8d937ccb92729466c5b7bf03e4e0b0d50484d3809be17cb78edfb5047f2fb08a\"" Feb 13 19:43:30.375402 systemd[1]: Started cri-containerd-8d937ccb92729466c5b7bf03e4e0b0d50484d3809be17cb78edfb5047f2fb08a.scope - libcontainer container 8d937ccb92729466c5b7bf03e4e0b0d50484d3809be17cb78edfb5047f2fb08a. Feb 13 19:43:30.387546 kubelet[1803]: I0213 19:43:30.387401 1803 setters.go:602] "Node became not ready" node="10.0.0.27" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-02-13T19:43:30Z","lastTransitionTime":"2025-02-13T19:43:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"} Feb 13 19:43:30.413960 containerd[1501]: time="2025-02-13T19:43:30.413872024Z" level=info msg="StartContainer for \"8d937ccb92729466c5b7bf03e4e0b0d50484d3809be17cb78edfb5047f2fb08a\" returns successfully" Feb 13 19:43:30.422915 systemd[1]: cri-containerd-8d937ccb92729466c5b7bf03e4e0b0d50484d3809be17cb78edfb5047f2fb08a.scope: Deactivated successfully. 
Feb 13 19:43:30.467622 containerd[1501]: time="2025-02-13T19:43:30.467209409Z" level=info msg="shim disconnected" id=8d937ccb92729466c5b7bf03e4e0b0d50484d3809be17cb78edfb5047f2fb08a namespace=k8s.io Feb 13 19:43:30.467622 containerd[1501]: time="2025-02-13T19:43:30.467302333Z" level=warning msg="cleaning up after shim disconnected" id=8d937ccb92729466c5b7bf03e4e0b0d50484d3809be17cb78edfb5047f2fb08a namespace=k8s.io Feb 13 19:43:30.467622 containerd[1501]: time="2025-02-13T19:43:30.467314897Z" level=info msg="cleaning up dead shim" namespace=k8s.io Feb 13 19:43:30.642222 kubelet[1803]: E0213 19:43:30.642157 1803 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 19:43:31.158778 kubelet[1803]: E0213 19:43:31.158718 1803 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:43:31.160661 containerd[1501]: time="2025-02-13T19:43:31.160616929Z" level=info msg="CreateContainer within sandbox \"397a7858a53ab7af23e4ee131794486642a0da25a17ab3786d0d1d816f363447\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Feb 13 19:43:31.502502 containerd[1501]: time="2025-02-13T19:43:31.502395624Z" level=info msg="CreateContainer within sandbox \"397a7858a53ab7af23e4ee131794486642a0da25a17ab3786d0d1d816f363447\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"c771cc570ee0a3d6182b339d199c86514e2e4773ce0c71028bcc8d73089e2f3a\"" Feb 13 19:43:31.503218 containerd[1501]: time="2025-02-13T19:43:31.503172252Z" level=info msg="StartContainer for \"c771cc570ee0a3d6182b339d199c86514e2e4773ce0c71028bcc8d73089e2f3a\"" Feb 13 19:43:31.537410 systemd[1]: Started cri-containerd-c771cc570ee0a3d6182b339d199c86514e2e4773ce0c71028bcc8d73089e2f3a.scope - libcontainer container c771cc570ee0a3d6182b339d199c86514e2e4773ce0c71028bcc8d73089e2f3a. 
Feb 13 19:43:31.582213 systemd[1]: cri-containerd-c771cc570ee0a3d6182b339d199c86514e2e4773ce0c71028bcc8d73089e2f3a.scope: Deactivated successfully. Feb 13 19:43:31.596507 containerd[1501]: time="2025-02-13T19:43:31.596351170Z" level=info msg="StartContainer for \"c771cc570ee0a3d6182b339d199c86514e2e4773ce0c71028bcc8d73089e2f3a\" returns successfully" Feb 13 19:43:31.642674 kubelet[1803]: E0213 19:43:31.642637 1803 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 19:43:31.687082 containerd[1501]: time="2025-02-13T19:43:31.687006379Z" level=info msg="shim disconnected" id=c771cc570ee0a3d6182b339d199c86514e2e4773ce0c71028bcc8d73089e2f3a namespace=k8s.io Feb 13 19:43:31.687082 containerd[1501]: time="2025-02-13T19:43:31.687079286Z" level=warning msg="cleaning up after shim disconnected" id=c771cc570ee0a3d6182b339d199c86514e2e4773ce0c71028bcc8d73089e2f3a namespace=k8s.io Feb 13 19:43:31.687269 containerd[1501]: time="2025-02-13T19:43:31.687092061Z" level=info msg="cleaning up dead shim" namespace=k8s.io Feb 13 19:43:32.132643 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-c771cc570ee0a3d6182b339d199c86514e2e4773ce0c71028bcc8d73089e2f3a-rootfs.mount: Deactivated successfully. 
Feb 13 19:43:32.164830 kubelet[1803]: E0213 19:43:32.164783 1803 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 13 19:43:32.166850 containerd[1501]: time="2025-02-13T19:43:32.166798956Z" level=info msg="CreateContainer within sandbox \"397a7858a53ab7af23e4ee131794486642a0da25a17ab3786d0d1d816f363447\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}"
Feb 13 19:43:32.642916 kubelet[1803]: E0213 19:43:32.642788 1803 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 19:43:33.236408 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2393520224.mount: Deactivated successfully.
Feb 13 19:43:33.642849 containerd[1501]: time="2025-02-13T19:43:33.642745388Z" level=info msg="CreateContainer within sandbox \"397a7858a53ab7af23e4ee131794486642a0da25a17ab3786d0d1d816f363447\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"0626438740aad010cd7e02bee5065d490daa492e86421890b464d0947bb52633\""
Feb 13 19:43:33.643569 kubelet[1803]: E0213 19:43:33.643140 1803 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 19:43:33.643882 containerd[1501]: time="2025-02-13T19:43:33.643623517Z" level=info msg="StartContainer for \"0626438740aad010cd7e02bee5065d490daa492e86421890b464d0947bb52633\""
Feb 13 19:43:33.658312 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1340301684.mount: Deactivated successfully.
Feb 13 19:43:33.687532 systemd[1]: Started cri-containerd-0626438740aad010cd7e02bee5065d490daa492e86421890b464d0947bb52633.scope - libcontainer container 0626438740aad010cd7e02bee5065d490daa492e86421890b464d0947bb52633.
Feb 13 19:43:33.725210 systemd[1]: cri-containerd-0626438740aad010cd7e02bee5065d490daa492e86421890b464d0947bb52633.scope: Deactivated successfully.
Feb 13 19:43:33.828893 containerd[1501]: time="2025-02-13T19:43:33.828784275Z" level=info msg="StartContainer for \"0626438740aad010cd7e02bee5065d490daa492e86421890b464d0947bb52633\" returns successfully"
Feb 13 19:43:33.876663 containerd[1501]: time="2025-02-13T19:43:33.876559767Z" level=info msg="shim disconnected" id=0626438740aad010cd7e02bee5065d490daa492e86421890b464d0947bb52633 namespace=k8s.io
Feb 13 19:43:33.876663 containerd[1501]: time="2025-02-13T19:43:33.876633726Z" level=warning msg="cleaning up after shim disconnected" id=0626438740aad010cd7e02bee5065d490daa492e86421890b464d0947bb52633 namespace=k8s.io
Feb 13 19:43:33.876663 containerd[1501]: time="2025-02-13T19:43:33.876645688Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Feb 13 19:43:34.002952 kubelet[1803]: E0213 19:43:34.002899 1803 kubelet.go:3008] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Feb 13 19:43:34.170332 kubelet[1803]: E0213 19:43:34.170292 1803 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 13 19:43:34.172163 containerd[1501]: time="2025-02-13T19:43:34.172118507Z" level=info msg="CreateContainer within sandbox \"397a7858a53ab7af23e4ee131794486642a0da25a17ab3786d0d1d816f363447\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}"
Feb 13 19:43:34.232612 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-0626438740aad010cd7e02bee5065d490daa492e86421890b464d0947bb52633-rootfs.mount: Deactivated successfully.
Feb 13 19:43:34.644179 kubelet[1803]: E0213 19:43:34.644112 1803 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 19:43:34.803836 containerd[1501]: time="2025-02-13T19:43:34.803729374Z" level=info msg="CreateContainer within sandbox \"397a7858a53ab7af23e4ee131794486642a0da25a17ab3786d0d1d816f363447\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"df6ada77f923cafccd67f1a0d4ea2637e18e55b991bb058a6856e2c02d31f5b7\""
Feb 13 19:43:34.804494 containerd[1501]: time="2025-02-13T19:43:34.804463411Z" level=info msg="StartContainer for \"df6ada77f923cafccd67f1a0d4ea2637e18e55b991bb058a6856e2c02d31f5b7\""
Feb 13 19:43:34.835440 containerd[1501]: time="2025-02-13T19:43:34.835388492Z" level=info msg="ImageCreate event name:\"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Feb 13 19:43:34.837279 containerd[1501]: time="2025-02-13T19:43:34.837220460Z" level=info msg="stop pulling image quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e: active requests=0, bytes read=18904197"
Feb 13 19:43:34.839753 containerd[1501]: time="2025-02-13T19:43:34.839329509Z" level=info msg="ImageCreate event name:\"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Feb 13 19:43:34.839515 systemd[1]: Started cri-containerd-df6ada77f923cafccd67f1a0d4ea2637e18e55b991bb058a6856e2c02d31f5b7.scope - libcontainer container df6ada77f923cafccd67f1a0d4ea2637e18e55b991bb058a6856e2c02d31f5b7.
Feb 13 19:43:34.842121 containerd[1501]: time="2025-02-13T19:43:34.841630617Z" level=info msg="Pulled image \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" with image id \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\", repo tag \"\", repo digest \"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\", size \"18897442\" in 4.531097499s"
Feb 13 19:43:34.842121 containerd[1501]: time="2025-02-13T19:43:34.841696612Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" returns image reference \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\""
Feb 13 19:43:34.846334 containerd[1501]: time="2025-02-13T19:43:34.846190556Z" level=info msg="CreateContainer within sandbox \"ccf049fc7760ac53bc28d55a7c332e07bda14c44669385132fc479e7c8737a9f\" for container &ContainerMetadata{Name:cilium-operator,Attempt:0,}"
Feb 13 19:43:34.873593 containerd[1501]: time="2025-02-13T19:43:34.873524437Z" level=info msg="CreateContainer within sandbox \"ccf049fc7760ac53bc28d55a7c332e07bda14c44669385132fc479e7c8737a9f\" for &ContainerMetadata{Name:cilium-operator,Attempt:0,} returns container id \"40b6fc1a65efeea883bda3e7360612e2f8f35fa6046d80ab014631ea1e7c60b2\""
Feb 13 19:43:34.874251 containerd[1501]: time="2025-02-13T19:43:34.874210274Z" level=info msg="StartContainer for \"40b6fc1a65efeea883bda3e7360612e2f8f35fa6046d80ab014631ea1e7c60b2\""
Feb 13 19:43:34.877642 systemd[1]: cri-containerd-df6ada77f923cafccd67f1a0d4ea2637e18e55b991bb058a6856e2c02d31f5b7.scope: Deactivated successfully.
Feb 13 19:43:34.880928 containerd[1501]: time="2025-02-13T19:43:34.880896323Z" level=info msg="StartContainer for \"df6ada77f923cafccd67f1a0d4ea2637e18e55b991bb058a6856e2c02d31f5b7\" returns successfully"
Feb 13 19:43:34.910426 systemd[1]: Started cri-containerd-40b6fc1a65efeea883bda3e7360612e2f8f35fa6046d80ab014631ea1e7c60b2.scope - libcontainer container 40b6fc1a65efeea883bda3e7360612e2f8f35fa6046d80ab014631ea1e7c60b2.
Feb 13 19:43:34.961530 containerd[1501]: time="2025-02-13T19:43:34.961419251Z" level=info msg="StartContainer for \"40b6fc1a65efeea883bda3e7360612e2f8f35fa6046d80ab014631ea1e7c60b2\" returns successfully"
Feb 13 19:43:34.963911 containerd[1501]: time="2025-02-13T19:43:34.962510368Z" level=info msg="shim disconnected" id=df6ada77f923cafccd67f1a0d4ea2637e18e55b991bb058a6856e2c02d31f5b7 namespace=k8s.io
Feb 13 19:43:34.963911 containerd[1501]: time="2025-02-13T19:43:34.962589106Z" level=warning msg="cleaning up after shim disconnected" id=df6ada77f923cafccd67f1a0d4ea2637e18e55b991bb058a6856e2c02d31f5b7 namespace=k8s.io
Feb 13 19:43:34.963911 containerd[1501]: time="2025-02-13T19:43:34.962599926Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Feb 13 19:43:35.174952 kubelet[1803]: E0213 19:43:35.174798 1803 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 13 19:43:35.176128 kubelet[1803]: E0213 19:43:35.176092 1803 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 13 19:43:35.176622 containerd[1501]: time="2025-02-13T19:43:35.176585622Z" level=info msg="CreateContainer within sandbox \"397a7858a53ab7af23e4ee131794486642a0da25a17ab3786d0d1d816f363447\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}"
Feb 13 19:43:35.203350 kubelet[1803]: I0213 19:43:35.203220 1803 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-operator-6c4d7847fc-4ww22" podStartSLOduration=1.670360449 podStartE2EDuration="6.203190872s" podCreationTimestamp="2025-02-13 19:43:29 +0000 UTC" firstStartedPulling="2025-02-13 19:43:30.310043839 +0000 UTC m=+72.139024905" lastFinishedPulling="2025-02-13 19:43:34.842874262 +0000 UTC m=+76.671855328" observedRunningTime="2025-02-13 19:43:35.202996337 +0000 UTC m=+77.031977403" watchObservedRunningTime="2025-02-13 19:43:35.203190872 +0000 UTC m=+77.032171949"
Feb 13 19:43:35.219833 containerd[1501]: time="2025-02-13T19:43:35.219756331Z" level=info msg="CreateContainer within sandbox \"397a7858a53ab7af23e4ee131794486642a0da25a17ab3786d0d1d816f363447\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"d778976b1a990d689a683560f0b6fae8f67be52cbe24aee4588110ad74231559\""
Feb 13 19:43:35.220505 containerd[1501]: time="2025-02-13T19:43:35.220461405Z" level=info msg="StartContainer for \"d778976b1a990d689a683560f0b6fae8f67be52cbe24aee4588110ad74231559\""
Feb 13 19:43:35.234485 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-df6ada77f923cafccd67f1a0d4ea2637e18e55b991bb058a6856e2c02d31f5b7-rootfs.mount: Deactivated successfully.
Feb 13 19:43:35.261473 systemd[1]: Started cri-containerd-d778976b1a990d689a683560f0b6fae8f67be52cbe24aee4588110ad74231559.scope - libcontainer container d778976b1a990d689a683560f0b6fae8f67be52cbe24aee4588110ad74231559.
Feb 13 19:43:35.294266 containerd[1501]: time="2025-02-13T19:43:35.294204439Z" level=info msg="StartContainer for \"d778976b1a990d689a683560f0b6fae8f67be52cbe24aee4588110ad74231559\" returns successfully"
Feb 13 19:43:35.645017 kubelet[1803]: E0213 19:43:35.644957 1803 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 19:43:35.745279 kernel: alg: No test for seqiv(rfc4106(gcm(aes))) (seqiv(rfc4106-gcm-aesni))
Feb 13 19:43:36.182112 kubelet[1803]: E0213 19:43:36.182042 1803 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 13 19:43:36.182112 kubelet[1803]: E0213 19:43:36.182092 1803 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 13 19:43:36.355167 kubelet[1803]: I0213 19:43:36.355085 1803 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-9th9d" podStartSLOduration=7.355063303 podStartE2EDuration="7.355063303s" podCreationTimestamp="2025-02-13 19:43:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-02-13 19:43:36.354898374 +0000 UTC m=+78.183879450" watchObservedRunningTime="2025-02-13 19:43:36.355063303 +0000 UTC m=+78.184044369"
Feb 13 19:43:36.645380 kubelet[1803]: E0213 19:43:36.645304 1803 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 19:43:37.183924 kubelet[1803]: E0213 19:43:37.183883 1803 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 13 19:43:37.646457 kubelet[1803]: E0213 19:43:37.646404 1803 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 19:43:38.590809 kubelet[1803]: E0213 19:43:38.590745 1803 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 19:43:38.647114 kubelet[1803]: E0213 19:43:38.647022 1803 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 19:43:39.504798 systemd-networkd[1417]: lxc_health: Link UP
Feb 13 19:43:39.518365 systemd-networkd[1417]: lxc_health: Gained carrier
Feb 13 19:43:39.647976 kubelet[1803]: E0213 19:43:39.647885 1803 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 19:43:40.233754 kubelet[1803]: E0213 19:43:40.233693 1803 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 13 19:43:40.648301 kubelet[1803]: E0213 19:43:40.648209 1803 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 19:43:40.949573 systemd-networkd[1417]: lxc_health: Gained IPv6LL
Feb 13 19:43:41.194506 kubelet[1803]: E0213 19:43:41.194446 1803 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 13 19:43:41.292337 systemd[1]: run-containerd-runc-k8s.io-d778976b1a990d689a683560f0b6fae8f67be52cbe24aee4588110ad74231559-runc.14guYY.mount: Deactivated successfully.
Feb 13 19:43:41.648635 kubelet[1803]: E0213 19:43:41.648431 1803 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 19:43:42.196494 kubelet[1803]: E0213 19:43:42.196452 1803 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 13 19:43:42.649267 kubelet[1803]: E0213 19:43:42.649127 1803 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 19:43:43.649966 kubelet[1803]: E0213 19:43:43.649895 1803 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 19:43:43.986636 kubelet[1803]: E0213 19:43:43.986465 1803 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 13 19:43:44.650150 kubelet[1803]: E0213 19:43:44.650075 1803 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 19:43:45.650573 kubelet[1803]: E0213 19:43:45.650511 1803 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 19:43:46.651041 kubelet[1803]: E0213 19:43:46.650956 1803 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 19:43:47.651890 kubelet[1803]: E0213 19:43:47.651812 1803 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"